Adding 3D Objects to 2D Images With Ease

On Friday, I wrote about the Throwable Panoramic Ball Camera, a device that captures panoramic images when tossed into the air. One use case that came to mind was capturing HDR sphere images for realistically lighting and placing 3D models in real-world scenes. If Kevin Karsch’s research makes its way into publicly available tools, using HDR images for this purpose may soon be a thing of the past.

Kevin Karsch is a Computer Science PhD student at the University of Illinois at Urbana-Champaign whose research focuses on computer graphics and computer vision. What makes Kevin’s work so interesting is what he calls “physically grounded photo editing.” From the project description:

Current image editing software only allows 2D manipulations with no regard to the high level spatial information that is present in a given scene, and 3D modeling tools are sometimes complex and tedious for a novice user. Our goal is to extract 3D scene information from single images to allow for seamless object insertion, removal, and relocation. This process can be broken into three somewhat independent phases: luminaire inference, perspective estimation (depth, occlusion, camera parameters), and texture replacement. We are working on developing novel solutions to each of these phases, in hopes of creating a new class of physically-aware image editors.
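To make that three-phase pipeline a little more concrete, here is a minimal sketch of how the phases might compose in code. Everything in it is hypothetical: the function names, data structure, and placeholder return values are my own illustration of the description above, not Karsch’s actual implementation.

```python
# Hypothetical sketch of the three-phase pipeline described above.
# None of these names come from Karsch's code; they only illustrate
# how luminaire inference, perspective estimation, and compositing
# might fit together.

from dataclasses import dataclass

@dataclass
class SceneEstimate:
    lights: list       # inferred luminaires (position, intensity)
    depth: object      # per-pixel depth estimate
    occlusion: object  # occlusion boundaries
    camera: dict       # estimated camera parameters

def infer_luminaires(image):
    """Phase 1: estimate light sources from the single input image."""
    return [{"position": (0.0, 3.0, 1.0), "intensity": 1.0}]  # placeholder

def estimate_perspective(image):
    """Phase 2: recover depth, occlusion, and camera parameters."""
    return {"depth": None, "occlusion": None,
            "camera": {"focal_length_px": 800.0}}  # placeholder

def composite(image, model, scene):
    """Phase 3: render the 3D model under the inferred lighting and
    geometry, then blend it back into the photograph."""
    return image  # placeholder: a real system would re-render here

def insert_object(image, model):
    """Run all three phases to insert a 3D model into a 2D photo."""
    lights = infer_luminaires(image)
    geo = estimate_perspective(image)
    scene = SceneEstimate(lights=lights, depth=geo["depth"],
                          occlusion=geo["occlusion"], camera=geo["camera"])
    return composite(image, model, scene)
```

The point of the sketch is simply that the phases are, as the description says, somewhat independent: each consumes the single input image on its own, and only the final compositing step needs all of their outputs together.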

In other words, the software aims to let people easily insert 3D objects into existing 2D photographs. Kevin has posted a video on his Vimeo page describing the process and showing example results.

Found via PhotoWeeklyOnline INC