Optimization was the name of the game for the Obama Digital team. We optimized just about everything, from web pages to emails. Overall, we executed about 500 A/B tests on our web pages in a 20-month period, which increased donation conversions by 49% and sign-up conversions by 161%. As you might imagine, this yielded some fascinating findings on how user behavior is influenced by variables like design, copy, usability, imagery, and page speed.
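For readers unfamiliar with the mechanics, an A/B test of this kind boils down to comparing conversion rates between a control page and a variant and checking whether the difference is larger than chance would explain. Here is a minimal sketch using a two-proportion z-test; the function name and all the numbers are hypothetical, not the team's actual data or tooling:

```python
from math import sqrt

def conversion_lift(control_visitors, control_conversions,
                    variant_visitors, variant_conversions):
    """Compare two page variants with a two-proportion z-test.

    Returns (relative_lift, z_score). A z-score above ~1.96 is
    significant at the 95% confidence level.
    """
    p_c = control_conversions / control_visitors
    p_v = variant_conversions / variant_visitors
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (control_conversions + variant_conversions) / (
        control_visitors + variant_visitors)
    se = sqrt(p_pool * (1 - p_pool) *
              (1 / control_visitors + 1 / variant_visitors))
    lift = (p_v - p_c) / p_c
    z = (p_v - p_c) / se
    return lift, z

# Hypothetical example: 10,000 visitors per variant,
# 3.0% vs. 3.6% conversion.
lift, z = conversion_lift(10000, 300, 10000, 360)
```

With these made-up figures the variant shows a 20% relative lift, and the z-score clears the 95% significance threshold, so the variant would win the test.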
On Friday, I wrote about the Throwable Panoramic Ball Camera, a device that takes panoramic images when you simply toss it into the air. One use case that came to mind was capturing HDR sphere images for realistically placing 3D models in real-world scenes. If Kevin Karsch’s research moves into the public space, using HDR images for this purpose may soon be a thing of the past.
Kevin Karsch is a Computer Science PhD student at the University of Illinois at Urbana-Champaign who is currently researching computer graphics and computer vision. What makes Kevin’s research so interesting is what he calls “physically grounded photo editing.” From the description:
Current image editing software only allows 2D manipulations with no regard to the high level spatial information that is present in a given scene, and 3D modeling tools are sometimes complex and tedious for a novice user. Our goal is to extract 3D scene information from single images to allow for seamless object insertion, removal, and relocation. This process can be broken into three somewhat independent phases: luminaire inference, perspective estimation (depth, occlusion, camera parameters), and texture replacement. We are working on developing novel solutions to each of these phases, in hopes of creating a new class of physically-aware image editors.
In other words: the software aims to allow people to easily insert 3D objects into existing 2D photographs. Kevin has posted the following video on his Vimeo page, describing the process and results with examples:
Found via PhotoWeeklyOnline INC
Dustin Curtis provides his reasoning as to why Apple sticks with a 3.5” screen:
Touching the upper right corner of the screen on the Galaxy S II using one hand, with its 4.27-inch screen, while you’re walking down the street looking at Google Maps, is extremely difficult and frustrating. I pulled out my iPhone 4 to do a quick test, and it turns out that when you hold the iPhone in your left hand and articulate your thumb, you can reach almost exactly to the other side of the screen. This means it’s easy to touch any area of the screen while holding the phone in one hand, with your thumb. It is almost impossible to do this on the Galaxy S II.
Makes sense to me. Apple doesn’t compete on a feature list; it competes on experience.