For the past several years, improvements in smartphone cameras have followed the “more megapixels” mantra. Samsung’s Galaxy S5 is up from 13 to 16 megapixels; Sony’s new Xperia Z2 packs a 20.7-megapixel Exmor model; and Nokia’s Lumia 1020 with PureView is a 41-megapixel monster. However, Google’s recent sensor-laden smartphone prototype, Project Tango, could herald a new direction.
Though Mountain View is focused on 3D mapping, so-called depth camera tech could dramatically improve all the pictures you take with your smartphone. By using two lenses with different focal lengths, for example, you could zoom in on subjects with quality that rivals bulky optical zooms. It could also eliminate a number of other shortcomings without adding an awkward hump like the one seen on the Lumia 1020. You could soon have much better light sensitivity, less noise and depth of field control that rivals a DSLR. The benefits are clear, but Google is not alone in its pursuit. The battle for a better smartphone camera is on, and you could be the one to reap the rewards.
Project Tango: 3D mapping first
Though Google’s Project Tango has shone a bright light on multi-sensor technology, the hardware on its prototype handset (shown above) was actually developed by a company called Movidius. Like a mobile Kinect, it consists of a high-res camera, a low-res tracking sensor, an infrared depth scanner and a CPU. The Myriad 1 brain processes all the inputs at teraflop speeds using several hundred milliwatts of power. In a demo video from last year, Movidius showed off various applications like VR motion tracking, post-capture refocusing (à la Lytro), computational zoom and mobile 3D scanning.
For its purposes, Google has keyed in on depth scanning with Project Tango. That would enable anyone with a smartphone, or a wearable like Google Glass, to map their indoor surroundings using nothing but the device itself. Obviously, the search giant has a strong commercial interest in that function, given how tightly Maps is woven into its search business. As such, its Advanced Technology and Projects (ATAP) group (the part of Motorola it didn't sell to Lenovo) created a prototype phone equipped with Movidius' hardware and an SDK for developers. It's hoping developers will come up with innovative mapping and location functions that could one day become Android apps.
However, one overlooked aspect of the Project Tango coverage has been the technology's potential to vastly improve smartphone photography. Thanks to onboard sensors and enormous, imaging-specific horsepower, Movidius' tech could sort out some of the annoying limitations of taking snaps with your phone. One demo, for instance, shows how you could zoom into a scene without the considerable pixelation normally seen on a smartphone. In another, selective Lytro-like focusing was applied to a photo after it was taken, but with more precision thanks to the depth sensors. Presumably, developers could tap into those features as well as the 3D mapping to create apps with an immediate, tangible benefit to consumers. Whether Google's SDK will permit such development remains to be seen.
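As a rough illustration of how depth data enables that kind of post-capture refocusing, here's a minimal Python sketch that blurs each pixel in proportion to its distance from a chosen focal plane. All of the names and numbers are invented for illustration; this is not Movidius' actual algorithm.

```python
# Hypothetical sketch of depth-guided post-capture refocus: pixels far
# from the chosen focal plane get a larger blur radius, mimicking the
# shallow depth of field of a large-sensor camera.

def blur_radius(depth_m, focus_m, max_radius_px=8, scale=4.0):
    """Map a pixel's depth (meters) to a synthetic blur radius.

    Pixels at the focal plane stay sharp (radius 0); blur grows with
    relative distance from the plane, capped at max_radius_px.
    """
    rel = abs(depth_m - focus_m) / focus_m  # relative defocus
    return min(max_radius_px, round(scale * rel))

def refocus_map(depth_map, focus_m):
    """Compute a per-pixel blur-radius map for a 2D depth map."""
    return [[blur_radius(d, focus_m) for d in row] for row in depth_map]

depth = [[1.0, 1.0, 4.0],
         [1.0, 2.0, 4.0]]  # meters, as an IR depth sensor might report
print(refocus_map(depth, focus_m=1.0))  # [[0, 0, 8], [0, 4, 8]]
```

Refocusing "after the fact" then amounts to re-running this mapping with a different `focus_m` and re-blurring, which is why a stored depth map is all the hardware needs to capture.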
Pelican Imaging: 16 lenses, one camera
Qualcomm-backed Pelican Imaging takes a completely different approach to depth sensing. It has developed an array of 16 lenses in a 4 x 4 grid, each of which captures only red, green or blue light, with their output combined into 8-megapixel images. The process reduces noise by eliminating the crosstalk between pixels that regular CMOS sensors produce. Offset lenses allow depth information to be captured passively (unlike the infrared-based Movidius system), enabling a variety of functions and effects. For example, Pelican can perform the same selective-focus trick as Movidius after a picture is taken. It could also deliver clearer images in low light, and even 3D image stabilization for smoother video and reduced motion blur. The company has also shown off more dramatic effects, like using depth info to isolate a subject and place it into another shot.
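That passive depth capture rests on the textbook stereo relation: an object's apparent shift (disparity) between two offset lenses shrinks as it moves farther away. A minimal Python sketch, with illustrative numbers rather than anything from Pelican's proprietary calibration:

```python
# Textbook passive depth from offset lenses:
#   depth = focal_length * baseline / disparity
# The constants below are invented for illustration.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth (meters) from the pixel disparity between two offset lenses."""
    if disparity_px <= 0:
        return float("inf")  # no measurable shift: effectively at infinity
    return focal_px * baseline_m / disparity_px

# A subject that shifts 20 px between lenses 8 mm apart,
# imaged with a focal length of 2500 px:
print(depth_from_disparity(20, focal_px=2500, baseline_m=0.008))  # 1.0 (meters)
```

Because the method needs no emitted light, it works outdoors and draws little power, but it degrades on textureless surfaces where disparity can't be matched, which is one trade-off versus an active infrared system like Movidius'.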
Last year, Pelican told us that its imaging tech would start appearing in smartphones sometime in 2014. It had received a huge vote of confidence (and cash) from Nokia, the smartphone maker leading the charge on camera technology with PureView. However, we met with Pelican here at MWC 2014 and it has now backtracked, saying its sensors won't be installed in any handsets until at least 2015. It's holding out for a deal with a major smartphone manufacturer rather than settling for contracts with smaller OEMs. We can imagine, however, that any large company would be wary of risking a new handset on unproven technology unless it's clearly an improvement on the status quo. Though Pelican's sensor is certainly interesting, we're not sure it can make that claim yet.
Corephotonics: Replacing the point-and-shoot
Israeli company Corephotonics is another Qualcomm-backed camera sensor player. Unlike Movidius, it's focused squarely on straight-up camera technology and sees depth sensing as mere window dressing. In fact, during MWC 2014, the company told us that its goal is nothing less than bringing smartphone cameras on par with decent-quality compact zoom models. To do that, it has taken a different tack than Movidius and Pelican, using two high-resolution cameras with different focal lengths. The prototype we saw had a pair of 13-megapixel imagers, one with a standard wide-angle lens and the other with a 3x telephoto. By comparing pixels between the two, its software can enable zooming with optical-like quality for video and stills. The image above, for instance, compares its results with those of a 5x digital zoom. It also brings other advantages of dedicated cameras, like reduced noise, better low-light performance and shallower depth of field.
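To get a feel for how a wide/tele pair covers intermediate zoom levels, here's a hypothetical Python sketch of the sensor-selection step: crop the wide sensor up to the tele's field of view, then hand off to the tele. Real pipelines like Corephotonics' also fuse pixels from both sensors, which this deliberately omits.

```python
# Hypothetical sensor-selection logic for a dual (1x wide + 3x tele)
# computational zoom. Names and the exact handoff point are invented.

WIDE_ZOOM, TELE_ZOOM = 1.0, 3.0

def pick_sensor(zoom):
    """Return (sensor, crop_fraction) for a requested zoom factor.

    crop_fraction is the linear fraction of the chosen sensor's frame
    that is kept before upscaling back to full output resolution.
    """
    if zoom <= WIDE_ZOOM:
        return "wide", 1.0                # full wide frame
    if zoom < TELE_ZOOM:
        return "wide", WIDE_ZOOM / zoom   # digital crop of the wide sensor
    return "tele", TELE_ZOOM / zoom       # tele frame, cropped beyond 3x

print(pick_sensor(1.0))  # ('wide', 1.0)
print(pick_sensor(2.0))  # ('wide', 0.5)
print(pick_sensor(3.0))  # ('tele', 1.0)
print(pick_sensor(6.0))  # ('tele', 0.5)
```

The payoff is that at 3x the output comes straight from native telephoto optics, whereas a single-sensor phone would already have thrown away two-thirds of its linear resolution by cropping.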
Though the module looked like it might line up with the two-camera-hole HTC M8 leak, the company denied any connection. A spokesperson did say, however, that its technology is being explored by various smartphone companies, and added that there are no downsides compared to current phone cameras. Indeed, as we saw at its Mobile World Congress booth, the sensors delivered not only sharp zoomed stills, but smooth zoomed-in video as well, a huge improvement over current shooters. Though you could argue that Samsung's Galaxy Camera and other optical zoom models are better, Corephotonics' module is tiny enough to slip into devices without substantial changes. That would eliminate the dreaded PureView hump and let makers retain the slim profiles consumers have grown accustomed to.
Another factor Corephotonics feels confident about is power consumption. Its passive tech doesn't draw much more power than a regular camera, and the company told us that any technology using active depth sensors, like Movidius' module, is bound to drain a handset quicker. It also felt its tech had an edge on Pelican's multi-sensor array, since it supports higher resolutions (Pelican claims its modules produce 8-megapixel images). Corephotonics also believes that Google's Project Tango could lead to SDKs that let app makers work with depth info, something it could capitalize on.
The image is everything
As it dawns on consumers that jamming more pixels onto a small sensor doesn't necessarily make their pictures better, camera companies are reviewing their options. Depth cameras look mighty tempting, especially with companies like Google, Qualcomm and Nokia behind them. But the biggest potential lies simply in making your pictures better. A lack of zooming capability is a serious shortcoming, as are poor low-light performance and grainy images. Adding megapixels or enlarging sensors can help a bit, but those tweaks add unwanted bulk and expense. If those issues are put to bed, people may finally chuck their point-and-shoot cameras once and for all. That's the kind of revolution that could make or break this technology; any other benefits, like Google's vaunted 3D mapping, are just icing on the cake.