Presented at CVPR this week, the camera, designed by Gordon Wetzstein, Donald Dansereau and colleagues at Stanford University and the University of California San Diego, is the first single-lens, wide-field-of-view light field camera intended to improve the vision of robots.

Assistant Prof. Gordon Wetzstein and postdoctoral scholar Donald Dansereau with a prototype of the monocentric camera that captured the first single-lens panoramic light fields. Image credit: L.A. Cicero

The cameras currently used by robots are not especially effective. They gather information in strictly two dimensions, so a robot must view an environment from multiple perspectives before it can understand the materials and movements of the objects around it, which is not an ideal way to see for driverless cars or drones. The newly designed camera can obtain the same information from a single, clear 4D image.

Dansereau compares the difference between the old technology and the new to the difference between a peephole and a window. “A 2D photo is like a peephole because you can’t move your head around to gain more information about depth, translucency or light scattering. Looking through a window, you can move and, as a result, identify features like shape, transparency, and shininess,” he said.

The camera technology is based on research done 20 years ago by Pat Hanrahan and Marc Levoy, both professors at Stanford, into light field photography, a type of photography that captures additional information about light. Where a typical 2D camera takes an image focused on only one plane, light field photography allows a camera to capture a 4D image, one that also records spatial information such as the distance and direction of the light arriving at the lens. With this additional information, users can refocus the picture anywhere in the camera’s field of view (up to 140 degrees with the new camera) after it has been taken.
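The post-capture refocusing described above is commonly implemented by "shift-and-add": each angular view within the 4D light field is shifted in proportion to its angular offset and the views are averaged, so features at the chosen depth align and reinforce while everything else blurs. A minimal NumPy sketch of this general technique (an illustration only, not the authors' implementation; the `(u, v, x, y)` array layout and the `shift` parameterization are assumptions):

```python
import numpy as np

def refocus(lightfield, shift):
    """Synthetic refocus of a 4D light field by shift-and-add.

    lightfield: 4D array indexed (u, v, x, y), i.e. two angular axes
        followed by two spatial axes.
    shift: pixels of spatial shift per unit of angular offset; sweeping
        this value refocuses the image to different depths after capture.
    """
    U, V, X, Y = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0  # angular center
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its
            # angular distance from the center, then accumulate.
            dx = int(round((u - cu) * shift))
            dy = int(round((v - cv) * shift))
            out += np.roll(lightfield[u, v], (dx, dy), axis=(0, 1))
    return out / (U * V)
```

Averaging the aligned views is what produces the clean refocused image: content at the selected depth adds coherently, while content at other depths is spread across pixels.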

Dansereau and Wetzstein hope that robots equipped with their new camera will be able to navigate through rain and other vision obstacles. “We want to consider what would be the right camera for a robot that drives or delivers packages by air,” said Dansereau.

A 138-degree light field panorama and a depth estimate based on a standard local 4D gradient method, shown as 2D slices of larger 72-megapixel (15 × 15 × 1600 × 200) 4D light fields. Source: Stanford Computational Imaging Lab.
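The "local 4D gradient" depth estimation mentioned in the caption can be illustrated on a 2D slice of the light field, an epipolar-plane image (EPI): a scene point traces a line in the EPI whose slope encodes its depth, and that slope is recoverable from the ratio of the angular to the spatial image gradient. A minimal sketch of this general idea (illustrative only, not the authors' method; the `(u, x)` slice layout is an assumption):

```python
import numpy as np

def epi_slope(epi, eps=1e-6):
    """Estimate per-pixel line slope in an epipolar-plane image.

    epi: 2D array indexed (u, x), one angular and one spatial axis.
        For a slice of the form L(u, x) = f(x + s*u), a scene point
        lies along a line whose slope s relates to its depth.
    Returns the slope estimate dL/du / dL/dx and a validity mask
    marking pixels where the spatial gradient is strong enough.
    """
    dLdu, dLdx = np.gradient(epi)       # finite-difference gradients
    valid = np.abs(dLdx) > eps
    slope = np.zeros_like(epi)
    slope[valid] = dLdu[valid] / dLdx[valid]
    return slope, valid
```

Taking a robust statistic (e.g. the median) of the slope over well-textured pixels gives a stable local depth cue; real pipelines extend this to all four light field dimensions and handle occlusions.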