Electronics360

Unlocking 3D Vision from Ordinary Digital Camera Technology

18 September 2015

Modern digital cameras are equipped with an array of functions, from autofocus and image stabilization to panoramas and high-definition video. A team of engineers from Duke University has unlocked a previously unrecognized 3D imaging capability of modern cameras by repurposing their existing components.

The capability was successfully demonstrated in a proof-of-concept laboratory experiment using a small deformable mirror—a reflective surface that can direct and focus light. The research demonstrates how equivalent technology in modern digital cameras, the image stabilization and focus modules, can be harnessed to achieve the same results without additional hardware.

The purpose of the experiment was to extract depth information from a single-shot image without sacrificing image quality, in contrast to traditional 3D imaging techniques that require multiple images. When integrated into commercial cameras, the technique could improve core functions such as image stabilization and increase the speed of autofocus, in turn enhancing the quality of photographs.

The research team, led by David Brady, a professor at Duke, developed an adaptive system that can accurately extract 3D data while maintaining the ability to capture a full-resolution 2D image, all without a dramatic system change such as switching out a lens. Brady and his team presented their findings in Optica, the high-impact, open-access journal from The Optical Society.

Remarkable 3-D vision from ordinary digital camera technology. Source: Wikipedia.org

Modern digital cameras, especially those with video capabilities, are frequently equipped with modules that remove jitter from recordings. They do this by measuring the inertia or motion of the camera and compensating by rapidly moving the lens within the module, making multiple adjustments per second. This same hardware can also change the image capture process, recording additional information about the scene. With the proper software and processing, this additional information can unlock the otherwise hidden third dimension.

The first step is to enable the camera to record 3D information. This is achieved by programming the camera to perform three functions simultaneously: sweeping the sensor through the focus range, collecting light over a set period of time in a process called integration, and activating the stabilization module.

As the optical stabilization is engaged, it wobbles the lens to move the image relative to a fixed point. This, in conjunction with a focal sweep of the sensor, integrates that information into a single measurement in a way that preserves image details while granting each focus position a different optical response. The images that would have otherwise been acquired at various focal settings are directly encoded into this measurement based on where they reside in the depth of field.
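
The encoding described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the box blur standing in for defocus and the chosen blur widths are assumptions made purely for illustration. The point is that a single measurement integrates the scene as seen at several focus settings, each contributing a different optical response:

```python
import numpy as np

def box_blur(img, k):
    """Box blur of odd width k: a crude stand-in for defocus blur."""
    if k <= 1:
        return img.copy()
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
scene = rng.random((32, 32))  # a toy 2D scene

# Sweep through three focus positions; each blur width models how far
# the scene is from the focus plane at that instant of the sweep.
blur_widths = [1, 3, 5]
measurement = np.zeros_like(scene)
for k in blur_widths:
    measurement += box_blur(scene, k)  # the sensor integrates during the sweep
measurement /= len(blur_widths)

print(measurement.shape)  # one single-shot measurement, same size as the scene
```

In the real system the stabilization wobble makes each focus setting's response distinct, so the per-depth images can later be disentangled from this one exposure.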

The researchers use a comparatively long exposure time to compensate for the laboratory setup: to emulate the workings of a camera, a beam splitter was needed to control the deformable mirror, and this extra step sacrifices about 75% of the light received. The researchers then process a single exposure taken with this camera to obtain a data-rich product known as a data cube, essentially a computer file that includes both the all-in-focus 2D image and an extra element known as a depth map. The depth map, in effect, describes the focus position of each pixel in the image. Since this information is already encoded into the single measurement, it is possible to construct a depth map for the entire scene.
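
To make the data-cube idea concrete, here is a simplified stand-in, not the paper's decoding algorithm: given an explicit stack of images focused at different depths (the information that the paper encodes into one measurement), we can recover the two products the article describes, an all-in-focus image and a per-pixel depth map, by picking the focus setting where each pixel is sharpest:

```python
import numpy as np

def sharpness(img):
    """Local contrast as a crude sharpness measure (gradient magnitude)."""
    gy, gx = np.gradient(img)
    return np.abs(gx) + np.abs(gy)

def build_data_cube(focal_stack):
    scores = np.stack([sharpness(img) for img in focal_stack])  # (D, H, W)
    depth_map = scores.argmax(axis=0)                           # sharpest focus index per pixel
    stack = np.stack(focal_stack)
    h, w = depth_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    all_in_focus = stack[depth_map, ys, xs]                     # take each pixel at its best focus
    return all_in_focus, depth_map

# Toy stack: two focus settings, each rendering a different half sharply.
sharp = np.tile([0.0, 1.0], (4, 4))                 # high-contrast pattern, shape (4, 8)
flat = np.full((4, 8), 0.5)                         # defocused: no contrast
stack_a = np.where(np.arange(8) < 4, sharp, flat)   # left half in focus
stack_b = np.where(np.arange(8) < 4, flat, sharp)   # right half in focus
img, depth = build_data_cube([stack_a, stack_b])
print(depth[0, 0], depth[0, 7])  # left pixels map to focus 0, right pixels to focus 1
```

The resulting pair (image plus depth map) is exactly the kind of content the data cube holds, one focus position per pixel alongside the full-resolution image.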

The final step is to process the image and depth map with a commercial 3D graphics engine, similar to those that render 3D scenes in video games and computer-generated imagery used in Hollywood movies.
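
As a sketch of this last step (an assumed pipeline, not the authors' code), the image and depth map can be turned into an (x, y, z, intensity) point cloud, the kind of geometry a commercial 3D graphics engine can render:

```python
import numpy as np

def to_point_cloud(image, depth_map):
    """Flatten an image plus depth map into one (x, y, z, intensity) row per pixel."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.column_stack([
        xs.ravel(),          # x pixel coordinate
        ys.ravel(),          # y pixel coordinate
        depth_map.ravel(),   # z taken from the depth map
        image.ravel(),       # intensity for shading
    ])

image = np.array([[0.2, 0.8], [0.5, 0.1]])
depth = np.array([[1.0, 2.0], [1.5, 3.0]])
cloud = to_point_cloud(image, depth)
print(cloud.shape)  # (4, 4): one row per pixel
```

A renderer then treats each row as a shaded point in space, which is how the flat photograph gains its third dimension on screen.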

So far, the feat has only been performed in laboratory settings with surrogate technologies, but the researchers believe the techniques they employed could be applied to basic consumer products. The result would be a more efficient autofocusing process, as well as a third dimension added to traditional photography.

Related Links:

Paper: Patrick Llull et al., "Image translation for single-shot focal tomography," Optica 2(9), 822 (2015)
