Researchers at the MIT Media Lab have developed a $500 "nano-camera" based on "Time of Flight" technology like that used in Microsoft's recently launched second-generation Kinect device. Unlike existing devices based on this technology, the new camera operates in rain, fog, or even when aimed at translucent objects.
In a conventional Time of Flight camera, a light signal is sent into a scene, where it bounces off an object and returns to strike a pixel on the sensor. Since the speed of light is known, the camera can simply calculate the distance the signal has traveled, and therefore the depth of the object that reflected it.
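The arithmetic behind this is direct: the light covers the camera-to-object distance twice, so depth is half the round-trip path length. A minimal sketch (function name and values are illustrative, not from any actual camera firmware):

```python
# Pulsed time-of-flight: depth from round-trip travel time.
C = 299_792_458.0  # speed of light in m/s

def depth_from_round_trip(t_seconds: float) -> float:
    """Depth in metres for a light pulse that returned after
    t_seconds. The pulse travels to the object and back, so the
    one-way depth is half the total path length."""
    return C * t_seconds / 2.0

# A return delayed by 10 nanoseconds corresponds to about 1.5 m.
print(depth_from_round_trip(10e-9))
```

The nanosecond scale of these delays is why timing precision dominates the design of such cameras.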
However, multiple reflections that mix with the original signal and return to the camera make it difficult to determine which is the correct measurement.
"Using the current state of the art, such as the new Kinect, you cannot capture translucent objects in 3-D," said Achuta Kadambi, a graduate student at MIT. The new technique disentangles the multiple overlapping reflections, allowing the camera to generate 3-D models of translucent or near-transparent objects.
The new device borrows an encoding technique common in the telecommunications industry to calculate the distance a signal has traveled. The approach is similar to existing deblurring algorithms, which un-smear a blurry photograph to produce a sharper picture, according to the researchers. In the new camera, the technique un-smears the individual optical paths.
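The un-smearing can be pictured as a deconvolution: the sensor records the scene's time profile blurred by the emitted code, and inverting that convolution separates the individual optical paths again. A toy NumPy sketch under that framing (the code sequence and scene profile below are invented purely for illustration):

```python
import numpy as np

# An emitted binary code and a scene time profile with two returns:
# a near translucent surface and a surface behind it.
code = np.array([1.0, 1, 0, 1, 1, 0, 0, 1])
scene = np.zeros(8)
scene[2] = 1.0   # first bounce (near surface)
scene[5] = 0.6   # second bounce (farther surface)

# The sensor sees the circular convolution of code and scene:
# both returns are smeared together into one measurement.
n = len(code)
measured = np.array([sum(code[(i - j) % n] * scene[j] for j in range(n))
                     for i in range(n)])

# Build the circulant matrix of the code and invert the smearing,
# recovering the separate optical paths.
A = np.array([[code[(i - j) % n] for j in range(n)] for i in range(n)])
recovered = np.linalg.solve(A, measured)
print(np.round(recovered, 6))  # the two returns reappear at indices 2 and 5
```

Separating the returns this way is what lets a coded camera tell a reflection off a translucent front surface apart from the bounce off whatever lies behind it.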
Ramesh Raskar, an associate professor of media arts and sciences and leader of the Camera Culture group within the MIT Media Lab, developed the method with Kadambi and other researchers at MIT, as well as researchers at the University of Waikato in New Zealand.
In 2011, Raskar's group unveiled a trillion-frame-per-second camera capable of capturing a single pulse of light as it traveled through a scene. That camera probes the scene with a femtosecond impulse of light, then uses fast but expensive laboratory-grade optical equipment (around $500,000 to build) to capture each image. In contrast, the new $500 "nano-camera" uses inexpensive LEDs to probe the scene with a continuous-wave signal oscillating at nanosecond periods, reaching a time resolution within one order of magnitude of femto-photography.
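In a continuous-wave scheme like this, depth is typically recovered from the phase lag of the returning modulation rather than from a single pulse's arrival time. A sketch of the standard continuous-wave time-of-flight relation (the 50 MHz modulation frequency is chosen only to illustrate the nanosecond-period regime, not taken from the paper):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cw_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Depth implied by the phase lag of a returning continuous wave.
    One full cycle of phase corresponds to one modulation wavelength
    of round-trip travel, so depth = c * phi / (4 * pi * f)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# At 50 MHz modulation (a 20 ns period), a quarter-cycle phase lag
# corresponds to roughly three quarters of a metre of depth.
print(cw_depth(math.pi / 2, 50e6))
```

Note the trade-off this relation implies: a higher modulation frequency gives finer depth resolution per degree of phase, but the phase wraps sooner, shrinking the unambiguous range.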
The three-dimensional camera, which was presented last week at Siggraph Asia in Hong Kong, could be used in medical imaging and collision-avoidance detectors for cars, and to improve the accuracy of motion tracking and gesture-recognition devices used in interactive gaming.