A self-driving car. Source: Flickr/Christine und Hagen Graf
The Camera Culture group at MIT’s Media Lab has spent the past 10 years developing imaging systems, from a camera that can see around corners to one that can read text in closed books. These systems use a “time-of-flight” approach, which gauges distance by measuring the time it takes light projected into a scene to bounce back to a sensor.
Members of the Camera Culture group have presented a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold. This type of imaging could make self-driving cars more practical.
The new approach could give self-driving cars more accurate distance measurements through fog, which has been a major obstacle to the development of reliable autonomous vehicles.
Existing time-of-flight systems have a range of about two meters and a depth resolution of around a centimeter. That is good enough for the assisted-parking and collision-detection systems in today’s cars.
Achuta Kadambi, a joint Ph.D. student in electrical engineering and computer science and media arts and sciences and an author of the paper, explained, “As you increase the range, your resolution goes down exponentially. Let's say you have a long-range scenario, and you want your car to detect an object further away so it can make a fast update decision. You may have started at 1 centimeter, but now you're back down to [a resolution of] a foot or even 5 feet. And if you make a mistake, it could lead to loss of life."
At a distance of two meters, the MIT researchers’ system, by contrast, has a depth resolution of two micrometers. To simulate the power falloff incurred over longer distances, Kadambi sent a light signal through 500 meters of optical fiber with regularly spaced filters along its length before feeding it to the system. The tests suggest that at a range of 500 meters, the MIT system should achieve a depth resolution of a centimeter.
In time-of-flight imaging, a short burst of light is fired into a scene and a camera measures the time it takes to return, which indicates the distance of the object that reflected it. The longer the light burst, the vaguer the measurement of how far it has traveled. Light-burst length is one of the factors that determine a system’s resolution.
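To make the basic geometry concrete, here is a minimal sketch (illustrative only, not the MIT system) of how a round-trip time maps to a distance, with the factor of two accounting for the light traveling out and back:

```python
C = 3.0e8  # approximate speed of light, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Estimate range from a time-of-flight measurement: the light covers the path twice."""
    return C * round_trip_time_s / 2.0

# Hypothetical example: a return detected about 13.3 nanoseconds after the burst
# puts the reflecting object roughly 2 meters away.
print(tof_distance(13.3e-9))  # ~2.0 m
```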
The other factor is detection rate. Modulators, which turn the light beam on and off, can switch a billion times a second. But today’s detectors can make only 100 million measurements a second. Detection rate is what limits existing time-of-flight systems to centimeter-scale resolution.
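The same arithmetic shows why detection speed matters: any uncertainty in when the return is measured translates directly into depth uncertainty. A rough back-of-the-envelope sketch, with illustrative numbers rather than figures from the paper:

```python
C = 3.0e8  # approximate speed of light, m/s

def depth_uncertainty(timing_uncertainty_s: float) -> float:
    """Depth error implied by an error in the measured round-trip time."""
    return C * timing_uncertainty_s / 2.0

# Resolving depth to a centimeter means pinning the round trip down to roughly 67 picoseconds;
# micrometer-scale depth resolution would demand timing information on the order of 10 femtoseconds.
print(depth_uncertainty(67e-12))  # ~0.01 m
```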
But, according to Kadambi, another imaging technique, called interferometry, can enable higher resolution. In interferometry, a light beam is split in two: half of it is kept circulating locally, while the other half, the “sample beam,” is fired into a visual scene.
The reflected sample beam is recombined with the locally circulated light, and the phase difference between the two beams yields a precise measure of the distance the sample beam has traveled. But interferometry requires careful synchronization of the two light beams.
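A minimal sketch, with an assumed near-infrared wavelength, of why interferometry is both so precise and so fragile: the combined intensity of the two beams cycles through bright and dark every time their path difference changes by a single wavelength, which is less than a micrometer of motion.

```python
import numpy as np

wavelength = 1.55e-6  # assumed near-infrared wavelength, meters

def interference_intensity(path_difference_m: float) -> float:
    """Combined intensity of two equal beams as a function of their path-length difference."""
    return 2.0 + 2.0 * np.cos(2 * np.pi * path_difference_m / wavelength)

# A path change of half a wavelength (~0.8 micrometers) swings the output from fully bright
# to fully dark, which is why tiny vibrations scramble the reading.
print(interference_intensity(0.0))               # 4.0 (constructive)
print(interference_intensity(wavelength / 2.0))  # ~0.0 (destructive)
```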
"You could never put interferometry on a car because it's so sensitive to vibrations," Kadambi said. "We're using some ideas from interferometry and some of the ideas from LIDAR, and we're really combining the two here."
The team is also using ideas from acoustics, such as “beating”: when two singers are slightly out of tune, the interplay of their voices produces another tone whose frequency is the difference between the two notes.
The same is true of light pulses. If a time-of-flight imaging system fires light into a scene at a billion pulses a second, and the returning light is combined with light pulsing at almost exactly a billion times a second, the result is a light signal pulsing once a second, a rate easily detectable with a commodity video camera. That slow “beat” contains all the phase information needed to gauge distance.
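A minimal sketch of the beating effect, using scaled-down stand-in frequencies (kilohertz rather than the gigahertz rates described above) so the waveform stays easy to sample:

```python
import numpy as np

f1 = 1000.0   # Hz, stand-in for the ~1 GHz pulse rate
f2 = 999.0    # Hz, detuned by 1 Hz
fs = 20_000   # sample rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)

combined = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
# cos(a) + cos(b) = 2 cos((a+b)/2) cos((a-b)/2): the sum still oscillates near 1 kHz,
# but its envelope swells and fades at the 1 Hz difference frequency -- the "beat".
envelope = np.abs(combined)  # rough proxy for that slow envelope
```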
But instead of trying to synchronize two high-frequency light signals, Kadambi modulates the returning signal using the same technology that produced it in the first place. That is, he pulses the already pulsed light. The result is the same, but this approach is much more practical for automotive systems.
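The practical payoff is that the phase of the slow beat still encodes the round-trip delay at the original gigahertz modulation rate. Below is a hedged sketch of that idea with hypothetical numbers: the mixing-and-filtering step is summarized by the trigonometric identity rather than simulated, and real systems must deal with the phase-wrapping ambiguity noted in the comment (for example, by using more than one modulation frequency).

```python
import numpy as np

C = 3.0e8          # approximate speed of light, m/s
f_mod = 1.0e9      # assumed outgoing modulation rate, ~1 GHz
distance = 0.05    # hypothetical target distance, meters (inside the unambiguous range)
tau = 2 * distance / C  # round-trip delay

# Mixing the delayed return cos(2*pi*f_mod*(t - tau)) with a copy detuned by a few hertz,
# then low-pass filtering, leaves a slow beat cos(2*pi*delta*t - 2*pi*f_mod*tau):
# its phase carries the gigahertz-scale timing, yet a slow detector can read it.
beat_phase = (2 * np.pi * f_mod * tau) % (2 * np.pi)

recovered_tau = beat_phase / (2 * np.pi * f_mod)
print(C * recovered_tau / 2)  # ~0.05 m; distances beyond c/(2*f_mod) wrap around
```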
Gigahertz optical systems are naturally better at compensating for fog than lower-frequency systems. Fog is problematic for time-of-flight systems because it scatters light: it deflects the returning light signals so that they arrive late and at odd angles. Trying to isolate a true signal in all that noise is too computationally challenging to do on the fly.
With low-frequency systems, scattered light causes a slight shift in phase that muddies the signal reaching the detector. But with high-frequency systems, the phase shift is larger relative to the frequency of the signal, so scattered light signals arriving over different paths will tend to cancel each other out: the troughs of one wave will align with the crests of another. Theoretical analyses suggest that this cancellation will be widespread enough to make identifying a true signal easier.
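A hedged sketch of that cancellation argument, with assumed numbers: give each scattered return a random extra path length of up to a meter and sum the returns as phasors. At 100 MHz the extra phases stay bunched together and bias the measurement; at 1 GHz they wrap around the circle several times and largely cancel.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 3.0e8
extra_path = rng.uniform(0.0, 1.0, size=10_000)  # hypothetical scattered detours, 0-1 m
extra_delay = extra_path / C

def net_scatter(f_mod: float) -> float:
    """Magnitude of the summed scattered returns, treated as phasors at the modulation frequency."""
    phases = 2 * np.pi * f_mod * extra_delay
    return float(np.abs(np.exp(1j * phases).mean()))

print(net_scatter(100e6))  # large residual: phases cluster, so the scatter biases the signal
print(net_scatter(1e9))    # near zero: phases wrap several full cycles and mostly cancel
```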