A new color filter developed for electronic imagers is designed to triple the amount of light captured, significantly enhancing the low-light performance of smart phones, tablets, and other devices.
Electronic imagers measure intensity. Within the wavelength band of a given detector, the goal is to achieve the flattest possible spectral response. That's fine for monochrome images, but it isn't much use for capturing a scene in color. The most common way to produce a color picture from an electronic imager is to overlay the detector with a Bayer filter, an ordered array of RGB elements that maps onto the detector pixels. Incident light shines through the filter onto the image sensor, which records the scene in separate colors on adjacent pixels: the red spectral content, the green, the blue, and the green again (to more closely mimic the human visual response). Interpolating the signals from adjacent pixels yields the final color image. It's a good solution, but it can introduce artifacts at high-contrast edges. The bigger problem is that filtering is ultimately a subtractive process: the red filter, for example, absorbs all other wavelengths. That's fine in a photon-rich environment, but in low-light situations every bit of input is precious.
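The interpolation step described above can be sketched in a few lines. Here is a minimal bilinear demosaicing of an RGGB Bayer mosaic in Python with NumPy; the function names and kernel are illustrative, not any camera vendor's actual pipeline:

```python
import numpy as np

def _conv3(a, kernel):
    """3x3 convolution with edge replication, computed as shifted sums."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * p[i:i + h, j:j + w]
    return out

def demosaic_bilinear(mosaic):
    """Interpolate an RGGB Bayer mosaic into a full RGB image."""
    h, w = mosaic.shape
    # Boolean masks marking where each color was actually sampled
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red sites
    masks[0::2, 1::2, 1] = True   # green sites (two per 2x2 tile)
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True   # blue sites
    # Bilinear weights for neighbors one pixel away
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        sparse = np.where(masks[..., c], mosaic, 0.0)
        weight = _conv3(masks[..., c].astype(float), kernel)
        # Weighted average of the sampled neighbors of each pixel
        rgb[..., c] = _conv3(sparse, kernel) / weight
    return rgb
```

The artifacts mentioned above arise at this stage: near a sharp edge, the averaged neighbors straddle the edge, so the interpolated colors can bleed or fringe.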
“If you think about it, this is a very inefficient way to get color because you’re absorbing two thirds of the light coming in,” Menon says. “But this is how it’s been done since the 1970s. So for the last 40 years, not much has changed in this technology.”
Menon has developed a combined hardware/software solution to address the problem. It starts with replacing the conventional absorptive color-filter array with a transparent diffractive-filter array (DFA): a multilevel structure lithographically etched into a fused-silica substrate. Incident light passes through the DFA, whose microstructured surface diffracts it into a wavelength-dependent intensity distribution on the sensor (see figure 1). The detector array captures that data; subsequent computational steps then reconstruct the color image.
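One common way to frame that computational step, shown here as an illustrative sketch rather than Lumos Imaging's actual algorithm, is as a linear inverse problem: a calibration matrix records how light in each spectral channel spreads across a patch of sensor pixels, and the spectrum at a scene point is recovered from the measurement by regularized least squares. The matrix and spectrum below are random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_chan = 64, 25    # sensor pixels per patch, spectral channels

# Calibrated response: how unit light in each channel lands on the pixels
A = rng.random((n_meas, n_chan))
s_true = rng.random(n_chan)   # unknown spectrum at one scene point
m = A @ s_true                # what the detector actually records

# Tikhonov-regularized least-squares recovery of the spectrum
lam = 1e-6
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_chan), A.T @ m)
```

Because the DFA redirects light rather than absorbing it, the matrix inversion trades a little computation for the photons a Bayer filter would have thrown away.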
The approach allows the device to image a scene over 25 or more spectral channels simultaneously and boosts sensitivity to as much as 3.12 times that of a conventional color digital camera.
The new filter can be used for any kind of digital camera, but Menon is developing it specifically for the smart phone market. He has launched a company, Lumos Imaging, to commercialize the technology and is reportedly in negotiations with multiple manufacturers. Look for the technology to appear in commercial products in about three years.
That’s only the start, however. The technology can be used with digital imagers in a range of applications such as hyperspectral imaging for industrial and agricultural uses, astronomy, and situational awareness for self-driving cars.
“In the future, you need to think about designing cameras not just for human beings but for software, algorithms and computers,” Menon says. “Then the technology we are developing will make a huge impact.”