A new system developed by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Google employs machine learning to apply automatic, professional-quality retouching to images – and is so energy-efficient that it can run on a cellphone.
It’s also fast enough to display retouched images in real-time, so a photographer can view the final version while still framing the shot. It can even speed up existing image-processing methods; in tests, the system produced results nearly identical to those of a new Google high dynamic range (HDR) algorithm in about one-tenth of the time.
An earlier project from the MIT researchers was designed to send a low-resolution version of an image from a cellphone to a web server, which would respond with a “transform recipe” that could be used to retouch the high-resolution image on the phone. Google heard about that work and joined forces with Michaël Gharbi, an MIT graduate student in electrical engineering and computer science, to try instead a machine-learning approach that could speed up the process.
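The “transform recipe” idea can be illustrated with a minimal sketch: fit a compact color transform on a low-resolution input/output pair, then apply that transform to the full-resolution image. Here the recipe is simplified to a single global affine color matrix fit by least squares; the actual recipes were richer, spatially varying transforms, and the function names are hypothetical.

```python
import numpy as np

def fit_recipe(low_res_in, low_res_out):
    """Fit a compact 'transform recipe': here, one 3x4 affine color
    matrix mapping input RGB to retouched RGB via least squares.
    (Illustrative only; the real recipes were locally varying.)"""
    X = low_res_in.reshape(-1, 3)
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias term
    Y = low_res_out.reshape(-1, 3)
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)      # 4x3 recipe matrix
    return A

def apply_recipe(high_res_in, A):
    """Apply the compact recipe to every full-resolution pixel."""
    X = high_res_in.reshape(-1, 3)
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return (X @ A).reshape(high_res_in.shape)
```

The key property is that only the small matrix `A` travels back to the phone, not a full-resolution image.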
To that end, the researchers trained their system on 5,000 images, each retouched by five different photographers. They also trained on thousands of pairs of images produced by image-processing algorithms like the HDR one mentioned above.
One challenge the researchers addressed was the limitation of low-resolution images, which handle most of the processing in order to save time and energy. Because past attempts at “upsampling” (guessing the values of the omitted pixels to restore full resolution) have not worked well in practice, the team instead designed their system to output a set of formulas for modifying pixel colors rather than a finished image. Those formulas are then applied mathematically to the individual pixels of the high-resolution image.
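The step above can be sketched as follows: the network predicts a coarse grid of per-pixel color formulas (small affine transforms) at low resolution, the grid is upsampled, and each full-resolution pixel is transformed by its local formula. This is a simplified assumption-laden illustration, not the published architecture, which predicts coefficients in a learned bilateral grid; nearest-neighbor lookup stands in for that grid's slicing step.

```python
import numpy as np

def apply_local_formulas(high_res, coeffs):
    """Apply a low-resolution grid of color formulas to a
    high-resolution image. `coeffs` has shape (gh, gw, 3, 4):
    one 3x4 affine RGB transform per grid cell. Each full-res
    pixel looks up its nearest grid cell (a simplification of
    the system's bilateral-grid slicing) and is transformed
    by that cell's formula."""
    H, W, _ = high_res.shape
    gh, gw = coeffs.shape[:2]
    # Map each full-resolution pixel to its nearest grid cell.
    ys = np.minimum(np.arange(H) * gh // H, gh - 1)
    xs = np.minimum(np.arange(W) * gw // W, gw - 1)
    local = coeffs[ys[:, None], xs[None, :]]            # (H, W, 3, 4)
    # Homogeneous RGB: append a 1 so the formula includes a bias.
    rgb1 = np.concatenate([high_res, np.ones((H, W, 1))], axis=-1)
    # out[y, x, c] = sum_k local[y, x, c, k] * rgb1[y, x, k]
    return np.einsum('hwck,hwk->hwc', local, rgb1)
```

Because only the small coefficient grid is produced at low resolution, the expensive inference never touches the full-size image; the per-pixel application is a cheap multiply-add.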
When compared to a machine-learning system that required around 12 gigabytes of memory to process images at full resolution, the new system required only around 100 megabytes. Each modification, moreover, takes up only about as much memory space as a single digital photo, so a cellphone could theoretically be equipped to process images in a range of styles.
The system is being presented in early August 2017 at the SIGGRAPH digital graphics conference.