Computer scientists from the University of Nottingham and Kingston University have solved a complex problem that has defeated experts in vision and graphics research. They have developed technology capable of producing 3D facial reconstruction from a single 2D image to create the 3D selfie.
The new app produces a 3D model showing the shape of a user's face within seconds of a single color image being uploaded. So far, 400,000 users have created their own 3D selfies on the project's website.
The technique is not yet perfect, but it is the breakthrough computer scientists have been looking for.
It has been developed using a Convolutional Neural Network (CNN), a machine-learning technique from the field of artificial intelligence (AI) that enables computers to learn from data without being explicitly programmed.
The research team, led by Dr. Yorgos Tzimiropoulos, trained a CNN on a dataset of paired 2D pictures and 3D facial models. With this information, the CNN was able to reconstruct 3D facial geometry from a single 2D image, and it can even make a good guess at the non-visible parts of the face.
Dr. Tzimiropoulos said, "The main novelty is in the simplicity of our approach which bypasses the complex pipelines typically used by other techniques. We instead came up with the idea of training a big neural network on 80,000 faces to directly learn to output the 3D facial geometry from a single 2D image."
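To make the idea of "directly outputting 3D facial geometry" concrete: one way a network can encode a face in 3D is as a voxel occupancy volume aligned with the input image, from which the visible surface is recovered as the nearest occupied voxel along each pixel's viewing ray. The following toy sketch illustrates only that last decoding step; it is not the authors' code, and the function name and data are hypothetical.

```python
def surface_depth(volume):
    """Decode a voxel occupancy volume into a depth map.

    volume[y][x][z] is 1 if the voxel is occupied, else 0.
    Returns, for each pixel, the smallest occupied z (the depth of
    the facial surface), or None where no surface is present.
    """
    depth = []
    for row in volume:
        depth_row = []
        for column in row:
            # First occupied voxel along this pixel's viewing ray.
            z = next((i for i, v in enumerate(column) if v), None)
            depth_row.append(z)
        depth.append(depth_row)
    return depth

# A 2x2 image with a 4-deep volume: a tiny "face" whose surface
# sits at a different depth for each pixel.
vol = [
    [[0, 1, 1, 1], [0, 0, 1, 1]],
    [[1, 1, 1, 1], [0, 0, 0, 0]],  # last pixel: background, no surface
]
print(surface_depth(vol))  # [[1, 2], [0, None]]
```

In a full pipeline the volume itself would be the CNN's output, learned end-to-end from image/geometry pairs; this sketch only shows how such an output already contains the 3D shape, including a guess for occluded regions.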
Producing 3D reconstructions from 2D input is a notoriously difficult problem. Existing systems require multiple facial images and must contend with challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination.
The technique demonstrates some of the advances possible through deep learning—a form of machine learning that uses artificial neural networks to mimic the way the brain makes connections between pieces of information.
Dr. Vasileios Argyriou, from Kingston University's Faculty of Science, Engineering and Computing, said, "What's really impressive about this technique is how it has made the process of creating a 3D facial model so simple."
Besides the more standard applications, like face and emotion recognition, this technology could be used to personalize computer games, improve augmented reality (AR), and allow people to try on accessories, like glasses, online.
It could have some medical applications, like simulating the results of plastic surgery or helping people understand medical conditions like autism or depression.
The results of this study will be presented at the International Conference on Computer Vision (ICCV) 2017 in Venice in October 2017.