Researchers at Cornell University have created an earphone that can track facial expressions and turn them into emojis. The device observes the contours of the user's cheeks and translates them into emojis or silent speech commands.
Captured video of a user's facial expression (left), with a 3D model predicted by C-Face. Source: Cornell University
The ear-mounted device, called C-Face, analyzes expressions without requiring a camera in front of the user's face. C-Face consists of two mini RGB cameras positioned below the ears on a headphone or earphone, which record changes in facial contours caused by facial movement. The device can capture expressions even when the user is wearing a mask. To convey an expression, C-Face can animate avatars in virtual reality (VR) environments that display the feeling on the user's face.
Once the images are captured, the facial expression is reconstructed with computer vision and a deep learning model. An artificial intelligence (AI) model works from the raw 2D data, translating the images of the cheeks into 42 facial feature points. These points represent the shapes and positions of the mouth, eyes and eyebrows, the areas most affected by changes in expression.
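The article does not describe the model's architecture, so the sketch below only illustrates the shape of the pipeline: a camera frame of the cheek goes in, 42 (x, y) landmark coordinates come out. The single linear layer and random weights are stand-ins for the trained deep network; `predict_landmarks` is a hypothetical name.

```python
import numpy as np

NUM_LANDMARKS = 42  # mouth, eye and eyebrow points, per the article


def predict_landmarks(cheek_frame: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Map one cheek-camera frame to 42 (x, y) facial landmarks.

    A single linear layer stands in for the trained deep model;
    only the input/output shapes reflect the system described.
    """
    features = cheek_frame.astype(np.float32).ravel() / 255.0  # flatten + normalize
    coords = features @ weights                                # (84,) raw outputs
    return coords.reshape(NUM_LANDMARKS, 2)                    # 42 points, (x, y) each


# Toy usage: random weights in place of trained parameters.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)    # one camera frame
weights = rng.normal(size=(32 * 32, NUM_LANDMARKS * 2)).astype(np.float32)
landmarks = predict_landmarks(frame, weights)
print(landmarks.shape)  # (42, 2)
```

The point of the sketch is that the cameras never see the mouth or eyes directly; the model infers those 42 points from the cheek contour alone.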
These 42 feature points can be translated into eight emojis, including neutral, angry and kissy face. They can also be translated into eight silent speech commands that control a music device, such as play, pause or skip. Directing devices with facial expressions could be useful in quiet environments or shared workspaces where someone might not want to speak out loud, and the emoji translation could help VR collaborators communicate.
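The article does not say how the 42 points are mapped to the eight classes; a minimal way to picture that step is a nearest-centroid match of the flattened landmark vector against a mean vector per expression. This is an illustrative assumption, not the team's classifier, and only "neutral", "angry" and "kissy" in the label list come from the article.

```python
import numpy as np

# Hypothetical label set: the first three names appear in the article,
# the remaining five are placeholders for the other emoji classes.
LABELS = ["neutral", "angry", "kissy", "happy", "sad", "surprised", "wink", "laugh"]


def classify_expression(landmarks: np.ndarray, centroids: np.ndarray) -> str:
    """Match a 42x2 landmark array to the nearest per-class mean vector."""
    v = landmarks.reshape(-1)                      # flatten to 84 numbers
    dists = np.linalg.norm(centroids - v, axis=1)  # distance to each class centroid
    return LABELS[int(np.argmin(dists))]


# Toy centroids: class i sits at the constant vector (i, i, ..., i).
centroids = np.stack([np.full(84, float(i)) for i in range(len(LABELS))])
sample = np.full((42, 2), 1.1)                     # closest to centroid 1
print(classify_expression(sample, centroids))      # angry
```

The same lookup could return a playback command ("play", "pause", "skip") instead of an emoji when the device is in its silent-speech mode.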
Instructors could gather valuable information about student engagement during online lessons, and the system could direct a computer, like a music device, with facial cues alone. It is simpler and less obtrusive than the current ear-mounted wearable technology for tracking facial expressions.
Currently, the device is limited by its small battery capacity, but the team is working on a version with a larger battery.
This technology will be presented at the Association for Computing Machinery Symposium on User Interface Software and Technology taking place Oct. 20-23, 2020.