Researchers from Binghamton University, State University of New York, have developed a new technology that allows users to interact in virtual reality using only mouth gestures.
The rise of affordable virtual reality head-mounted displays gives users realistic, immersive visual experiences. But a head-mounted display occludes the upper half of the user's face, which prevents facial action recognition from the whole face. To address this, Binghamton University Professor of Computer Science Lijun Yin and his team created a new framework that interprets mouth gestures as a medium for interaction within virtual reality in real time.
Yin's team tested the application on a group of graduate students. When a user put on the head-mounted display, a simple game appeared: the objective was to guide an avatar around a forest and eat as many cakes as possible. Players selected their movement direction with head rotation, moved using mouth gestures, and could only eat a cake by smiling. The system described and classified these mouth movements with high recognition rates, and it was demonstrated and validated through this real-time virtual reality application.
"We hope to make this applicable to more than one person, maybe two. Think Skype interviews and communication," said Yin. "Imagine if it felt like you were in the same geometric space, face to face, and the computer program can efficiently depict your facial expressions and replicate them so it looks real."
The technology is still in the prototype phase, but Yin believes it is applicable to many fields.
"The virtual world isn't only for entertainment. For instance, healthcare uses VR to help disabled patients," said Yin. "Medical professionals or even military personnel can go through training exercises that may not be possible to experience in real life. This technology allows the experience to be more realistic."
A paper on this research was presented at the IEEE International Conference on Multimedia and Expo.