Industrial Electronics

Developing artificial intelligence for gesture recognition

14 August 2020
Combining artificial intelligence with skin-like wearable sensors may lead to future robotics or gaming systems. Source: NTU Singapore

Scientists from Nanyang Technological University, Singapore (NTU Singapore) have created an artificial intelligence system that recognizes hand gestures using skin-like electronics and computer vision.

The development could lead to the technology being adopted in surgical robots, health monitoring equipment and gaming systems. The wearable sensors give the AI access to the skin’s sensing ability, a capability that earlier AI gesture recognition systems lacked, NTU Singapore said.

But even with wearable sensors, AI gesture recognition has been hindered by low-quality sensor data caused by poor contact with the skin, as well as by visual obstructions and poor lighting. Additional challenges come from mismatched datasets when visual and sensory data are integrated, which leads to slower response times.
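One way to picture the dataset-matching problem: camera frames and sensor readings arrive at different rates, so they must be paired in time before they can be fused. The Python sketch below is a generic alignment step for illustration only, not part of the NTU system; the (timestamp, data) stream format and the max_skew tolerance are assumptions.

    from bisect import bisect_left

    def align_streams(frames, readings, max_skew=0.02):
        """Pair each camera frame with the nearest-in-time sensor
        reading, dropping pairs whose timestamps differ by more than
        max_skew seconds. Both inputs are lists of (timestamp, data)
        tuples sorted by timestamp (a hypothetical format)."""
        times = [t for t, _ in readings]
        pairs = []
        for t_frame, image in frames:
            i = bisect_left(times, t_frame)
            # Check the readings on either side of the insertion point.
            candidates = [j for j in (i - 1, i) if 0 <= j < len(readings)]
            if not candidates:
                continue
            j = min(candidates, key=lambda k: abs(times[k] - t_frame))
            if abs(times[j] - t_frame) <= max_skew:
                pairs.append((image, readings[j][1]))
        return pairs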

NTU Singapore created a data fusion system to combat these challenges that uses skin-like stretchable sensors made from single-walled carbon nanotubes and an AI approach that resembles the way skin senses and vision are handled.

The AI system combines three neural network approaches: a convolutional neural network for early visual processing, a multilayer neural network for early somatosensory information processing, and a sparse neural network that fuses the visual and somatosensory information. The result is a system that can recognize human gestures more accurately than existing systems.
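As a rough illustration of how such a three-branch architecture might be wired together, here is a minimal PyTorch sketch. It is not the authors’ implementation: the layer sizes, sensor channel count and gesture count are invented, and plain dropout stands in crudely for the sparse fusion network described in the paper.

    import torch
    import torch.nn as nn

    class BioinspiredFusionNet(nn.Module):
        """Minimal sketch of somatosensory-visual fusion: a CNN branch
        for camera frames, an MLP branch for strain-sensor signals,
        and a shared fusion head. All sizes are hypothetical."""

        def __init__(self, n_gestures=10, n_sensor_channels=5):
            super().__init__()
            # Early visual processing: a small convolutional network.
            self.visual = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            )
            # Early somatosensory processing: a multilayer perceptron
            # over the stretchable strain-sensor channels.
            self.somato = nn.Sequential(
                nn.Linear(n_sensor_channels, 32), nn.ReLU(),
                nn.Linear(32, 64), nn.ReLU(),
            )
            # Fusion head; dropout is a crude stand-in for sparsity.
            self.fusion = nn.Sequential(
                nn.Dropout(0.5),
                nn.Linear(128, 64), nn.ReLU(),
                nn.Linear(64, n_gestures),
            )

        def forward(self, image, sensor):
            # Concatenate the two 64-dimensional feature vectors before
            # classification, so the modalities interact early.
            fused = torch.cat([self.visual(image), self.somato(sensor)], dim=1)
            return self.fusion(fused)

    # Example: one 64x64 RGB frame plus one 5-channel sensor sample.
    model = BioinspiredFusionNet()
    logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 5))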

"Our data fusion architecture has its own unique bioinspired features which include a man-made system resembling the somatosensory-visual fusion hierarchy in the brain,” said Chen Xiaodong, professors from the School of Materials Science and Engineering at NTU. “We believe such features make our architecture unique to existing approaches."

Works in poor environments

The transparent, stretchable strain sensor adheres to the skin but cannot be seen in camera images. The AI system was tested using a robot controlled through hand gestures and guided through a maze.
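For a sense of how a stream of recognized gestures might drive such a robot, the sketch below maps predicted labels to motion commands and suppresses repeats. The gesture names and command set are hypothetical; the vocabulary used in the NTU demonstration is not described in this article.

    # Hypothetical mapping from gesture labels to robot commands.
    GESTURE_TO_COMMAND = {
        "fist": "stop",
        "open_palm": "forward",
        "point_left": "turn_left",
        "point_right": "turn_right",
    }

    def drive_robot(gesture_stream, send_command):
        """Forward each newly recognized gesture to the robot,
        skipping consecutive duplicates so the robot is not
        flooded with identical commands."""
        last = None
        for gesture in gesture_stream:
            command = GESTURE_TO_COMMAND.get(gesture)
            if command and command != last:
                send_command(command)
                last = command

    # Example usage with a stand-in sender that just prints:
    drive_robot(["open_palm", "open_palm", "point_left", "fist"], print)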

Using gesture recognition, the NTU team guided the robot through the maze with zero errors, compared with six recognition errors made by a visual-based system. The system was also tested under poor conditions, including noise and unfavorable lighting, and it worked effectively in the dark with a recognition accuracy of 96.7%.

"The secret behind the high accuracy in our architecture lies in the fact that the visual and somatosensory information can interact and complement each other at an early stage before carrying out complex interpretation,” said Wang Ming, professor form the School of Materials Science & Engineering at NTU Singapore. “As a result, the system can rationally collect coherent information with less redundant data and less perceptual ambiguity, resulting in better accuracy."

Next steps include building a virtual reality and augmented reality system based on the AI for use in areas where accurate recognition and control are needed, such as entertainment technologies and home-based rehabilitation.

The full research can be found in the journal Nature Electronics.

To contact the author of this article, email PBrown@globalspec.com

