Communication barriers between American Sign Language (ASL) users and non-signers could be reduced by a wearable sign-to-speech translation system designed by researchers from Chongqing University in China and the University of California, Los Angeles (UCLA).
The device is a glove equipped with stretchable sensors and a wireless printed circuit board. Lightweight, low-cost electrically conducting yarns form the sensors, which detect finger and hand motions made to signify letters or words. Electrical signals from the sensors travel to the wrist-worn circuit board, which in turn wirelessly transmits them to a smartphone app for translation into speech.
The system, which was trained with a machine learning algorithm to recognize ASL, interprets signs from a vocabulary of 660 at a rate of about one word per second. The researchers have demonstrated a recognition rate of up to 98.63% and a recognition time of less than one second.
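The article does not describe the recognition algorithm itself. As a purely illustrative sketch, the idea of mapping per-finger sensor readings to signs can be shown with a nearest-centroid rule: each candidate sign is represented by an averaged feature vector (here, five hypothetical yarn-sensor channels, one per finger, with made-up values), and a new reading is labeled with the closest match.

```python
# Illustrative sketch only: the article does not specify the model or the
# sensor encoding. Assumes each sign yields a 5-channel feature vector
# (one stretchable-yarn sensor per finger, arbitrary units); a new reading
# is classified by its nearest stored centroid.
from math import dist

# Hypothetical averaged readings per sign (values invented for illustration).
CENTROIDS = {
    "A": (0.9, 0.8, 0.8, 0.8, 0.1),   # fingers curled, thumb extended
    "B": (0.1, 0.1, 0.1, 0.1, 0.9),   # fingers extended, thumb folded
    "L": (0.1, 0.9, 0.9, 0.9, 0.1),   # index and thumb extended
}

def classify(reading):
    """Return the sign whose centroid is closest to the sensor reading."""
    return min(CENTROIDS, key=lambda sign: dist(CENTROIDS[sign], reading))

if __name__ == "__main__":
    print(classify((0.85, 0.75, 0.80, 0.82, 0.15)))  # closest to "A"
```

In a real system, a learned classifier trained on many signers would replace the hand-set centroids, and the input would be a time series rather than a single snapshot, but the lookup-by-similarity structure is the same.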
Compared with wearable systems previously developed for this purpose, the glove-based sign-to-speech translation system is less bulky and more comfortable for the user. UCLA researchers have filed a patent for the technology, which may be commercialized once faster translation times and a more expansive vocabulary are achieved.