Researchers from Georgia Tech have developed a wearable technology that translates American Sign Language (ASL) and controls text entry or mobile apps by tracking the user's hand movements. The new system is called FingerPing.
The FingerPing system consists of a thumb ring and a watch. The thumb ring produces acoustic chirps that travel through the user's hand to receivers on the watch. Hand movements alter the sound waves along the way, and from those alterations the watch determines which command the user is issuing. The gestures don't have to be big: a simple tap of a finger or a small hand pose is sufficient.
FingerPing recognizes 22 micro finger gestures that can be programmed to different commands, such as stopping or playing music on a device with a single finger swipe or, eventually, translating ASL. The system distinguishes gestures by how sound travels along the 12 bones of the hand; for example, it can recognize the numbers 1-10 in ASL.
"The receiver recognizes these tiny differences," said Cheng Zhang, the Ph.D. student in the School of Interactive Computing who led the research. "The injected sound from the thumb will travel at different paths inside the body with different hand postures. For instance, when your hand is open there is only one direct path from the thumb to the wrist. Any time you do a gesture where you close a loop, the sound will take a different path and that will form a unique signature."
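The idea Zhang describes can be sketched as template matching: each hand pose routes the chirp along a different path, producing a distinctive frequency-response "signature" that a classifier compares against known gestures. The sketch below is a minimal, hypothetical illustration; the gesture names, signature values, and nearest-neighbor classifier are assumptions for clarity, not the paper's actual method.

```python
import math

# Hypothetical per-gesture signatures: amplitude profiles across the
# chirp's swept frequency bands. In the real system, closing a loop
# (e.g. touching thumb to a finger) changes the acoustic path and thus
# the profile. These values are illustrative only.
TEMPLATES = {
    "open_hand":    [0.9, 0.7, 0.5, 0.3],
    "thumb_index":  [0.4, 0.8, 0.6, 0.2],
    "thumb_middle": [0.2, 0.5, 0.9, 0.4],
}

def distance(a, b):
    """Euclidean distance between two signatures."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(signature):
    """Return the gesture whose stored template is nearest to the
    measured signature -- a stand-in for a real trained classifier."""
    return min(TEMPLATES, key=lambda g: distance(TEMPLATES[g], signature))

# A noisy measurement that should land closest to "thumb_index":
measured = [0.42, 0.79, 0.58, 0.22]
print(classify(measured))  # thumb_index
```

A production system would extract features from raw received audio and use a trained model rather than fixed templates, but the matching step follows the same logic: each closed-loop pose yields a signature unique enough to separate from the others.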
This system has many potential applications, but the most promising is ASL translation. Interacting with people who don't know ASL can be difficult for deaf users. Some existing systems translate ASL in real time, but they rely on bulky cameras that are cumbersome to carry around, and other people may feel uncomfortable having a camera pointed at them. Because FingerPing consists of only a thumb ring and a watch, it is far less obtrusive, making it more likely that people would actually use the translation system.
The research on this new system was presented at the 2018 ACM Conference on Human Factors in Computing Systems (CHI).