Video: AI-powered backpack helps visually impaired navigate

25 March 2021

The University of Georgia and Intel Corp. have developed an artificial intelligence (AI)-powered, voice-activated backpack that helps the visually impaired navigate and perceive the world around them.

The backpack can detect common challenges such as traffic signs, hanging obstacles, crosswalks, moving objects and changing elevations while running on a low-power, interactive device.

“Last year when I met up with a visually impaired friend, I was struck by the irony that while I have been teaching robots to see, there are many people who cannot see and need help,” said Jagadish K. Mahendran, of the Institute for AI at the University of Georgia. “This motivated me to build the visual assistance system with OpenCV’s Artificial Intelligence Kit with Depth (OAK-D), powered by Intel.”

The goal is to help the roughly 285 million people the World Health Organization (WHO) estimates are visually impaired worldwide. Currently, visual assistance systems for navigation are limited, ranging from GPS systems to smartphone apps and camera-enabled walking sticks. Intel said these systems lack the depth perception needed for independent navigation.

How it works

The system is housed inside a small backpack that contains a host computing unit. A vest jacket conceals a camera and a fanny pack holds a pocket-size battery pack capable of providing about eight hours of use. The Luxonis OAK-D spatial AI camera can be attached to a vest or fanny pack and then connected to the computing unit in the backpack. Three tiny holes in the vest provide viewports for the OAK-D that is attached to the inside of the vest.

The OAK-D camera runs on Intel’s Movidius VPU and the Intel Distribution of OpenVINO toolkit for on-chip edge AI inferencing. It can run neural networks for accelerated computer vision functions while producing a real-time depth map from its stereo camera pair and color information from a single 4K camera.
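
As a rough sketch of how such a camera is typically driven from the host, the example below builds a minimal pipeline with the Luxonis DepthAI Python library: two mono cameras feed a StereoDepth node and the resulting depth map is streamed to the backpack's computer. This is an illustrative assumption, not the project's actual code; the queue settings and the way detections are combined with depth are guesses.

import depthai as dai

# Minimal DepthAI pipeline sketch: stereo mono cameras -> depth map -> host.
pipeline = dai.Pipeline()

mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

xout_depth = pipeline.create(dai.node.XLinkOut)
xout_depth.setStreamName("depth")
stereo.depth.link(xout_depth.input)

with dai.Device(pipeline) as device:
    depth_queue = device.getOutputQueue(name="depth", maxSize=4, blocking=False)
    while True:
        depth_frame = depth_queue.get().getFrame()  # uint16 depth map, in millimeters
        # A navigation assistant would combine this depth map with object
        # detections run on the camera's Movidius VPU to locate obstacles.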

A Bluetooth-enabled earphone lets the visually impaired user interact with the system through voice queries and commands, and the system responds with verbal information. As the person moves through the environment, the system gives audible cues about common obstacles such as tree branches, pedestrians or street signs, as well as upcoming crosswalks, curbs, stairs or driveways.
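
To give a sense of the feedback loop described above, here is a small, hypothetical sketch that turns a detected obstacle and its distance into a short spoken cue using the off-the-shelf pyttsx3 text-to-speech library. The announce function, phrasing and example values are assumptions for illustration; the actual system's voice interface has not been published.

import pyttsx3

# Hypothetical sketch: convert an obstacle detection (label plus distance taken
# from the depth map) into a short spoken cue, similar to the article's examples.
engine = pyttsx3.init()

def announce(label, distance_m):
    # Keep cues short so the feedback does not lag behind the walker.
    cue = f"{label}, {distance_m:.0f} meters"
    engine.say(cue)
    engine.runAndWait()

# Example: a crosswalk detected three meters ahead.
announce("crosswalk ahead", 3.0)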

To contact the author of this article, email PBrown@globalspec.com



Discussion – 1 comment

Re: Video: AI-powered backpack helps visually impaired navigate
#1
2021-Apr-06 5:05 PM

This could be a very good thing for the visually impaired. One thing with the voice instructions would be to keep commands as short as possible. For example, saying "2 o'clock" takes up a lot of valuable time when there are a lot of things to describe in front of you. Having the ability to make up your own descriptive words/sounds/letters and put them into the feedback instructions/commands could help in building a good command language.
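
As a quick illustration of the commenter's idea, the snippet below maps a detection's horizontal bearing to a clock-face token; the function and the cue vocabulary are hypothetical, intended only to show how a user-defined, very short command language might plug into the feedback loop.

def clock_direction(bearing_deg):
    # Hypothetical helper: map a horizontal bearing (-90..90 degrees, 0 = straight
    # ahead) to a clock-face cue, one short token instead of a full sentence.
    hour = round(bearing_deg / 30) % 12  # 30 degrees per clock hour
    return "12" if hour == 0 else str(hour)

# Example: an obstacle 35 degrees to the right is announced simply as "1".
print(clock_direction(35))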
