Robots scare, mystify and entertain. They lend us a hand (or arm, as the case may be), eyes and mobility that make our work more efficient, precise and safe. Robots are on the front line of many jobs too dangerous for humans, and they perform tasks that mirror each of the five human senses. Sensors are especially adept at giving robots touch, hearing, vision and movement, with algorithms that perceive the environment and provide feedback for ever-greater accuracy and performance.
The Ins and Outs of Accuracy
Robots excel at repetitive motion, gripping and following a specific path or track within a controlled lab or factory environment. A major challenge for most robots, however, is operating in settings where they must perceive and adapt. To this end, a rapidly growing segment of robotics and sensing, artificial intelligence (AI) and in particular deep learning, is designed to mimic the neural circuits of the human brain.
Deep learning, whereby layers of artificial neurons process overlapping raw sensory data, enables robots to become proficient at recognizing patterns and categories based on the vast amount of input they experience. Siri, Google Street View and IBM’s Jeopardy-winning Watson are well-known examples.
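As a rough illustration of that layered processing, the sketch below (hypothetical code, not drawn from Siri, Street View or Watson) pushes one raw sensor vector through two small layers of artificial neurons and returns category scores; the layer sizes and random weights are placeholder assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)             # simple neuron activation

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()                    # turn scores into probabilities

# Hypothetical sizes: 64 raw sensor readings, 16 hidden neurons, 3 categories.
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.normal(size=(16, 64)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(3, 16)), np.zeros(3)

def classify(sensor_readings):
    """Map one raw sensor vector to probabilities over learned categories."""
    hidden = relu(W1 @ sensor_readings + b1)   # first layer extracts features
    return softmax(W2 @ hidden + b2)           # second layer scores the categories

print(classify(rng.normal(size=64)))  # near-uniform until the weights are trained
```

In a real system the weight matrices would be learned from large amounts of labeled sensor data rather than left at random values.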
Watson is to AI-based robotics what the iPhone is to mobile communications. It was the first cognitive system to go head-to-head against two of the greatest human Jeopardy champions. Watson “learned” such natural language nuances as puns, synonyms, homonyms and slang. Without a connection to the Internet during play, or the questions in advance, the robot responded based on what it “knew,” drawing from a large set of unstructured data. Machine learning, statistical analysis and natural language processing were harnessed so that Watson could generate possible responses and rank its confidence in the accuracy of each. The whole process took a whopping three seconds.
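That candidate-and-confidence step can be pictured with a toy sketch like the one below; the word-overlap scoring function is a stand-in invented for illustration, not IBM’s actual statistical machinery.

```python
# Toy candidate-ranking step: score each candidate response against available
# evidence and return the list ordered by confidence.
def rank_candidates(candidates, score_fn):
    return sorted(((score_fn(c), c) for c in candidates), reverse=True)

evidence = {"author", "novel", "1851", "whale"}
score = lambda c: len(evidence & set(c.lower().split())) / len(evidence)

for confidence, answer in rank_candidates(
        ["Herman Melville, author of the 1851 whale novel", "Mark Twain"], score):
    print(f"confidence {confidence:.2f}: {answer}")
```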
IBM has not stopped with Watson. In August 2014, the company introduced SyNAPSE, a cognitive chip with the potential to transform mobility and solve challenges in vision, audio and multi-sensory fusion at incredibly low power levels. Designed to integrate brain-like functions into devices where computation is constrained by power and speed, the chip was created using a brain-inspired computer architecture powered by 1 million neurons and 256 million synapses. At 5.4 billion transistors and an on-chip network of 4,096 neurosynaptic cores, it is the largest chip built by IBM so far, yet it only consumes 70mW—an important factor for its intended use in distributed sensing and supercomputing.
Today’s robots learn motor tasks through trial and error, the realm of AI. UC Berkeley, for example, has developed reinforcement learning techniques for robots so that a machine can complete a variety of tasks without pre-programmed details. The same software that encodes how the robot learns is used for every task given to it, eliminating constant reprogramming.
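The trial-and-error idea can be sketched with a minimal tabular Q-learning loop; this is a generic illustration with a made-up toy task and rewards, not Berkeley’s actual software.

```python
import random

def q_learn(env, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Generic trial-and-error learner: the same code works for any task (env)."""
    Q = {}  # (state, action) -> estimated long-term value
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            if random.random() < eps:
                a = random.choice(acts)                 # explore
            else:                                       # exploit, breaking ties randomly
                a = max(acts, key=lambda a: (Q.get((s, a), 0.0), random.random()))
            s2, reward, done = env.step(s, a)
            best_next = max(Q.get((s2, a2), 0.0) for a2 in env.actions(s2))
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
            s = s2
    return Q

class Corridor:
    """Toy task: step right along five cells to reach a goal reward."""
    def reset(self): return 0
    def actions(self, s): return ["left", "right"]
    def step(self, s, a):
        s2 = max(0, s - 1) if a == "left" else min(4, s + 1)
        return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

Q = q_learn(Corridor())
print(max(["left", "right"], key=lambda a: Q.get((0, a), 0.0)))  # prints "right"
```

Swapping in a different environment class, with different states, actions and rewards, requires no change to the learning code itself, which is the point the Berkeley work makes.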
Driving Sensors
It may seem like sensor-based robotic and autonomous vehicle development is new. Not so; it was a full 10 years ago that Stanley, the Stanford University robot, won The Grand Challenge launched by DARPA. The competition, which Stanford entered with experts from Volkswagen of America, Mohr Davidow Ventures, Intel and others, was a test of high-speed road finding, obstacle detection and avoidance in desert terrain. Wheel speed, steering angle and GPS data were sensed automatically and communicated via a CAN bus interface to a computer system. No manual intervention was allowed; the robots had to drive themselves, and Stanley did.
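A sensor-to-computer hookup of that kind can be pictured as decoding raw CAN frames into named signals; the message IDs, byte layouts and scale factors below are invented for illustration and are not Stanley’s actual bus definitions.

```python
import struct

# Hypothetical CAN frame decoders: map a CAN ID to a signal name and a function
# that converts the raw payload bytes into engineering units.
DECODERS = {
    0x101: ("wheel_speed_kmh",    lambda d: round(struct.unpack("<H", d[:2])[0] * 0.01, 2)),
    0x102: ("steering_angle_deg", lambda d: round(struct.unpack("<h", d[:2])[0] * 0.1, 2)),
}

def decode_frame(can_id, data):
    """Turn one raw CAN frame into a (signal_name, value) pair, if the ID is known."""
    if can_id in DECODERS:
        name, convert = DECODERS[can_id]
        return name, convert(data)
    return None

print(decode_frame(0x101, struct.pack("<H", 2540)))  # ('wheel_speed_kmh', 25.4)
print(decode_frame(0x102, struct.pack("<h", -125)))  # ('steering_angle_deg', -12.5)
```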
All major car manufacturers are currently testing some form of autonomous car. However, regulatory hurdles still exist, and customers, other than early adopters who insist on bragging rights, will not buy the cars until they are sure they are safe.
Just go to CES, held every January, and soak in the new sensor-based autonomous automotive technologies. Advances in processors and sensors are resulting in the widespread integration of computer vision. Object recognition is moving from identifying objects by small features such as edges and corners to training robots to become proficient at obstacle identification using advanced machine vision.
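The older, feature-based approach can be illustrated with a hand-coded Sobel edge map; the tiny sketch below (illustrative only) computes exactly the kind of low-level edge feature that earlier recognition pipelines relied on and that trained vision models now subsume.

```python
import numpy as np

def sobel_edges(img):
    """Hand-coded low-level feature: gradient magnitude (edge strength) per pixel."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient
    h, w = img.shape
    mag = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            mag[y, x] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return mag

img = np.zeros((8, 8))
img[:, 4:] = 1.0                       # a bright right half: one vertical edge
print(np.round(sobel_edges(img), 1))   # strong response only along the edge columns
```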
An example of the latter is MIT’s Cheetah robot, which can run 29 mph and also jump. The second iteration of the robot “sees” with an onboard LIDAR system that uses laser reflections to map the terrain. New algorithms enable it to take real-time obstacle detection data from the laser sensor and use the information to gauge an obstacle’s distance and height. Another algorithm then adjusts the robot for a jump that will clear the obstacle safely.
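A simplified version of that distance-and-height step is sketched below: it converts (angle, range) returns from a forward-facing planar scan into a distance ahead and a height above ground. The geometry, thresholds and numbers are assumptions for illustration, not MIT’s algorithms.

```python
import math

def obstacle_from_scan(scan, ground_z=0.05):
    """Estimate (distance ahead, height) of the nearest obstacle from LIDAR returns."""
    # Each return is (angle above horizontal in rad, measured range in m).
    points = [(r * math.cos(a), r * math.sin(a)) for a, r in scan]  # (forward x, height z)
    hits = [(x, z) for x, z in points if x > 0 and z > ground_z]    # ignore ground returns
    if not hits:
        return None
    distance = min(x for x, _ in hits)   # nearest obstacle point
    height = max(z for _, z in hits)     # tallest point the jump must clear
    return distance, height

# Simulated scan: the level beam sees distant ground, the raised beams hit something ~2 m out.
scan = [(0.00, 3.00), (0.05, 2.02), (0.10, 2.05), (0.15, 2.10)]
print(obstacle_from_scan(scan))  # roughly (2.0 m away, 0.31 m tall)
```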
To see just how fast autonomous vehicles might catch on, the executive council of Dutch ministers approved the running of two driverless shuttles in the Dutch city of Wageningen beginning in December 2015. Each will carry up to eight riders 6 km from a train station to the university on public roads at up to 50 km/h. The shuttles will operate completely autonomously, without safety drivers present, and will be monitored remotely.
Following the Money
There are two important trends aiding the growth of robot use and, by extension, the ongoing and rapid advancement of sensor technology. In 2013, a robot rang the closing bell at NASDAQ, highlighting the creation of the first robotics stock index. ROBO-STOX attracted $54 million in just 2.5 months, invested across 77 stocks globally. And the companies are not just automotive related; electronics robotics first gained on their automotive cousins and then passed them in 2013.
The second trend is reshoring—finding ways to be more competitive in manufacturing through the use of robots. There is growing evidence that the U.S. is reversing the loss of manufacturing and its associated jobs. Replacing overseas workers with cost-effective factory robots in the U.S. is passing the break-even point.
These safe and simple robots are collaborative and cheap. A robot made by Rethink Robotics, for example, costs $22,000, has shed its cage, and can be trained to work alongside its human counterparts. As robots shrink, they can handle more complex and intricate tasks. Cost is the key to collaborative robots: with base models selling at such a low price, small and medium-size businesses can get into the game for the first time.
So, if robots are less expensive than human workers offshore, will it mean a net loss of jobs in the U.S.?
Estimates to date suggest that replacing employees with robots will result in a manufacturing workforce that is a few million workers smaller within the next 10 years. However, expectations are that factory payrolls will increase, based on an expanding U.S. economy and the growing inclination of manufacturers to relocate at least some production back to the U.S.
They Are Not Perfect
Advances continue rapidly. Miniature sensors are offering greater precision and more stable detection points even at fluctuating temperatures, and they are yielding better results in detecting very small targets invisible to larger sensors. Presence sensing is used in robotics to implement dynamic collision avoidance, and contrast sensors are helping robots locate parts or avoid unwanted collisions without conventional vision systems, as sketched below.
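As one way presence sensing can feed dynamic collision avoidance, the function below scales a robot’s commanded speed by the distance a proximity sensor reports; the distances and speeds are illustrative values, not figures from any particular product.

```python
def safe_speed(distance_m, max_speed=1.0, stop_at=0.3, slow_from=1.5):
    """Scale commanded speed by how close the presence sensor says an object is."""
    if distance_m <= stop_at:
        return 0.0                    # inside the protective zone: halt
    if distance_m >= slow_from:
        return max_speed              # nothing nearby: full speed
    # Linear ramp between the protective zone and the slow-down distance.
    return max_speed * (distance_m - stop_at) / (slow_from - stop_at)

for d in (2.0, 1.0, 0.5, 0.2):
    print(f"object at {d:.1f} m -> command {safe_speed(d):.2f} m/s")
```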
Still, for all of the progress to date, accidents occur. Recently, a robot at a Volkswagen plant in Germany killed a 21-year-old contract technician who was installing it. He was struck in the chest by the robot and pressed against a metal plate, dying from his injuries. A Volkswagen spokesman stressed that the robot was not a new-generation lightweight collaborative robot of the kind that works side-by-side with workers on the production line without safety cages.
Across the proverbial board, it will be the combination of integration, AI for smarter diagnostics and higher-level robotic languages, coupled with exactly the right application interface, that will ensure continued innovation.