Writing computer code is essential for programming a robot. Or at least it has been until now.
A team at the University of California, Berkeley, is using artificial intelligence and deep reinforcement learning techniques to make industrial robots teachable in a decidedly more human way. The idea involves using a VR headset to virtually guide a robot through a task, much as one might move the arms of a puppet, and then letting the robot take it from there.
The new project draws on the groundwork laid by BRETT, a robot that represented a deep reinforcement learning breakthrough for UC Berkeley in 2015. The acronym, believe it or not, stands for Berkeley Robot for the Elimination of Tedious Tasks.
“Right now, if you want to set up a robot, you program that robot to do what you want it to do, which takes a lot of time and a lot of expertise,” said Pieter Abbeel, a professor of electrical engineering and computer science who is currently on leave to work on turning his vision into reality. Along with three of his students, Abbeel has launched a startup, Embodied Intelligence Inc., which has so far raised $7 million in seed funding.
“With our advances in machine learning, we can write a piece of software once — machine learning code that enables the robot to learn — and then when the robot needs to be equipped with a new skill, we simply provide new data,” Abbeel said.
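The article doesn't publish the team's code, but the idea Abbeel describes — write the learning software once, then define each new skill purely by data — can be sketched in a few lines. In this hypothetical example, a linear policy is fit by least squares; the function names and the two toy "skills" are illustrative assumptions, not the startup's actual method:

```python
import numpy as np

def train_policy(observations, actions):
    """The learning code, written once: fit a linear policy a = o @ W
    by least squares. Each new skill is just a new dataset."""
    W, *_ = np.linalg.lstsq(observations, actions, rcond=None)
    return W

def run_policy(W, observation):
    """Apply the learned policy to a single observation."""
    return observation @ W

# Two different "skills", each defined only by demonstration data
# (synthetic here for illustration) -- the training code is unchanged.
rng = np.random.default_rng(0)
obs = rng.normal(size=(200, 4))  # e.g. tracked gripper and object state
reach = obs @ np.array([[1.0], [0.5], [0.0], [0.0]])   # skill 1 targets
grasp = obs @ np.array([[0.0], [0.0], [2.0], [-1.0]])  # skill 2 targets

W_reach = train_policy(obs, reach)
W_grasp = train_policy(obs, grasp)
```

The point of the sketch is the division of labor: `train_policy` never changes, and equipping the robot with a new skill means collecting a new `(observations, actions)` dataset.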
That data is akin to the sort of training you’d provide to a human worker, and the time savings are significant. A robot can be trained in a day, in contrast to the weeks or months typically required to write new computer code for reprogramming.
“Since the robot simply mimics the hand motion that’s tracked by VR, a person without any special training can make the robot do the right thing right from the beginning,” said Peter Chen, one of Abbeel’s students. “The robot will keep learning and after a while the robot says, ‘I got this, I can do this task on my own now.’”
“It completely changes the turnaround time because the amount of data you need is relatively small,” added Abbeel. “You might only need a day of demonstrations from humans to have enough data for a robot to acquire the skill.”
As demonstrated in a recently published paper, the team used a $1,000 VR headset and hand-tracking software to train a robot to coordinate its arms with its vision and learn skills as complex as inserting a peg into a hole.
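The workflow the paper describes — a human's VR-tracked hand motions are recorded as state–action pairs, and the robot then imitates them — is essentially imitation learning. Here is a minimal, self-contained sketch using a nearest-neighbor policy on synthetic data; the class name, the fake peg poses, and the operator model are all hypothetical stand-ins for the real tracked demonstrations:

```python
import numpy as np

class ImitationPolicy:
    """Toy imitation learner: store (state, action) pairs recorded while
    a human teleoperates the robot in VR, then replay the action whose
    recorded state is closest to the current one."""

    def __init__(self):
        self.states, self.actions = [], []

    def record(self, state, action):
        # One timestep of a human demonstration.
        self.states.append(np.asarray(state, dtype=float))
        self.actions.append(np.asarray(action, dtype=float))

    def act(self, state):
        # Mimic the demonstrator: act as they did in the nearest recorded state.
        state = np.asarray(state, dtype=float)
        dists = [np.linalg.norm(state - s) for s in self.states]
        return self.actions[int(np.argmin(dists))]

# Fake a session of demonstrations: for each observed peg pose, the
# operator (simulated here) moves the gripper toward it.
rng = np.random.default_rng(1)
policy = ImitationPolicy()
for _ in range(100):
    peg = rng.uniform(-1.0, 1.0, size=3)  # observed peg position
    policy.record(peg, -peg)              # operator's corrective motion
```

A real system would use a learned function approximator and far richer state, but the loop is the same: demonstrations in, mimicking policy out, no task-specific code written.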
“When we perform a task, we do not solve complex differential equations in our head,” observed Rocky Duan, another of Abbeel’s students. “Instead, through interactions with the physical world, we acquire rich intuitions about how to move our body, which would be otherwise impossible to represent using computer code.”
The technique can work with robots already in warehouses and manufacturing plants around the world.
“This is an amazing capability that we just developed here at UC Berkeley, and we decided we should put this into the world and empower companies still using techniques that are many years behind what is currently possible,” Abbeel said. “This will democratize access to robotic automation.”