Like toddlers learning to walk, robots need a little help as they learn to function in the physical world. A Rice University program gently guides robots toward the most helpful, human-like ways to collaborate on tasks.
Rice engineer Marcia O’Malley and graduate student Dylan Losey have refined their method to train robots by applying gentle physical feedback to machines while they perform tasks. The goal is to simplify the training of robots expected to work efficiently with humans.
"Historically, the role of robots was to take over the mundane tasks we don't want to do: manufacturing, assembly lines, welding, painting," said O'Malley, a professor of mechanical engineering, electrical and computer engineering and computer science. "As we become more willing to share personal information with technology, like the way my watch records how many steps I take, that technology moves into embodied hardware as well.”
"Robots are already in our homes vacuuming or controlling our thermostats or mowing the lawn," she said. "There are all sorts of ways technology permeates our lives. I already talk to Alexa in the kitchen, so why not also have machines we can physically collaborate with? A lot of our work is about making human-robot interactions safe."
According to the researchers, robots adapted to respond to physical human-robot interaction (pHRI) traditionally treat these interactions like disturbances and resume their original behaviors when the interactions end. The Rice researchers have enhanced pHRI with a method that allows humans to physically adjust a robot’s trajectory in real time.
At the heart of the program is the concept of impedance control, which is literally a way to manage what happens when push comes to shove. A robot that allows for impedance control through physical input adjusts its programmed trajectory to respond but returns to its initial trajectory when the input ends.
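Classic impedance control can be pictured as a virtual spring and damper tethering the robot to its planned path: a human push displaces it, and when the push ends the controller pulls it straight back. The sketch below illustrates that behavior; the gains, unit mass, and one-dimensional setup are illustrative choices, not values from the Rice work.

```python
import numpy as np

def impedance_force(x, x_des, v, v_des, stiffness=50.0, damping=5.0):
    """Virtual spring-damper pulling the robot toward its desired
    trajectory. Gains are illustrative, not from the paper."""
    return stiffness * (x_des - x) + damping * (v_des - v)

# A human push displaces the robot; the controller resists and, once
# the push ends, drives the robot back to its original trajectory.
x, v = np.array([0.0]), np.array([0.0])          # actual state
x_des, v_des = np.array([0.0]), np.array([0.0])  # desired state (fixed)
dt = 0.01
for step in range(500):
    push = np.array([10.0]) if step < 100 else np.array([0.0])
    f = impedance_force(x, x_des, v, v_des) + push
    v = v + f * dt   # unit mass, semi-implicit Euler step
    x = x + v * dt
# After the push ends, x decays back toward x_des, i.e. toward 0.
```

The key limitation the article describes is visible here: the desired trajectory `x_des` never changes, so the robot treats the push as a disturbance to be rejected rather than information to learn from.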
The Rice algorithm builds on this concept by allowing a robot to adjust its path beyond the input and calculate a new route to its goal, much as a GPS system recalculates the route to a destination when a driver misses a turn.
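One way to picture this "deforming" of the plan: a correction applied at the current waypoint is propagated into future waypoints with a smoothly decaying weight, so the path bends toward the human's input instead of snapping back, while the goal itself stays fixed. The cosine decay profile below is an illustrative choice, not the exact deformation used in the Rice paper.

```python
import numpy as np

def deform_trajectory(waypoints, t_idx, correction, horizon=10):
    """Propagate a human correction at waypoint t_idx into the robot's
    *future* waypoints with a smoothly decaying weight. The decay
    profile is illustrative, not the paper's deformation matrix."""
    traj = waypoints.copy()
    n = len(traj)
    for k in range(horizon):
        i = t_idx + k
        if i >= n - 1:   # never move the goal waypoint
            break
        weight = 0.5 * (1 + np.cos(np.pi * k / horizon))  # 1 -> 0
        traj[i] = traj[i] + weight * correction
    return traj

# Straight-line plan from 0 to 1; a push at waypoint 5 lifts the path.
plan = np.linspace(0.0, 1.0, 20)
new_plan = deform_trajectory(plan, t_idx=5, correction=0.3)
# new_plan[5] takes the full correction, later waypoints take
# progressively less, and the goal new_plan[-1] is unchanged.
```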
Losey spent the majority of last summer in the lab of Anca Dragan, an assistant professor of electrical engineering and computer sciences at the University of California, Berkeley, testing his theory. He and other students trained a robot arm and hand to deliver a coffee cup across a desktop and then used enhanced pHRI to keep it away from a computer keyboard, but also low enough so the cup would not break if dropped.
The goal was to deform the robot’s programmed trajectory through physical interaction.
"Here the robot has a plan, or desired trajectory, which describes how the robot thinks it should perform the task," Losey wrote in an essay about the Berkeley experiments. "We introduced a real-time algorithm that modified, or deformed, the robot's future desired trajectory."
In impedance mode, the robot consistently returned to its original trajectory after an interaction. In learning mode, the feedback altered the robot’s state at the time of interaction and also how it proceeded to the goal, according to Losey. If the user directed it to keep the cup from passing over the keyboard, for instance, it would continue to do so in the future.
"By our replanning the robot's desired trajectory after each new observation, the robot was able to generate behavior that matches the human's preference," said Losey.
Ten Rice students further tested the approach with the O'Malley lab's rehabilitative force-feedback robot, the OpenWrist, using it to maneuver a cursor around obstacles on a computer screen and land on a blue dot. The tests first used standard impedance control and then impedance control with physically interactive trajectory deformation, an analog of pHRI that allowed students to train the device to learn new trajectories.
The results showed trials with trajectory deformation were physically easier and required significantly less interaction to achieve the goal. The experiments demonstrated that interactions can program otherwise-autonomous robots that have several degrees of freedom, in this case flexing an arm and rotating a wrist.
One current limitation is that pHRI cannot yet modify the amount of time it takes a robot to perform a task, but addressing this is on the Rice team's agenda.
"The paradigm shift in this work is that instead of treating a human as a random disturbance, the robot should treat the human as a rational being who has a reason to interact and is trying to convey something important," Losey said. "The robot shouldn't just try to get out of the way. It should learn what's going on and do its job better."
The paper on this research was published in IEEE Xplore.