Collaborative robots, or cobots, will take on many more tasks in the near future. These devices already work alongside humans in manufacturing environments and are expected to play an increasingly large role in areas like personal care in hospitals and nursing homes.
A robot has to collaborate safely and effectively for a human to feel comfortable working alongside the machine. Such robots are adapted to respond to physical human-robot interaction (pHRI). If a human nudges a robot – often an arm – while the robot is moving, the robot will change its trajectory slightly to accommodate the move, but it will “forget” this change the next time it performs the same operation.
Dylan Losey, a Rice University graduate student, sees this sort of interaction, known as impedance control, as an opportunity for the human to teach the robot, helping to shape its movement in real time. Working with Anca Dragan at the University of California, Berkeley, he is developing an impedance control algorithm that allows the robot to calculate a new way to achieve its goal after a human operator corrects its movement.
The initial test for Losey and other students in the Berkeley lab was teaching a robotic arm and hand to carry a cup of coffee across a desk without spilling the contents onto a computer keyboard, while keeping the cup low enough that it would not break if dropped. The experiment tested the robotic arm’s response to physical input from the experimenters.
“Here the robot has a plan, or desired trajectory, which describes how the robot thinks it should perform the task,” Losey wrote in an essay about the Berkeley experiments. “We introduced a real-time algorithm that modified, or deformed, the robot’s future desired trajectory.”
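The article does not give the algorithm itself, but the idea of deforming the robot’s future desired trajectory in response to a human push can be sketched in a few lines. In this illustrative example the trajectory is a discretized list of waypoints, and a sensed human correction is propagated over a short horizon of future waypoints with a smoothly decaying weight so the deformed plan rejoins the original path. The gain, horizon, and cosine decay profile are assumptions for illustration, not the authors’ published formulation:

```python
import numpy as np

def deform_trajectory(waypoints, t_now, u_human, gain=0.1, horizon=20):
    """Deform the future portion of a desired trajectory in response
    to a human correction.

    waypoints : (N, d) array of desired positions
    t_now     : index of the current waypoint
    u_human   : (d,) sensed human correction at the current instant
    gain      : illustrative scaling of how strongly the push deforms the plan
    horizon   : number of future waypoints the deformation propagates over
    """
    deformed = waypoints.copy()
    end = min(t_now + horizon, len(waypoints))
    for k in range(t_now, end):
        # Smoothly decaying weight: full deformation at the current
        # waypoint, fading to zero at the edge of the horizon so the
        # deformed plan blends back into the original trajectory.
        w = 0.5 * (1 + np.cos(np.pi * (k - t_now) / horizon))
        deformed[k] = deformed[k] + gain * w * np.asarray(u_human)
    return deformed
```

Past waypoints are untouched, the correction is strongest where the human pushed, and its influence tapers off, which matches the intuition of a real-time deformation that modifies only the robot’s future plan.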
In learning mode – the condition in which Losey tested the new software – the robot adopted the new trajectory produced by the physical corrections the experimenters gave it on its first attempt at the task. Continued tweaks to the arm and hand yielded further motion refinements that the experimenters “nudged” into the program.
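What distinguishes this learning mode from plain impedance control is bookkeeping: the deformed trajectory is committed as the new desired trajectory, so the next repetition starts from the corrected plan instead of the original one. A minimal, hypothetical sketch of that idea (the class and method names are mine, and the uniform-nudge deformation is a deliberate simplification, not the paper’s method):

```python
import numpy as np

class CorrectableTask:
    """Keeps a desired trajectory and folds human corrections back
    into it, so each repetition starts from the corrected plan."""

    def __init__(self, waypoints):
        self.desired = np.asarray(waypoints, dtype=float).copy()

    def apply_correction(self, t_now, u_human, gain=0.1):
        # Simplest possible deformation: nudge every future waypoint by
        # the sensed correction. Committing the result to self.desired
        # is what makes the correction persist; a pure impedance
        # controller would discard it after the current trial.
        self.desired[t_now:] += gain * np.asarray(u_human, dtype=float)

    def plan_for_next_attempt(self):
        return self.desired.copy()
```

Repeated corrections accumulate, mirroring how continued tweaks to the arm and hand refine the motion over successive attempts.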
Additional experiments at Rice, in Marcia O’Malley’s lab, asked a force-feedback robot dubbed the OpenWrist to move a cursor around obstacles on a computer screen and land on a blue dot. These tests demonstrated that the otherwise-autonomous OpenWrist can also work well with the learning-mode software.
Future research will address the amount of time the robot needs to perform a task.
“The paradigm shift in this work is that instead of treating a human as a random disturbance, the robot should treat the human as a rational being who has a reason to interact and is trying to convey something important,” Losey said. “The robot shouldn’t just try to get out of the way. It should learn what’s going on and do its job better.”
The research, funded by the National Science Foundation, has been published in the Proceedings of Machine Learning Research and in an essay Losey published on Rice’s Mechatronics and Haptic Interfaces Lab’s website.