Roboticists are trying to teach robots how to learn in the same way babies do – by exploring their movements, grabbing items, pushing, and imitating.
University of Washington developmental psychologists and computer scientists have demonstrated that robots can "learn" much like a child: by gaining knowledge through exploration, watching a human perform a task, and then determining how best to carry out that task on their own.
"You can look at this as a first step in building robots that can learn from humans in the same way that infants learn from humans," says Rajesh Rao, a UW professor of computer science and engineering. "If you want people who don't know anything about computer programming to be able to teach a robot, the way to do it is through demonstration – showing the robot how to clean your dishes, fold your clothes, or do household chores. But to achieve that goal, you need the robot to be able to understand those actions and perform them on its own."
In their paper, the UW researchers present a new probabilistic model aimed at solving a common robotics problem – building robots that learn by observation and imitation.
The roboticists worked with UW psychology professor Andrew Meltzoff, whose research has shown that children as young as 18 months can predict the goal of an adult's actions and develop alternate ways of reaching that goal themselves.
Children develop these intention-reading skills in part through self-exploration, which lets them work out how their actions affect objects.
"Babies engage in what looks like mindless play, but this enables future learning. It's a baby's secret sauce for innovation," says Meltzoff. "If they're trying to figure out how to work a new toy, they're actually using knowledge they gained by playing with other toys. During play they're learning a mental model of how their actions cause changes in the world. And once you have that model you can begin to solve novel problems and start to predict someone else's intentions."
Drawing on these findings about infant learning, the team developed machine learning algorithms that let a robot explore how its own actions produce different outcomes. The robot then uses that learned probabilistic model to infer what a human wants it to do and to complete the task. The team is even working on getting a robot to "ask" for help if it can't quite carry out a task.
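The two-stage idea described above – learn an action-to-outcome model through self-exploration, then invert it to infer intent from an observed outcome – can be sketched in a few lines. This is a hypothetical, highly simplified toy (the action names, outcome labels, and probabilities are invented for illustration, not taken from the researchers' system):

```python
import random
from collections import defaultdict

random.seed(0)

# Toy "world": each robot action yields outcomes with fixed probabilities
# that are unknown to the robot until it explores.
TRUE_DYNAMICS = {
    "push":  [("object moved", 0.7), ("nothing happened", 0.3)],
    "grasp": [("object held", 0.8), ("nothing happened", 0.2)],
}

def sample_outcome(action):
    """Draw one outcome of an action from the world's true distribution."""
    r, acc = random.random(), 0.0
    for outcome, p in TRUE_DYNAMICS[action]:
        acc += p
        if r < acc:
            return outcome
    return TRUE_DYNAMICS[action][-1][0]

# Stage 1 – self-exploration: estimate P(outcome | action) by tallying
# what happens over many trials of each action.
counts = defaultdict(lambda: defaultdict(int))
for action in TRUE_DYNAMICS:
    for _ in range(1000):
        counts[action][sample_outcome(action)] += 1

def p_outcome_given_action(outcome, action):
    total = sum(counts[action].values())
    return counts[action][outcome] / total

# Stage 2 – imitation: having watched a human achieve some outcome, the
# robot picks the action most likely to reproduce it under its own model.
def infer_action(observed_outcome):
    return max(counts, key=lambda a: p_outcome_given_action(observed_outcome, a))

print(infer_action("object moved"))  # push
print(infer_action("object held"))   # grasp
```

The key point the sketch illustrates is that nothing tells the robot which action the human used – it recovers that by asking which of its own actions best explains the outcome it observed.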
The team conducted two robot tests using this method of learning: a computer simulation experiment in which a robot learns to follow a human's gaze, and another in which an actual robot learns to imitate human actions involving moving toy food objects to different areas on a tabletop.
"If the human pushes an object to a new location, it may be easier and more reliable for a robot with a gripper to pick it up to move it there rather than push it," says Michael Jae-Yoon Chung, a UW doctoral student in computer science and engineering. "But that requires knowing what the goal is, which is a hard problem in robotics."
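Chung's point is that the robot imitates the goal, not the motion: it chooses whichever of its own actions most reliably reaches the demonstrated end state. A hypothetical sketch, with made-up action names and success rates standing in for a model learned through exploration:

```python
# Goal-level imitation: the human pushed the object to location B, but the
# robot asks which of ITS OWN actions most reliably leaves the object at B.
# These reliability numbers are invented for illustration.
robot_model = {
    "push_to_B":       0.55,  # pushing is imprecise for this gripper
    "pick_place_at_B": 0.90,  # grasping and placing is more reliable
}

def imitate_goal(model):
    """Pick the robot action most likely to reproduce the observed goal."""
    return max(model, key=model.get)

# The human pushed; the robot chooses to pick-and-place instead.
print(imitate_goal(robot_model))  # pick_place_at_B
```

Copying the action verbatim ("push_to_B") would succeed only 55% of the time here; inferring the goal first lets the robot pick the more reliable means.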
Initial experiments involved learning how to understand goals and imitate simple behaviors. Next, the team plans to explore how they can use these models to get robots to learn more complicated tasks.
"Babies learn through their own play and by watching others," says Meltzoff, "and they are the best learners on the planet – why not design robots that learn as effortlessly as a child?"