A team at the University of Hertfordshire in the UK has developed a new concept called Empowerment, in order to help robots protect humans and keep themselves safe.
Robots are quickly becoming common in households and workplaces, where they interact with humans in unpredictable situations. Self-driving cars, for example, must keep the driver and other occupants safe while also protecting the car itself from damage. Current development trends point toward robots becoming the new caretakers for the elderly. If this becomes reality, robots will need to adapt to complex situations and respond to their owners' needs.
Prominent figures such as Stephen Hawking have recently warned about the dangers of artificial intelligence (AI), sparking a public discussion on the morality of robotics.
The idea of "intelligent" machines, like robots, turning on their human owners is not new. In 1942, the science fiction writer Isaac Asimov proposed three laws to govern how robots interact with humans. These laws state that a robot must not harm a human or, through inaction, allow a human to come to harm; that a robot must obey orders from humans; and that a robot must protect its own existence, as long as doing so does not involve harming a human.
The laws are well-intentioned, but they are open to misinterpretation, especially because robots do not understand nuanced and ambiguous human language. Asimov's own stories are full of examples in which robots misinterpret the laws, with disastrous consequences.
One problem with these laws is that the concept of "harm" is difficult to define clearly and is highly context-specific. If a robot does not understand "harm," it is almost impossible for it to avoid causing it. "We realized that we could use different perspectives to create 'good' robot behavior, broadly in keeping with Asimov's laws," says Christoph Salge, one of the scientists behind the study.
To sidestep this problem, the team developed a concept called Empowerment. Rather than trying to make the robot understand complex ethical questions, the idea is that the robot should always seek to keep its options open. "Empowerment means being in a state where you have the greatest potential influence on the world you can perceive," explains Salge. "So, for a simple robot, this might be getting safely back to its power station, and not getting stuck, which would limit its options for movement. For a more futuristic, human-like robot this would not just include movement, but could incorporate a variety of parameters, resulting in more human-like drives."
The team coded the Empowerment concept mathematically so that a robot can adopt it. The researchers originally developed Empowerment in 2005; a recent key development expanded the concept so that the robot also seeks to maintain a human's Empowerment. "We wanted the robot to see the world through the eyes of the human with which it interacts," explains Daniel Polani, professor of artificial intelligence at Hertfordshire. "Keeping the human safe consists of the robot acting to increase the human's own Empowerment."
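In the researchers' information-theoretic formulation, Empowerment is the channel capacity between an agent's actions and its future sensor states. The article does not include their code, but as a rough illustration (an assumption, not the study's implementation): in a deterministic, fully observed gridworld, n-step Empowerment collapses to the logarithm of the number of distinct states reachable in n steps, so a robot stuck in a corner is measurably less "empowered" than one in the open.

```python
# Minimal sketch of n-step Empowerment in a hypothetical deterministic
# 5x5 gridworld. With deterministic, noiseless dynamics, the channel
# capacity from action sequences to outcomes is simply log2 of the
# number of distinct reachable states.
import math
from itertools import product

GRID_W, GRID_H = 5, 5
ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0), "stay": (0, 0)}

def step(state, action):
    """Apply one action; moves off the grid leave the state unchanged."""
    x, y = state
    dx, dy = ACTIONS[action]
    nx, ny = x + dx, y + dy
    if 0 <= nx < GRID_W and 0 <= ny < GRID_H:
        return (nx, ny)
    return (x, y)

def empowerment(state, n):
    """log2 of the number of distinct states reachable in n steps."""
    reachable = set()
    for seq in product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))

# A corner limits the robot's options more than the center does:
print(empowerment((0, 0), 2))  # corner: 6 reachable states
print(empowerment((2, 2), 2))  # center: 13 reachable states
```

A robot maximizing this quantity would avoid corners and dead ends without any explicit notion of "harm"; the team's extension has it also act to keep the *human's* reachable-state count high.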
The researchers' aim is for robots to maintain human Empowerment, not to protect humanity oppressively. The concept could power robots that follow the spirit of Asimov's three laws, from self-driving cars to robot butlers. The study on Empowerment was published in Frontiers in Robotics and AI.