Computers have been challenging humans in games for years. Since the early days of computer chess, machines have been able to keep up with humans in many zero-sum games. But researchers are now setting out to teach computers to cooperate with humans, instead of competing with them.
BYU computer science professors Jacob Crandall and Michael Goodrich, in collaboration with researchers at MIT and other international universities, have created a new algorithm that could teach machines to compromise, making cooperation with humans possible and even more effective than cooperation among humans.
"The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills," said Crandall, "AI needs to be able to respond to us and articulate what it's doing. It has to be able to interact with other people."
For the study, researchers programmed the machines with an algorithm named S# and ran them through a variety of two-player games to see how well they would cooperate in certain relationships. The team tested machine-machine, human-machine, and human-human interactions. In most instances, machines programmed with S# outperformed humans in finding compromises that benefited both parties.
"Two humans, if they were honest with each other and loyal, would have done as well as two machines," Crandall said. "As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It's programmed to not lie, and it also learns to maintain cooperation once it emerges."
Researchers encouraged the machines' ability to cooperate by programming them with a range of "cheap talk" phrases. In tests, if human participants cooperated with the machine, the machine might respond with a phrase like "Sweet. We are getting rich!" or "I accept your last proposal." If the participants tried to betray the machine or back out of a deal, they might be met with some trash talk, like "Curse you!"
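The mechanism described above, an agent whose game moves are accompanied by cheap-talk signals, can be sketched in a few lines. This is not the S# algorithm itself; the tit-for-tat strategy and the mapping from moves to phrases below are illustrative assumptions, using the example phrases quoted in the article.

```python
# Hypothetical sketch: a tit-for-tat-style agent in a repeated two-player
# game that pairs each observed move with a "cheap talk" phrase.
# NOT the S# algorithm; the agent logic here is an illustrative assumption.

COOPERATE, DEFECT = "C", "D"

class CheapTalkAgent:
    def __init__(self):
        # Start by assuming goodwill from the other player.
        self.opponent_last = COOPERATE

    def move(self):
        # Tit-for-tat: mirror the opponent's previous action.
        return self.opponent_last

    def observe(self, opponent_move):
        # Record the opponent's move and return a cheap-talk response:
        # approval when they cooperate, protest when they betray.
        self.opponent_last = opponent_move
        if opponent_move == COOPERATE:
            return "Sweet. We are getting rich!"
        return "Curse you!"

agent = CheapTalkAgent()
print(agent.move())              # opens by cooperating: "C"
print(agent.observe(COOPERATE))  # approving cheap talk
print(agent.observe(DEFECT))     # trash talk after betrayal
print(agent.move())              # retaliates on the next round: "D"
```

Even this toy version shows the two channels at work: the moves carry the game-theoretic strategy, while the phrases signal intent and reaction, which is what the study found doubled cooperation.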
Regardless of the game or pairing, cheap talk doubled the amount of cooperation. When machines used cheap talk, their human counterparts were often unable to tell if they were playing with a human or a machine.
The researchers hope that these findings could have long-term implications for human relationships.
"In society, relationships break down all the time," Crandall said. "People that were friends for years suddenly become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better."
The paper on this research was published in Nature Communications.