Algorithm Could Teach AI to Compromise Better than Humans

19 January 2018

Computers have been challenging humans at games for years. Since the early days of computer chess, machines have been able to keep pace with humans in many zero-sum games. Now researchers are setting out to teach computers to cooperate with humans instead of competing with them.

BYU professor and lead author Jacob Crandall. Source: Jaren Wilkey/BYU

BYU computer science professors Jacob Crandall and Michael Goodrich, working with colleagues at MIT and other international universities, have created a new algorithm that could teach machines to compromise, making cooperation between humans and machines possible and potentially even more effective than cooperation among humans.

"The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills," said Crandall, "AI needs to be able to respond to us and articulate what it's doing. It has to be able to interact with other people."

For the study, the researchers programmed machines with an algorithm named S# and ran them through a variety of two-player games to see how well they would cooperate in different pairings. The team tested machine-machine, human-machine and human-human interactions. In most instances, machines programmed with S# outperformed humans at finding compromises that benefit both parties.
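
The paper gives the full specification of S#; as a loose illustration only, and not the published algorithm, the sketch below shows the general shape of such an experiment: a repeated two-player game in which each player's strategy reacts to its partner's previous move. The payoff matrix and the "ForgivingStrategy" class are assumptions made for illustration.

```python
# Illustrative sketch of a repeated two-player game harness of the kind used
# in such experiments. ForgivingStrategy is a hypothetical stand-in strategy,
# NOT the published S# algorithm.
import random

# Iterated prisoner's dilemma payoffs: (my_payoff, partner_payoff)
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, partner defects
    ("D", "C"): (5, 0),  # I defect, partner cooperates
    ("D", "D"): (1, 1),  # mutual defection
}

class ForgivingStrategy:
    """Cooperates first, usually retaliates after a defection, then forgives."""
    def __init__(self, forgiveness=0.2):
        self.forgiveness = forgiveness
        self.partner_last = "C"  # assume goodwill on the first round

    def move(self):
        if self.partner_last == "D" and random.random() > self.forgiveness:
            return "D"  # retaliate most of the time
        return "C"      # otherwise keep (or restore) cooperation

    def observe(self, partner_move):
        self.partner_last = partner_move

def play(rounds=100):
    a, b = ForgivingStrategy(), ForgivingStrategy()
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a.move(), b.move()
        a.observe(move_b)
        b.observe(move_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

if __name__ == "__main__":
    # Two such machines settle into stable mutual cooperation, the kind of
    # outcome the study measured across its machine and human pairings.
    print(play())
```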

"Two humans, if they were honest with each other and loyal, would have done as well as two machines," Crandall said. "As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It's programmed to not lie, and it also learns to maintain cooperation once it emerges."

Researchers encouraged the machines’ ability to cooperate by programming them with a range of “cheap talk” phrases. In tests, if human participants cooperated with the machine, it might respond with a phrase like “Sweet. We are getting rich!” or “I accept your last proposal.” If the participants tried to betray the machine or back out of a deal, they might be met with trash talk such as “Curse you!”
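
As a rough sketch of how such canned responses might be wired up (the event names and mapping below are hypothetical; only the quoted phrases come from the article), one could key phrases to game events:

```python
# Hypothetical mapping from game events to "cheap talk" phrases.
# The event names are assumptions; the phrases are those quoted in the article.
CHEAP_TALK = {
    "partner_cooperated": "Sweet. We are getting rich!",
    "proposal_accepted": "I accept your last proposal.",
    "partner_betrayed_us": "Curse you!",
}

def comment_on(event: str) -> str:
    """Return a canned phrase for the event, or stay silent if none applies."""
    return CHEAP_TALK.get(event, "")

print(comment_on("partner_cooperated"))  # -> Sweet. We are getting rich!
```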

Regardless of the game or pairing, cheap talk doubled the amount of cooperation. When machines used cheap talk, their human counterparts were often unable to tell if they were playing with a human or a machine.

The researchers hope that these findings could have long-term implications for human relationships.

"In society, relationships break down all the time," Crandall said. "People that were friends for years suddenly become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better."

The paper on this research was published in Nature Communications.

To contact the author of this article, email Siobhan.Treacy@ieeeglobalspec.com

