AI Is Better at Compromise Than Humans, Finds New Study

Let’s face it. Ever since artificial intelligence was first dreamed up, humans have been afraid of the day our AI overlords take over. But some researchers have been testing the ability of artificial intelligence not just to compete with humanity, but to collaborate with it.
 
In a study published on Thursday in Nature Communications, a team led by BYU computer science professors Jacob Crandall and Michael Goodrich, along with colleagues at MIT and other universities, created an algorithm to teach machines cooperation and compromise.
 
The researchers programmed the machines with an algorithm called S# and ran them through a number of games with different partners — machine-machine, human-machine, and human-human — to test which pairing would result in the most compromises. Machines (or at least the ones programmed with S#), it turns out, are much better at compromise than humans.
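The actual S# implementation is more sophisticated than anything that fits here, but the experimental harness is easy to picture: pair two agents, repeat a game for many rounds, and tally payoffs. Below is a minimal, hypothetical sketch of that kind of harness in Python; the tit-for-tat stand-in strategy, payoff values, and function names are illustrative assumptions, not the study’s actual code.

```python
import random

# Hypothetical payoff table for one round of a prisoner's-dilemma-style
# game: (my_payoff, partner_payoff) indexed by (my_move, partner_move).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

class TitForTat:
    """Illustrative stand-in strategy -- NOT the S# algorithm, which
    selects among a portfolio of expert strategies and adds signaling."""
    def __init__(self):
        self.last_partner_move = "cooperate"  # start by cooperating

    def choose(self):
        # Mirror whatever the partner did last round.
        return self.last_partner_move

    def observe(self, partner_move):
        self.last_partner_move = partner_move

def play_repeated_game(agent_a, agent_b, rounds=50):
    """Pair two agents for a fixed number of rounds and tally payoffs."""
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = agent_a.choose(), agent_b.choose()
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        agent_a.observe(move_b)
        agent_b.observe(move_a)
    return score_a, score_b

# A machine-machine pairing; a human-machine pairing would route one
# side's choices through a user interface instead.
print(play_repeated_game(TitForTat(), TitForTat()))
```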
 
But this might say more about “human failings,” lead researcher Jacob Crandall tells Inverse. “Our human participants had a tendency to be disloyal — they would defect in the midst of a cooperative relationship — and dishonest — about half of our participants chose to not follow through on their proposals — at some point in the interaction.”
 
Machines that had been programmed to value honesty, on the other hand, were honest. “This particular algorithm is learning that moral characteristics are good. It’s programmed to not lie, and it also learns to maintain cooperation once it emerges,” says Crandall.
 
Additionally, the research found that certain cooperation strategies were more effective than others. One of these was “cheap talk”: simple verbal signals that respond to a given situation, like “Sweet. We are getting rich!” or “I accept your last proposal,” or, to express displeasure, “Curse you!”, “You will pay for that!”, or even “In your face!” A rough sketch of how such a signaling layer can work follows below.
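As a simple illustration, a cheap-talk layer can be as plain as a lookup from the previous round’s outcome to a canned phrase. In the Python sketch below, the event names and the mapping are hypothetical assumptions; only the phrases themselves are the ones quoted from the study.

```python
# Hypothetical mapping from the previous round's outcome to a canned
# cheap-talk phrase. Event names are illustrative assumptions; the
# phrases are those quoted from the study.
CHEAP_TALK = {
    "mutual_cooperation": "Sweet. We are getting rich!",
    "proposal_accepted":  "I accept your last proposal.",
    "partner_defected":   "Curse you!",
    "punishing_partner":  "You will pay for that!",
    "won_big":            "In your face!",
}

def speak(event: str) -> str:
    """Return the canned signal for a game event, or stay silent."""
    return CHEAP_TALK.get(event, "")

print(speak("partner_defected"))  # -> "Curse you!"
```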
 
Regardless of what type of game was being played or who was playing, cheap talk doubled the amount of cooperation. It also humanized the machines: human players were often unable to tell whether they were interacting with a machine or a human.
 
Because the focus of this research was on testing the S# algorithm, one shortcoming is that it does not account for cultural differences that might affect how different human populations use or interpret strategies like cheap talk, or how readily human players engage in deception and disloyalty.
 
Presciently, in “The Evitable Conflict,” the final short story of Isaac Asimov’s famous 1950 book I, Robot, AI overlords do, in fact, take over the planet. But because they’re so reasonable, and programmed with unbreakable ethical rules, it’s a good thing for the human race. And, as this study shows, because human interactions are messy and complicated, maybe we can trust machines. Assuming, of course, that they have been programmed for honesty.