Does AI threaten humanity?

Stephen Hawking has joined a roster of experts worried about what follows when humans build a device, or write some software, that can properly be called intelligent. Such an artificial intelligence, he fears, could spell the end of humanity. Similar worries were voiced by Tesla boss Elon Musk in October, when he declared rampant AI to be the "biggest existential threat" facing mankind.
 
Musk wonders whether we will meet our end beneath the heels of a cruel and calculating artificial intelligence. So too does the University of Oxford’s Prof Nick Bostrom, who has said an AI-led apocalypse could engulf us within a century.
 
Google’s director of engineering, Ray Kurzweil, is also worried about AI, albeit for more subtle reasons. He is concerned that it may be hard to write an algorithmic moral code strong enough to constrain and contain super-smart software.
 
Cinema has long played on the same fear. Many films, including The Terminator series, 2001: A Space Odyssey, The Matrix and Blade Runner, pit puny humans against AI-driven enemies.
 
More recently, Spike Jonze’s Her involved a romance between a man and an operating system, Alex Garland’s forthcoming Ex Machina debates the humanity of an android, and the new Avengers movie sees the superheroes battle Ultron, a super-intelligent AI intent on extinguishing mankind. It would do so with ease were it not for Thor, Iron Man and their super-friends.
 
Even today we are getting hints of how paltry human wits can be when set against computers that throw all their computational horsepower at a problem. Chess computers now routinely beat all but the very best human players. Complicated mathematics is a snap even for a device as lowly as the smartphone in your pocket.
 
IBM’s Watson supercomputer took on and beat the best players of the US TV game show Jeopardy! And across diverse fields there are many examples of computers finding novel, creative solutions that had never occurred to humans. The machines are slowly but surely getting smarter, and the pursuits in which humans remain champions are diminishing.
 
But is the risk real? Once humans write the first genuinely smart computer program, one that then goes on to develop its own smarter successors, is the writing on the wall for humanity? Maybe, said Neil Jacobstein, AI and robotics co-chairman at California’s Singularity University.
 
"I don’t think that ethical outcomes from AI come for free," he said, adding that work now will significantly improve our chances of surviving the rise of rampant AI. What we must do, he said, is consider the consequences of what we were creating and prepare our societies and institutions for the sweeping changes that might arise.
 
"It’s best to do that before the technologies are fully developed and AI and robotics are certainly not fully developed yet," he said. "The possibility of something going wrong increases when you don’t think about what those potential wrong things are."