As the Powerful Argue, Might Superintelligence Arise on the Fringes?

Elon Musk and Stephen Hawking have said they are concerned about artificial intelligence. While brilliant, neither is an AI researcher. This week, Bill Gates voiced concern as well, even as a chief of research at Microsoft said advanced AI doesn’t worry him. It’s a hotly debated topic. Why?
 
In part, it’s because tech firms are pouring significant resources into AI research. Google, Facebook, Microsoft, and others are making rapid advances in machine learning, a technique in which programs learn from large sets of data.
 
But a critical distinction should be made here. Machine learning is what’s called ‘narrow artificial intelligence’. Machine learning programs that identify discrete features in images, for example, are being used to analyze tissue scans for the presence of cancer. Amazon and Netflix recommendation systems are another form of narrow AI, and Google search learns from its interactions with users to improve results.
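
To make the distinction concrete, here is a minimal sketch of narrow AI in Python, assuming scikit-learn is available: a model that learns one discrete task, recognizing handwritten digits, from labeled examples and can do nothing beyond it. The dataset and model choice are illustrative assumptions, not systems named in this article.

```python
# A minimal sketch of "narrow AI": a program that learns a single task
# (digit recognition) from labeled data. Illustrative only; not a
# reference to any system mentioned in the article.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

digits = load_digits()  # 8x8 grayscale images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)  # learns a pixel-to-digit mapping
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print(f"accuracy on unseen images: {accuracy_score(y_test, predictions):.2f}")
# The model handles exactly this task and generalizes to nothing else --
# the defining trait of narrow AI.
```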
 
The debate Musk, Hawking, and Gates are wading into is about the future of AI (just how far off that future is remains controversial), when general artificial intelligence emerges. General AI would match and then (perhaps very quickly) exceed human intelligence. It is, in fact, an old and oft-recurring debate with fresh legs.
 
In his book Superintelligence, released last year, Nick Bostrom argues there are good reasons to believe artificial superintelligence could be very alien, very powerful, and, in pursuit of its goals, capable of wiping human beings out.
 
Bostrom goes on to say that AI, ironically, may offer the best safeguard. We aren’t smart enough to specify safe instructions for a superintelligence, but it could work them out itself: “The idea is to leverage the superintelligence’s intelligence, to rely on its estimates of what we would have instructed it to do.”