Drones turned into missiles, fake videos manipulating public opinion and automated hacking are just three of the threats from artificial intelligence in the wrong hands, experts have said. The Malicious Use of Artificial Intelligence report warns that AI is ripe for exploitation by rogue states, criminals and terrorists.
Those designing AI systems need to do more to mitigate possible misuses of their technology, the authors said. And governments must consider new laws.
The report calls for:
Policy-makers and technical researchers to work together to understand and prepare for the malicious use of AI
A realisation that, while AI has many positive applications, it is a dual-use technology and AI researchers and engineers should be mindful of and proactive about the potential for its misuse
Best practices that can and should be learned from disciplines with a longer history of handling dual-use risks, such as computer security
An active expansion of the range of stakeholders engaging with, preventing and mitigating the risks of malicious use of AI
Speaking to the BBC, Shahar Avin, from Cambridge University’s Centre for the Study of Existential Risk, explained that the report concentrated on areas of AI that were available now or likely to be available within five years, rather than looking to the distant future.
Particularly worrying is the relatively new area of reinforcement learning, in which AIs are trained to superhuman levels of performance through trial and error, without human examples or guidance.
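To make the idea concrete, here is a minimal, hypothetical sketch of tabular Q-learning, the textbook form of reinforcement learning: the agent starts with no human examples and improves purely from a reward signal. The corridor environment, its size, and the learning parameters are illustrative assumptions, not anything from the report.

```python
import random

# Hypothetical example: a 5-state corridor. The agent earns reward 1.0
# only for reaching the rightmost state; it is never shown the answer.
N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # assumed learning parameters

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: behaviour improves from reward alone.
        best_next = max(q[(s_next, a2)] for a2 in ACTIONS)
        q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])
        s = s_next

# The learned policy: the action with the highest value in each state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

After training, the policy moves right from every state; nothing in the code encodes that answer, which is what distinguishes this family of methods from learning that imitates human demonstrations.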
"It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it.
"It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour."