Controlling Artificial Intelligence

Anthropologist Beth Singler of the University of Cambridge studies our relationship with AI and robotics. She believes we should be more critical of those making decisions about how AI is used. This is an extract from her New Scientist interview (https://www.newscientist.com/article/2303858-beth-singler-interview-the-dangers-of-treating-ai-like-a-god/).

How far away are we, and how will we know when we’ve made a machine that has the same level of intelligence as we do?

It really comes down to what we conceive of as intelligence and how we describe success in AI. Since the very conception of the term artificial intelligence, success has meant being very good at simple, bounded tasks in a very simplistic domain. Over time, those domains have become more complicated, but it’s still about being successful.

So the whole history of playing computer games, for instance, from the simple boards of tic-tac-toe and chess all the way up to Go and StarCraft II, is developmental, but it’s still framed around success and failure. And we need to ask: is that actually what we think intelligence is? Is intelligence being good at games of that nature?

Do you think we fear AI too much?

I think a certain level of fear about the applications of AI is healthy: it can lead us to understand what’s going on, to be critical of it, to push back against non-transparency and to identify who is behind the scenes making decisions about how AI is being used.

What is your hope for the future of AI?

I would like to see the technology used in appropriate, fair and responsible ways, and I think that’s quite a common desire; we’re seeing more and more pushes towards that. My concerns are more about human involvement in making the decisions about how AI is used than about AI running away and becoming this disastrous thing in itself.