No, A ‘Supercomputer’ Did NOT Pass The Turing Test For The First Time And Everyone Should Know Better

This week’s tech news was flooded with a story about a chatbot passing the Turing Test for "the first time," with many outlets buying every claim in the story and proclaiming what a big deal it was.
 
Except, almost everything about the story is bogus and a bunch of gullible reporters ran with it, because that’s what they do. First, here’s the press release from the University of Reading, which should have set off all sorts of alarm bells for any reporter. Here are some quotes, almost all of which are misleading or bogus:
 
The 65 year-old iconic Turing Test was passed for the very first time by supercomputer Eugene Goostman during Turing Test 2014 held at the renowned Royal Society in London on Saturday. 
 
‘Eugene’, a computer programme that simulates a 13 year old boy, was developed in Saint Petersburg, Russia. The development team includes Eugene’s creator Vladimir Veselov, who was born in Russia and now lives in the United States, and Ukrainian born Eugene Demchenko who now lives in Russia. 
 
[….] If a computer is mistaken for a human more than 30% of the time during a series of five minute keyboard conversations it passes the test. No computer has ever achieved this, until now. Eugene managed to convince 33% of the human judges that it was human.
 
Okay, let’s dig into why almost all of it is bogus:
 
It’s not a "supercomputer," it’s a chatbot. It’s a script made to mimic human conversation. There is no intelligence, artificial or not involved. It’s just a chatbot.
 
Plenty of other chatbots have similarly claimed to have "passed" the Turing Test in the past (often with higher ratings). Here’s a story from three years ago about another bot, Cleverbot, "passing" the Turing Test by convincing 59% of judges it was human (much higher than the 33% Eugene Goostman claims).
 
It "beat" the Turing test here by "gaming" the rules, by telling people the computer was a 13-year-old boy from Ukraine in order to mentally explain away odd responses.
 
The "rules" of the Turing test always seem to change. Turing’s original test was quite different anyway.
 
As Chris Dixon points out, you don’t get to run a single test with judges you picked and then declare you’ve accomplished something. That’s just not how it’s done. If someone claimed to have achieved nuclear fusion or cured cancer, you’d wait for peer review and repeated tests under other conditions before buying it, right?
 
The whole concept of the Turing Test itself is kind of a joke. While it’s fun to think about, creating a chatbot that can fool humans is not really the same thing as creating artificial intelligence. Many in the AI world look on the Turing Test as a needless distraction.
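To make the "it’s just a chatbot" point concrete, here is a loose, hypothetical sketch of how this class of program works: a handful of keyword patterns mapped to canned replies, plus generic deflections for anything that doesn’t match. This is not Eugene Goostman’s actual code, and the questions and answers are invented for illustration; the point is just how little "intelligence" a conversation script needs.

```python
import random
import re

# Hypothetical, ELIZA-style chatbot: regex patterns mapped to canned replies.
# NOT Eugene Goostman's real code -- an illustration of scripted pattern matching.
RULES = [
    (re.compile(r"\bhow old are you\b", re.I),
     ["I am 13 years old. And you?"]),
    (re.compile(r"\bwhere (are you from|do you live)\b", re.I),
     ["I live in Odessa, Ukraine. Have you ever been there?"]),
    (re.compile(r"\b(favorite|favourite)\b", re.I),
     ["I like my pet guinea pig. Do you have any pets?"]),
]

# Generic deflections cover anything the rules don't match,
# which is where the "13-year-old non-native speaker" persona does the real work.
DEFLECTIONS = [
    "Sorry, my English is not so good. Can you ask it another way?",
    "That is a boring question. Let's talk about something else.",
    "Why do you ask?",
]

def reply(message: str) -> str:
    """Return a canned reply for the first matching pattern, else deflect."""
    for pattern, responses in RULES:
        if pattern.search(message):
            return random.choice(responses)
    return random.choice(DEFLECTIONS)

if __name__ == "__main__":
    print(reply("How old are you?"))                       # matches a rule
    print(reply("What do you think of quantum physics?"))  # falls through to a deflection
```

There is no understanding anywhere in a script like this; it never models what the judge said, it just pattern-matches and deflects. Fooling a third of judges for five minutes with that approach says a lot more about the test than about machine intelligence.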