Alan Turing was a British scientist and a pioneer in computer science. During World War II, Turing developed a machine that helped break the German Enigma code. He also laid the groundwork for modern computing and theorized about artificial intelligence. That was some 70 years ago, folks. Turing devised a “test” in which a judge converses with two hidden entities; if the judge can’t tell which is a human and which is a machine, the machine has “passed.” Both of those happened decades ago, so you’re probably wondering why I’m writing about it now. Well, doesn’t it amaze you how far we’ve come in those last 70 years? Especially considering that in Turing’s day, machine learning was pure speculation.
The question I have for you today is this: is it time to retire the Turing Test? In August, Mitsuku, an animated chatbot that calls itself “an artificial life form living on the net,” won the Loebner Prize’s Turing Test competition for the third time since 2013. The Turing Test has long been the benchmark for artificial intelligence developers, and the pursuit of it has accompanied advances in things like self-driving cars, speech processing, and image recognition. But as AI and machine learning advance, the challenge of a machine imitating a human has become easier. Which makes you wonder: should we still be using Turing’s test?
Think about AlphaGo, which we wrote about a few weeks ago. And then think about AlphaGo Zero. Does the Turing Test still hold up when it comes to these technologies? Steve Worswick, developer of the Mitsuku bot, states:
“I believe that the Turing test goal of trying to achieve a human level of intelligence was a noble goal in its day, but computers are capable of doing so much more than a human, especially with memory and information retrieval.”
Some suggest that the Turing Test, in its day, was practical and simple, but that it is more of an inspirational idea than a literal measure of machine learning or intelligence. Are we being too hard on Turing? Building software that can pass a Turing Test is still extremely challenging. For instance, Loebner Prize competitors face 20 questions ranging from current events, like “What do you think of Trump?”, to more difficult ones requiring an understanding of context, like “I was trying to open the lock with the key, but someone had filled the keyhole with chewing gum, and I couldn’t get it out. What couldn’t I get out?”
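To make concrete why those two kinds of questions differ in difficulty, here is a minimal sketch of a keyword-matching chatbot. Everything in it, the rules, the canned replies, the function name, is my own hypothetical illustration, not Mitsuku’s actual code. The point is that an opinion question can be faked with a keyword lookup, while the chewing-gum question hinges on resolving what “it” refers to, which no keyword rule can do.

```python
# Hypothetical keyword-matching bot (illustrative only, not Mitsuku's code).
# A rule fires when its keyword appears anywhere in the question.
RULES = {
    "trump": "I try to stay out of politics!",
    "weather": "Lovely, as long as I stay indoors.",
}

def reply(question: str) -> str:
    """Answer by keyword lookup; fall back to a canned dodge."""
    lowered = question.lower()
    for keyword, answer in RULES.items():
        if keyword in lowered:
            return answer
    return "That's interesting. Tell me more."

# The opinion question is easy to fake with a keyword hit:
print(reply("What do you think of Trump?"))

# The context question falls through to the generic dodge, because
# nothing in the rules can resolve what "it" refers to:
print(reply("What couldn't I get out?"))
```

A judge asking follow-up questions would expose the fallback reply almost immediately, which is exactly why context-dependent questions remain a strong filter in these competitions.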
Kai-Fu Lee, former head of Google China, suggests simply that the Turing Test needs updating. He believes “there should be a cyborg with human skin, human vision, human speech, and human language. The test should judge the humanness or naturalness of the cyborg with all the above skills. One could add the naturalness of the skin, hair, eyes, eye-movements, body language, and more.”
Let’s back this up a bit. We’ve gone from bots that can beat humans at games (Go, for example) to cyborgs that look “natural.” That means the Turing Test would become whether a human could tell the difference between an actual human and a cyborg. What do you think about that? I think this “update” is a little far-fetched and would need to include some other things. Don’t you? Perhaps that kind of test is years away, and we won’t be retiring the Turing Test anytime soon. What is clear is that the way we test machine intelligence has changed considerably over the last 70 years, and it’s only a matter of time before it changes again.