I’m re-blogging (see below) an interesting exposé and critique of the Turing test, called “The Turing test doesn’t matter,” by Massimo Pigliucci. Although I agree with the author’s final conclusion that we need to move beyond the Turing test in one way or another (and I think that Hofstadter’s ideas about self-reference may be a good starting point), I am frustrated by one aspect of the post. The author is not only critical of, but also condescending about, the fact that Turing chose a behavioral criterion for determining the intelligence of a machine: “Turing was interested in the question of whether machines can think, and he was likely influenced by the then cutting edge research approach in psychology, behaviorism,” “This isn’t Turing’s fault, of course. At the time, it seemed like a good idea.”
“But there are deeper reasons why we should abandon the Turing test and find some other way to determine whether an AI is, well, that’s the problem, is what, exactly? […]
Here are a number of things we should test for in order to answer Turing’s original question: can machines think? […]
- Computing power
- Memory “
“to determine whether an AI is, well, that’s the problem, is what, exactly?” Exactly!!! Turing couldn’t figure out what intelligence or sentience were, and how can you test for something that hasn’t been properly defined?
What Prof. Pigliucci doesn’t seem to get is that Turing chose external behavior as the sole criterion for his test, not because he was unaware of the above suggested criteria and was simply following the prevailing winds of his time, but exactly because there were no operational definitions of intelligence, sentience, and self-awareness at the time (and there are likely no definitive ones now either – I will have to do my research). The other two criteria – computing power and memory – are trivial when comparing computers to humans.
I’ll throw in my two cents when it comes to potential avenues for going beyond the Turing Test:
- Testing whether a computer (program) can detect paradigm shifts: thanks to advances in machine learning, computers are becoming almost as good as humans at learning from past examples how to classify and predict future events. So far, however, computers cannot detect when the information they were trained on is no longer relevant to the case at hand, whereas in many situations humans can.
- Somebody should come up with a rigorous definition of sentience (again Hofstadter should be a good starting point) and then design experiments to detect or measure it.
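The paradigm-shift idea above can at least be made operational in a crude way: flag the moment when fresh data stops resembling the data a model was trained on. This is a minimal sketch of such a detector (the function name, threshold, and mean-shift test are my own illustrative choices, not anything from the post), using only the Python standard library:

```python
import random
import statistics

def drift_detected(train, batch, z_threshold=3.0):
    """Flag a possible 'paradigm shift' when the new batch's mean departs
    from the training mean by more than z_threshold standard errors.
    A deliberately crude mean-shift test, for illustration only."""
    mu = statistics.mean(train)
    sigma = statistics.stdev(train)
    standard_error = sigma / len(batch) ** 0.5
    z = abs(statistics.mean(batch) - mu) / standard_error
    return z > z_threshold

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(1000)]      # training regime
same = [random.gauss(0.0, 1.0) for _ in range(200)]        # same regime
shifted = [random.gauss(2.0, 1.0) for _ in range(200)]     # regime change

print(drift_detected(train, same))
print(drift_detected(train, shifted))  # True: the mean has clearly moved
```

Of course, a mean-shift on one variable is a far cry from noticing that an entire paradigm has become obsolete; the point is only that "is my training data still relevant?" can be posed as a testable question at all.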
You probably heard the news: a supercomputer has become sentient and has passed the Turing test (i.e., has managed to fool a human being into thinking he was talking to another human being [1,2])! Surely the Singularity is around the corner and humanity is either doomed or will soon become god-like.
Except, of course, that little of the above is true, and it matters even less. First, let’s get the facts straight: what actually happened was that a chatterbot (i.e., a computer script), not a computer, passed the Turing test at a competition organized at the Royal Society in London. Second, there is no reason whatsoever to think that the chatterbot in question, named “Eugene Goostman” and designed by Vladimir Veselov, is sentient, or even particularly intelligent. It’s little more than a (clever) parlor trick. Third, this was actually the second time that a chatterbot passed…