Turing, the legendary AI prophet
(Excerpt from the upcoming book “AI: how did we end up in this mess?”)
This post is part of a book I’m writing about the winding evolution of AI. Last year, I taught an “Introduction to AI” graduate course, where I discussed how AI’s main sub-disciplines (Symbolic Logic, Natural Language Processing, Expert Systems, Machine Learning, and more) rose to prominence and were later largely forgotten. Still, all of them contributed to the advancement of AI, but also to considerable confusion about what artificial intelligence really is.
So, I decided to write a short book about how AI became what it is today (yes, it’s a mess!). Instead of writing the whole thing and publishing it later, I will “work in public” by showing my progress to my readers and, hopefully, getting some helpful feedback.
So here we go. The first chapter of the forthcoming book will be called “The AI Prophets.” It shows how the ideas of brilliant people like Alan Turing and Norbert Wiener greatly impacted the about-to-be-born discipline of AI. I call them “The Prophets” because they kind of “announced” something that was about to come, which was, of course, the advent of Artificial Intelligence.
The main “AI Prophet”: Alan Turing.
Alan Mathison Turing, a British mathematician born in 1912 in London, was what some have called a “Renaissance man.” This expression exalts a person's many varied interests by comparing them to great Renaissance geniuses like Leonardo da Vinci, whose knowledge and interests spanned from sculpture and painting to anatomy and engineering, not forgetting botany and physics.
To give you an idea of how diverse his interests were, consider his work on Cryptanalysis (which includes code-breaking; more on this below), Computability (about what is in principle possible to compute), Mathematical Biology (whatever it is; I don’t pretend to be familiar with it), and Machine Intelligence (the name “Artificial Intelligence” didn’t exist yet).
His mind was attracted to so many endeavors that I wonder how he didn’t get distracted and lost among them.
Perhaps you, the reader, are already familiar with Alan Turing, as he has been portrayed in several movies, such as The Imitation Game, a very enjoyable (though tragic) 2014 film starring Benedict Cumberbatch and Keira Knightley. The only problem with this film is that it gets everything wrong, confusing the Enigma machine with the Turing machine, among many other misleading errors.
However, perhaps the real point of that film was Alan Turing’s tragic personal story. As you may know, he served at Bletchley Park, the wartime home of Britain’s Government Code and Cypher School, working on breaking the secret codes the Germans used to protect critical military communications.
Alan Turing's work had a profound impact on the development of AI. His contributions include:
Deciphering the Enigma Machine code: During World War II, Turing played a crucial role in breaking the Enigma code used by the Germans in their secret communications.
Inventing the Turing Machine (which, of course, is not what Turing himself called it): This theoretical model of computation gave concrete form to the notion of computable functions (more on this below).
Settling the “Entscheidungsproblem” (what a word, isn’t it? It is usually translated as the “decision problem,” which is itself pretty intricate): Turing proved that there are mathematical problems that cannot be solved by any algorithm, highlighting the limits of computation.
Proposing the First Undecidable Problem: Turing identified a specific problem, the halting problem (deciding whether an arbitrary program will ever stop), that cannot be solved algorithmically, further demonstrating the limitations of computation.
Proposing that Machines Could Have "Intelligence": Turing challenged the notion that only humans can be intelligent, suggesting that machines could also exhibit intelligent behavior.
Proposing the "Imitation Game" (Turing Test): Turing devised this test to assess a machine's ability to exhibit intelligent behavior indistinguishable from human behavior.
Though contributions 5 and 6 are the best known to the public (and we’ll deal with them later), contributions 2, 3, and 4 were particularly important to me. Let me explain:
I taught a course on Automata Theory and Languages for nearly a decade at my university. After 10 semesters of teaching it, I became so familiar with the topic that I felt no book was a good fit for my course, so I decided to write my own.
I wrote the book “Automata and Languages: A Design Approach” (in Spanish), published by McGraw Hill in 2013; its final chapters deal with the Turing machine and the limits of computation.
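Since the limits of computation will come up again later in the book, it is worth seeing the heart of Turing’s undecidability argument in miniature. The Python sketch below is my own illustration (certainly not Turing’s notation): the `halts` oracle is hypothetical, and the whole point of the argument is that no such oracle can exist.

```python
# Suppose, for the sake of argument, that a perfect oracle existed that could
# always decide whether running `program` on `data` eventually stops.
# (This `halts` is hypothetical; Turing's result is that it cannot be written.)
def halts(program, data) -> bool:
    ...  # assumed to always answer correctly

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop, so halt immediately

# Now ask: does paradox(paradox) halt?
#  - If halts(paradox, paradox) returns True, paradox loops forever: contradiction.
#  - If it returns False, paradox halts at once: contradiction again.
# Therefore no such `halts` can exist: the halting problem is undecidable.
```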
Alan Turing, the man
Turing was homosexual, which was illegal in Britain at the time; the offence was called “gross indecency.” His trial stemmed from his relationship with a young man named Arnold Murray. He was convicted and, as the alternative to prison, subjected to chemical castration, a hormonal treatment intended to suppress libido.
He was injected with synthetic estrogen for a year. The treatment caused significant side effects, including physical and emotional distress, and is widely believed to have driven him to suicide in June 1954.
I shared this world with Turing for barely three months, as I was born in February 1954.
What baffles me is that Turing’s immense wartime contributions counted for nothing against the supposed “gravity” of the accusations against him.
The Turing Test: machine intelligence
Alan Turing’s most influential concept (at least as far as AI is concerned) was the “Imitation Game” he proposed in 1950, later known as the Turing Test, to assess a machine’s ability to exhibit intelligent behavior indistinguishable from a human’s. The test involves a human judge engaging in a text-based conversation with both a human and a machine, unaware of their identities. If the judge cannot reliably determine which is the machine, the machine is considered to have passed the test. While groundbreaking for its time, the Turing Test has faced increasing criticism and is now widely regarded as obsolete for several reasons.
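Before getting to those criticisms, here is the structure of the test in schematic form. This is only a minimal sketch of the protocol in Python, assuming hypothetical `judge`, `human`, and `machine` objects that exchange text messages; it illustrates the setup, not any real implementation.

```python
import random

def imitation_game(judge, human, machine, rounds=5):
    """Minimal sketch of the Turing Test: the judge chats by text with two
    hidden participants and must decide which one is the machine."""
    # Hide the participants behind anonymous labels, in random order.
    participants = {"A": human, "B": machine}
    if random.random() < 0.5:
        participants = {"A": machine, "B": human}

    transcripts = {"A": [], "B": []}
    for _ in range(rounds):
        for label, respondent in participants.items():
            question = judge.ask(label, transcripts[label])   # hypothetical API
            answer = respondent.reply(question)               # hypothetical API
            transcripts[label].append((question, answer))

    verdict = judge.guess_machine(transcripts)  # "A" or "B"
    machine_label = "A" if participants["A"] is machine else "B"
    # The machine "passes" this round if the judge cannot pick it out.
    return verdict != machine_label
```

The essential point is that the judge sees only text, so appearance, voice, and everything else about the participants plays no role.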
Emphasis on Deception
The Turing Test initially aimed to evaluate a machine's capacity to solve problems and demonstrate intelligent behavior. However, over time, the focus has shifted toward deceiving human judges rather than genuinely showcasing intelligence. Participants in Turing Test-inspired competitions, like the Loebner Prize, often resort to “cheap tricks” and exploit loopholes to convince judges of their human-like nature. This emphasis on deception undermines the test's original intent and provides a misleading picture of a machine's cognitive abilities.
Eugene Goostman, a chatbot that first gained attention in 2012 and made headlines in 2014 when it was reported to have “passed” a Turing Test event, exemplifies this problem. By posing as a 13-year-old Ukrainian boy with limited English fluency, Eugene Goostman convinced many judges of its humanness through clever design and by manipulating their expectations. This incident highlighted the Turing Test's susceptibility to trickery and its inadequacy as a reliable measure of genuine intelligence.
Subjectivity and Anthropomorphism
Another major flaw of the Turing Test lies in its inherent subjectivity. The outcome relies heavily on the judges' personal opinions and interpretations, introducing inconsistencies and potential biases. Judges may unknowingly project human-like qualities onto machines, even if their responses are based on simple algorithms and pattern recognition.
ELIZA, a conversational program developed by Joseph Weizenbaum in the 1960s, demonstrated this phenomenon. ELIZA simulated a Rogerian psychotherapist, engaging in dialogue by rephrasing the user's statements as questions or offering generic prompts. Surprisingly, many individuals, including Weizenbaum's secretary, attributed human feelings to ELIZA despite its rudimentary design. This example showcases how the Turing Test's reliance on human judgment can lead to misinterpretations and to overestimating machine intelligence.
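To appreciate how little machinery was behind this effect, here is a toy ELIZA-style responder. The patterns and canned replies below are my own illustrative examples, not Weizenbaum's original script, but the mechanism, keyword matching plus reflected questions, is essentially the same.

```python
import re

# A few illustrative rules in the spirit of ELIZA's Rogerian script
# (these particular patterns are toy examples, not Weizenbaum's).
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me more about your {0}."),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def eliza_reply(user_input: str, turn: int = 0) -> str:
    # Try each pattern; on a match, echo the captured fragment back as a question.
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    # Otherwise fall back to a generic, content-free prompt.
    return FALLBACKS[turn % len(FALLBACKS)]

print(eliza_reply("I am unhappy at work"))
# -> "How long have you been unhappy at work?"
```

There is no understanding anywhere in this loop, yet exchanges built from such rules were enough for some users to attribute feelings to the program.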
Conflating Intelligence with Consciousness
The Turing Test focuses solely on observable behavior, neglecting the internal processes that constitute consciousness and sentience. While a machine may convincingly mimic human-like conversation, this does not necessarily imply genuine understanding, self-awareness, or the ability to experience emotions and subjective feelings.
Critics, such as Hector Levesque, argue that a valid intelligence test should assess a machine's capacity for reasoning, problem-solving, and common-sense understanding rather than relying solely on deceptive conversational skills. The Turing Test's failure to differentiate between outward behavior and internal cognitive processes contributes to its inadequacy as a comprehensive measure of intelligence.
The Need for Objective and Relevant Measures
The limitations of the Turing Test have prompted the development of alternative assessments that prioritize objective measurements and focus on specific aspects of cognitive ability. The Winograd Schema Challenge, proposed by Hector Levesque in 2012, is a notable example. This challenge requires participants to solve "Winograd schemas" (earlier proposed by Terry Winograd), which involve understanding pronoun references in sentences with subtle ambiguities. These schemas demand a deeper understanding of language, context, and common-sense reasoning.
Unlike the Turing Test, the Winograd Schema Challenge does not rely on deceiving judges but emphasizes a machine's ability to interpret and reason about language accurately. The availability of large datasets, such as WinoGrande, comprising about 44,000 such problems, allows for standardized and objective evaluation of chatbot performance. This approach shifts the focus from superficial imitation to measurable cognitive capabilities.
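To make the idea concrete, here is the best-known Winograd schema, the “trophy and suitcase” pair, encoded as a small Python sketch. The data structure and the `resolver` callable are my own illustrative assumptions, not part of the official challenge format.

```python
# The classic "trophy/suitcase" Winograd schema: swapping one word flips
# which noun the pronoun "it" refers to, so shallow word statistics fail.
schema = {
    "sentence": "The trophy doesn't fit in the suitcase because it is too {word}.",
    "pronoun": "it",
    "candidates": ["the trophy", "the suitcase"],
    "variants": {
        "big": "the trophy",      # if "it" is too big, "it" must be the trophy
        "small": "the suitcase",  # if "it" is too small, "it" must be the suitcase
    },
}

def score(resolver, schema) -> float:
    # Evaluate a pronoun resolver on both variants of one schema.
    correct = 0
    for word, answer in schema["variants"].items():
        sentence = schema["sentence"].format(word=word)
        if resolver(sentence, schema["pronoun"], schema["candidates"]) == answer:
            correct += 1
    return correct / len(schema["variants"])
```

Because flipping a single word flips the correct answer, a system cannot do well on thousands of such pairs, as in WinoGrande, without something resembling common-sense reasoning.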