The Skeptic AI Enthusiast

Why “AI can’t reason” Is a Bias

We humans are proud creatures

Rafe Brena, PhD
Dec 11, 2024

Image by the author using ChatGPT

Recently, the controversy about whether or not AI can reason has heated up. OpenAI’s o1 model, released a few months ago, was welcomed with a mix of reactions, ranging from “It’s just smoke and mirrors” to “A new paradigm of AI.”

AI’s reasoning capabilities (or lack thereof) appear to strike a sensitive chord in many of us. I suspect that admitting an AI can “reason” is perceived as a hit on human pride, as reasoning wouldn’t be exclusive to humans.

In the nineteenth century, arithmetic was considered a mark of intellectual prowess (hey, when have you seen a cow add two numbers?). Still, we had to get used to calculators that are far more capable than we are.

I have seen shocking statements ranging from “We are about to achieve Artificial General Intelligence” and “AI got to the level of a PhD” to radical dismissals of the reasoning capabilities of AI, like “Apple Calls Bullshit On The AI Revolution.”

In other articles, I have commented on how nonsensical the AGI claims made by fans of Elon Musk are. In this piece, I examine the opposite end of the spectrum: people who claim AI can’t reason at all.

Gary Marcus, one of the most outspoken AI denialists (I don’t call them “skeptics”), says that AI could be great at pattern recognition but lacks the capacity for “genuine reasoning.”

Further, Marcus calls AI chatbots “glorified autocomplete,” adding a new term alongside the famous derogatory “stochastic parrots,” coined by Emily Bender and her coauthors back in 2021, before ChatGPT even existed.

What is “genuine reasoning,” anyway? I try to answer that question below.

Thought leaders as prestigious as Noam Chomsky have deemed AI incapable of “truly thinking,” arguing that it lacks an “understanding of meaning.” Chomsky also holds that AI will never match the human capacity for creativity and abstraction in thought.

In the remainder of this post, I’ll examine what “to reason” exactly means, how standard tests have been used (and abused), how AI “reasoning” differs from human reasoning, and finally, why we have trouble making sense of it all, to the point that our interpretation of “reasoning” reflects the worst of our human preconceptions.

Can LLMs reason?

Immersed in this flood of radical opinions, for and against AI reasoning capabilities, how can we make sense of what is fact-based, not mere feelings or opinions? Of course, by taking a look at the evidence.

But what are the facts in this dispute? Notice that what counts as “facts” depends a lot on your definition of “reasoning,” especially when some add the further qualification that it should be to “truly reason.” For instance, Salvatore Raieli asks in a recent post:

“Can Large Language Models (LLMs) truly reason?”

Here, the critical term is “truly.” What is the difference between “reason” and “truly reason”? I suspect an anthropomorphism bias here, as if “truly reason” actually means “reason like us humans, who are the only true reasoners in this universe.”

I’d instead take “reason” as the cognitive capability to solve problems that are agreed to require reasoning. This includes mathematical reasoning, commonsense reasoning, language understanding, and inference.

There could be some circularity in this definition. Still, once we agree on a set of problems associated with these capabilities, it becomes a matter of checking whether the AI system can solve them or not. The problem, as I argue below, is that current AI can solve one problem and then fail miserably at another that looks, to us humans, very similar.
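To make this operational view concrete, here is a toy sketch in Python of what “checking whether the system solves an agreed set of problems” could look like. The problem suite, the category names, and the stand-in solver are entirely hypothetical illustrations, not any real benchmark:

```python
from typing import Callable

# A toy "agreed set of problems," grouped by the reasoning categories
# mentioned above. Real suites would have many items per category.
REASONING_SUITE = {
    "mathematical": [("What is 17 * 6?", "102")],
    "commonsense": [("If you drop a glass on concrete, what likely happens?", "it breaks")],
}

def score(solver: Callable[[str], str]) -> dict:
    """Return the fraction of problems solved per category."""
    results = {}
    for category, problems in REASONING_SUITE.items():
        correct = sum(
            expected.lower() in solver(question).lower()
            for question, expected in problems
        )
        results[category] = correct / len(problems)
    return results

# A stand-in "solver" that only gets the math item right,
# mimicking the uneven performance discussed above:
demo = lambda q: "102" if "17" in q else "no idea"
print(score(demo))  # {'mathematical': 1.0, 'commonsense': 0.0}
```

The point of the sketch is that, under this definition, “can it reason?” stops being a matter of opinion and becomes a score per category of problems.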

Notice that in using this definition, I distance myself from the famous “Turing Test,” where the goal was to deceive a bunch of human judges, making them think they were talking to a human. If you are unfamiliar with the Turing Test, look at my post “Why the Turing Test Became Obsolete?”

I’m also distancing myself from subjective views that AI should “reason like a human” if we want it to be intelligent. I think the expression “reason like a human” is vague, anthropomorphic, and useless.

In the last part of this post, I argue that modern AI doesn’t “reason like a human” at all; it’s actually a form of non-human or “alien” intelligence.

Finally, others claim that to “truly reason” is to “think in several steps” in what has been called “Chain of Thought” (CoT).

This idea, related to AI chatbots, started with Google Research’s 2022 paper “Chain of Thought Prompting Elicits Reasoning in Large Language Models.” The same idea, (well) implemented in OpenAI’s o1, led some to claim that it was “a new paradigm of AI.”

I won’t argue against using CoT in AI, as in o1 (the benchmark results make the improvements crystal clear). Still, I’d say that reasoning is a cognitive capability not limited to multi-step chains.
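To make the CoT idea concrete, here is a minimal Python sketch of how a chain-of-thought prompt differs from a direct one. The templates, function name, and sample question are illustrative assumptions: the Google Research paper elicited CoT with few-shot exemplars containing worked-out reasoning, while the single trigger phrase below is the later zero-shot variant of the same idea:

```python
# Direct prompting asks for the answer immediately;
# chain-of-thought prompting nudges the model to spell out
# intermediate steps before answering.
DIRECT_TEMPLATE = "Q: {question}\nA:"
COT_TEMPLATE = "Q: {question}\nA: Let's think step by step."

def build_prompt(question: str, chain_of_thought: bool = False) -> str:
    """Return a direct or chain-of-thought prompt for an LLM."""
    template = COT_TEMPLATE if chain_of_thought else DIRECT_TEMPLATE
    return template.format(question=question)

question = ("A bat and a ball cost $1.10 in total. The bat costs "
            "$1.00 more than the ball. How much does the ball cost?")
print(build_prompt(question, chain_of_thought=True))
```

Nothing about the model changes between the two prompts; only the instruction does, which is why some see CoT as a prompting trick rather than “a new paradigm of AI.”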

Nor is reasoning exclusive to “solving complex problems” (as Raieli states in the post mentioned above). To me, reasoning can be simple or complex, and there should be objective tests for each.

At this point, you can start to see why many believe “AI can’t reason”:

  • For some, AI doesn’t “truly” reason or “think like a human.”

  • Others believe AI should excel at “complex reasoning and problem-solving,” disregarding simpler reasoning forms.

  • Yet others dismiss any reasoning that is not a series of steps.

As in many matters, the devil is in the details, and the detail here is how you define the supposed “reasoning capabilities.” I’ve given my definition above. To me, these objections to AI reasoning capabilities are a form of bias because they manipulate what “to reason” means in the first place.

Now, let’s examine how reasoning can be verified and even measured.

© 2025 R. Brena