The Skeptic AI Enthusiast

3 Reasons Why AI Isn't Any Close to Consciousness

"Is there a soul inside?"

Rafe Brena, PhD
Sep 11, 2024

Photo by Darius Bashar on Unsplash

I cringe every time I read the word “consciousness” in the same phrase as “Artificial Intelligence.”

This may look like a very particular phobia, but I didn't have it a few years ago (more precisely, before 2022, when ChatGPT came out). It started with a few YouTube videos, articles, comments, and questions implying that AI will be conscious by next year or so, and then turned into a deluge of them.

One of the problems I see with people talking about consciousness in machines is that, in most cases, the idea comes from anthropomorphizing (sorry for the long word) machines, AI included. The reasoning goes as follows:

  • Humans are more intelligent than animals, aren’t they?

  • We are way more conscious than animals,

  • So, consciousness comes with intelligence.

It is not hard to see that this reasoning is flawed, starting with the questionable first premise.

This personal anecdote illustrates anthropomorphism:

Someone in my family remarked that a dog named “Pum” was very intelligent because of how she (Pum is a female) looked at you: she fixates her eyes on yours, as if trying to say something. I asked when Pum had shown actual intelligence, say, by doing a clever trick. But no, Pum had never done anything clever; she just had that look in her eye.

So, a way of looking was taken as evidence of intelligence. A look that suggested thinking supposedly showed an inquisitive mind.

Wait a moment: aren’t AI chatbots like ChatGPT, with their sometimes introspective, empathetic, and emotional responses, at least a little bit conscious? Even AI experts like Ilya Sutskever seem to believe this is possible; he once tweeted that today’s large neural networks may be “slightly conscious.”

Perhaps you remember that before the launch of ChatGPT, a then-Google engineer, Blake Lemoine, was charged with testing an early AI chatbot named LaMDA. After some interactions, Lemoine was certain that LaMDA had a “sentient” mind; he said he “recognizes a sentient mind when he sees one.” Lemoine described LaMDA as “a kind of misunderstood poor boy,” to the point that he wanted to take legal action to protect it (him?).

Even before that, as early as 1966, there was a conversational system called ELIZA, programmed by Joseph Weizenbaum, that simulated a psychotherapist of the Rogerian school consulting the user: it often questioned the user, turning their answers into new questions, so it looked “empathetic” in a way familiar to anyone who has had psychological help. Well, many people interacting with ELIZA, including Weizenbaum’s own secretary, attributed human feelings and even a soul to the contraption. Of course, by today’s standards, ELIZA’s interactions don’t look natural anymore.
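To see how little machinery is behind that “empathy,” here is a minimal Python sketch of the ELIZA-style trick: match a keyword pattern, swap the pronouns, and hand the user’s own statement back as a question. The rules and wording below are made up for illustration; this is a toy in the spirit of ELIZA, not Weizenbaum’s original program.

```python
import re

# A toy in the spirit of ELIZA (not Weizenbaum's original program): match a
# keyword pattern, swap first- and second-person words, and return the user's
# statement as a question.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Hypothetical rules for illustration; the real ELIZA scripts were far richer.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap pronouns so "my work" becomes "your work".
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(user_input: str) -> str:
    text = user_input.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # fallback keeps the conversation moving

print(respond("I feel nobody understands my work"))
# -> Why do you feel nobody understands your work?
```

A handful of pattern-and-reflection rules like these is enough to produce the “understanding” that made people confide in ELIZA, which is precisely the point: the feeling of being heard says more about us than about the program.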

So far, I’ve made the point that attributing consciousness to computational systems has a long history that continues to the present.

But beyond the misconceptions I mentioned above, how can we be positively sure that an AI doesn’t have (or won’t have any time soon) consciousness?
