The Dilemma of Whether Artificial Intelligence Can Reason or Not
Does it repeat like a parrot or is it a new type of intelligence?
(Guest post by Horacio R. Granero—excerpt from his upcoming book)
What does it mean to have reasoning skills?
Humans are often puzzled by the answers provided by ChatGPT and similar generative AI systems. We wonder if we are not facing a new form of intelligence, different from the human one, handled with a kind of “experiential understanding,” perhaps similar to what we could call an “intuition,” whose engine provides probable answers to imprecise questions.
Generative AI tools sacrifice truth for intuition. They live in a space independent of reason and truth, but their intuition is mighty and often surprises us. ChatGPT's guidance may take me to a place that may or may not exist, and that may or may not match what I need. This is why we should not fully trust the AI's instinct, but we should not rule it out either.
What is certain is that our "human pride" is at stake when we believe we are the only masters of this Universe and that our way of reasoning is therefore the only valid one.
Is it appropriate to continue adhering to a vision “focused only on the human being” because we consider ourselves the sole masters of the universe? Or will we adopt a more modest and perhaps more realistic understanding of humans as wonderful and limited creatures that can interact with other forms of intelligence?
Today's AI can solve a problem and then fail miserably at problems that, to us humans, look very similar. The "Turing Test" asked whether a machine could convince a group of human judges that they were talking to a human.
Others claim that "truly reasoning" is "thinking in several steps" in what has been called a "Chain of Thought" (CoT). This idea began with Google Research's 2022 paper “Chain of Thought Prompting Elicits Reasoning in Large Language Models.”
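To make the idea concrete, here is a minimal sketch, in Python, of the difference between a direct prompt and a chain-of-thought prompt. The arithmetic question and the worked demonstration are illustrative examples of the technique, not material taken from the paper, and no particular model or API is assumed.

```python
# Minimal illustration of chain-of-thought (CoT) prompting: the prompt includes
# a worked example whose answer spells out the intermediate reasoning steps,
# and the model is expected to imitate that step-by-step style for the new question.

direct_prompt = (
    "Q: A library has 23 books and buys 3 boxes of 8 books each. "
    "How many books does it have now?\n"
    "A:"
)

cot_prompt = (
    # One worked demonstration showing the intermediate steps to imitate.
    "Q: Anna has 5 stamps and buys 2 packs of 4 stamps each. "
    "How many stamps does she have now?\n"
    "A: Anna started with 5 stamps. 2 packs of 4 stamps is 8 stamps. "
    "5 + 8 = 13. The answer is 13.\n\n"
    # The new question: with the demonstration above, the model tends to
    # produce its own intermediate steps before the final answer (47).
    "Q: A library has 23 books and buys 3 boxes of 8 books each. "
    "How many books does it have now?\n"
    "A:"
)

print(direct_prompt)
print("---")
print(cot_prompt)
```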
We humans have never had to deal with a similar form of intelligence in our history. Conversations with AI, especially since GPT-4, took us by surprise, and many didn't know what to make of the strange abilities of the new machines. While it won't always get it right (that's why we talk about intuition), the secret of this approach is the fantastic synergy between intuition and search engines, as in an experience I had recently with a vague question:
HRG: What is the name of the classic book that is about a jealous husband who murders his wife?
ChatGPT-4: The classic book you're referring to is probably William Shakespeare's "Othello." It is a tragedy that tells the story of Othello, a Moorish general in the service of Venice, who, driven by jealousy manipulated by his subordinate Iago, ends up murdering his wife Desdemona. It is one of Shakespeare's best-known plays about the dangers of jealousy and manipulation.
Some seventy years ago, calculators amazed us, followed by Deep Blue in chess and AlphaGo in Go. To add insult to injury, our general reasoning abilities are now challenged by contraptions that match patterns at scale across more than one hundred forty-seven billion parameters.
Instead of treating these advanced systems as mere tools, we often project our own cultural biases, preconceptions, and even fears onto them, anthropomorphizing them to make them acceptable. Still, perhaps we should get used to accepting them as entities capable of joining research teams alongside humans.
Aristotelian-Thomistic conception of reasoning
Let's start by stating that reasoning is the fundamental cognitive process of drawing conclusions or making decisions based on the available information, logic, and analysis.
Aristotle, in the "Topics" and the "Posterior Analytics," considered that reasoning can be divided into two types: deductive, that is, deriving specific conclusions from general principles, and inductive, that is, generalizing from observations.
In the "Prior Analytics," he develops the concept of deduction or syllogism, establishing that a syllogism is reasoning in which a necessary conclusion is derived from general premises (major premise, minor premise, and conclusion). This deductive approach is considered central to his formal logic. It seeks to demonstrate specific truths from already accepted general principles.
Induction, which he calls epagogé, involves observing particular cases and reaching a general conclusion. According to Aristotle, induction is essential for obtaining first principles, since these cannot be proven deductively but are apprehended through observation and experience. He considers both types of reasoning complementary: induction provides the general principles that serve as the basis for deductive reasoning. This relates to his theory of knowledge, where first principles are derived from sensible experience and then systematized by deduction.
Subsequently, St. Thomas Aquinas addresses reasoning in his commentaries on Aristotle's logical works, where he treats reasoning as an activity of human reason whose objective is to reach the truth, distinguishing two types: demonstrative or scientific reasoning, which starts from evident principles to obtain certain conclusions and has as its purpose the objective knowledge of the truth; and dialectical or probable reasoning, based on probable premises and used mainly in philosophical discussions where absolute certainty cannot be reached. In the Summa Theologica, on the other hand, he explains that for human beings reasoning implies a process of analysis and synthesis, which consists of decomposing and recomposing concepts to reach conclusions.
In general, reasoning is supposed to be the process of solving complex problems or making decisions. It requires identifying the problem, breaking it down into subproblems, finding patterns, and then choosing the best solution.
Similarly, decision-making requires identifying problems and patterns and evaluating alternatives before choosing the best solution, a process that can be seen today in AI systems (LLMs) that carry out reasoning and draw logical conclusions.
Using CoT (Chain of Thought) techniques allows the model to exploit this potential. By December 2024, however, newer techniques had surpassed those described in the earlier academic work.
Does AI simply parrot what it has been trained to do?
Some compare AI to the meaningless repetitions of a parrot, a "stochastic parrot," a metaphor used in debates about the nature of certain artificial intelligence models, particularly language models based on deep learning.
For those who, like Gary Marcus, deny AI's ability to reason, this metaphor illustrates two key aspects of how these systems work. The first is production based on learned patterns: just as a parrot repeats words or phrases without understanding their meaning, language models generate text based on statistical patterns learned from data, without an inherent understanding of the content. The second is "stochasticity" (controlled randomness), in the sense that they use probabilities to select the most likely words for text generation. This introduces a kind of artificial creativity that may seem fluid but is, in reality, based on probabilistic calculations and would be nothing more than a simulation without understanding.
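As a toy illustration of the "stochastic" part of the metaphor, the following Python sketch samples the next word from an invented probability distribution. A real language model computes such probabilities over tens of thousands of tokens with a neural network, but the selection step works in the same weighted-random spirit; the candidate words and probabilities here are assumptions made up for the example.

```python
import random

# Toy illustration of "stochastic" next-word selection. The candidate words and
# their probabilities are invented for this example; a real LLM derives them
# from patterns learned during training.
next_word_probs = {
    "jealousy": 0.55,
    "love": 0.25,
    "revenge": 0.15,
    "Venice": 0.05,
}

def sample_next_word(probs: dict) -> str:
    """Pick the next word at random, weighted by the assigned probabilities."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

prefix = "Othello is a tragedy about"
print(prefix, sample_next_word(next_word_probs))
```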
Thus, Noam Chomsky has considered AI incapable of “truly thinking,” arguing that it lacks an "understanding of meaning" and will, therefore, never compete with the human capacity for creativity and abstraction in thought.
On the other hand, Salvatore Raieli discusses the issue in his article "Can Large Language Models (LLMs) Really Reason?", in which he observes that, under an anthropomorphic bias, "truly reasoning" actually means "reasoning like us humans, the only true reasoners in this universe," and argues that the best LLMs of today possess cognitive abilities that are impossible to attribute to simple good luck or memory, so that the expression "stochastic parrots" applied to AI would be meaningless.
Do we find ourselves with the possibility of a reasoning capacity different from the human one?
Humans must recognize that there may be a different form of reasoning.
For a long time, it was suggested that only humans could reason. Today, this has been questioned, and it has been shown that primates, octopuses, and birds also exhibit basic forms of reasoning, such as making decisions or solving problems.
A few years ago, a Court in Mendoza, Argentina, considered that "it is undeniable that the great apes, including the chimpanzee, are sentient beings; therefore, they are subjects of non-human rights. Such categorization in no way distorts the concept wielded by the doctrine. The chimpanzee is not a thing; it is not an object that can be disposed of as one disposes of a car or property. Great apes are subjects of law with legal capacity and de facto incapable, since, according to the evidence produced in the present case, chimpanzees reach the intellectual capacity of a 4-year-old child."
In response to this statement, the Court indicated that great apes born in the wild are captured and sold, becoming involuntary objects of law; that is, they are involuntary actors in the theater of human rights. For this reason, "recognizing great apes as subjects of rights is the best act of inclusion as involuntary actors in the field of law that human beings can perform, not as a narcissistic entity but because of the dignity of the human person, who stands as a being who knows that he feels and thinks, and as a thinking being who reacts and acts in the face of this great observable and evident phenomenon that animals are not things." The tribunal concluded by declaring "the chimpanzee Cecilia, currently housed in the zoo of the Province of Mendoza, a subject of non-human rights."
Animals have traditionally been considered objects or movable things. That character was assigned to them by Article 2318 of the repealed Civil Code and is attributed to them today by Article 227 of the Civil and Commercial Code. Recently, voices have been raised in the doctrine that recognize animals as non-human persons or as subjects of rights, as sentient beings, not things, that possess rights and interests which their owners, companions, and/or caregivers must watch over, guarantee, and respect, in compliance with Article 41 of the National Constitution, Laws 14,346 and 22,421, the various protective laws, and various jurisdictional pronouncements. In this sense, animals, and all the more so domestic animals, have been considered by law to be sentient beings who enjoy their own rights as part of the obligation to respect life and protect their integrity.
An alien intelligence?
Some AI systems, especially those based on deep learning, come to conclusions or solutions that are difficult to understand even for their own creators, especially since they are built on neural networks. This "opacity" can make their reasoning seem different or foreign to human reasoning ("alien," if we want to call it that): it does not share the human evolutionary framework, and AI is not limited by the biases and biological conditioning factors that affect humans, so it can address problems in ways utterly alien to our expectations.
On the other hand, AI could be said to have “superhuman” capabilities in specific domains, such as calculation, data analysis, or complex games (such as chess or Go); undoubtedly, AI surpasses humans and does so using unobvious strategies.
While we could initially define ChatGPT and similar conversational AI systems as predictors of the next word in a text, trained on millions of texts produced by humans so that, after this conditioning, they produce text that "sounds" human, it seems that this is no longer the whole story.
Most denials of AI reasoning rest on an often implicit and inaccurate assumption: that "AI should reason like a human," and that if it does not, it does not reason at all or does not count as reasoning. Others equate pattern matching with a complete inability to reason "authentically," even when, in most cases, the AI gives the correct result. It is like saying that anything done with pattern matching "does not qualify as reasoning."
But what if AI gives the correct answer in many, but not all, reasoning tests? What if AI were slowly developing an increasing proportion of accurate solutions to reasoning problems, regardless of whether it uses pattern matching or not?
Why human and AI-generated reasoning are different
There are reasons why humans are perplexed, even bewildered, when we encounter a form of intelligence such as LLM-based AI that does not have to be identical to human intelligence, as explained by Rafe Brena, who listed the causes that, in his opinion, account for this difference.
Human beings have feelings, and machines only pretend to have them. The ability to pretend to have feelings has been naturally incorporated into chatbots through the vast data they ingested from human sources during the training phase; most of the humans who generated the texts and voices used for training had, in fact, real emotions. We are immediately hooked when we hear a voice expressing an emotion. It is a reflex; we are programmed to pick up emotions. Machines are getting better and better at pretending to have emotions, and we have to get used to this new reality instead of being deceived. Now that machine voices are no longer "robotic," a lack of expressiveness is no longer a way to differentiate people from robots. Machines are trained to trick us when they pretend to have feelings, and they are getting too good at it.
Humans understand or don't understand. Human beings have the "experience of understanding": we feel something when we understand what we did not understand before. It's as if suddenly everything makes sense, which is why it's depicted in cartoons as a light bulb turning on. But machines don't work this way. Machines possess a "behavioral understanding" that differs from the "experiential understanding" of humans. Many would consider experiential understanding to be "true understanding," but that's unfair to machines because they don't have feelings of any kind, and experiential understanding is ultimately the feeling of understanding. Measuring comprehension in machines (via benchmarks) means that improvements in comprehension can be objectively demonstrated. Therefore, greater intelligence in machines no longer means that they are magically "getting" an idea but that they are gradually moving towards better scores on cognitive tests.
Machines do not hesitate; humans do. Some of us are baffled by AI's confidence in answering our queries, often only to find out that the answer was incorrect. Humans are guilty of taking machine prose as certain. A different point is whether we can adjust the machines to show, in some cases, a lack of confidence in the result. In one line of experiments, after measuring system performance across a set of queries, supposedly irrelevant modifications were made, such as changing names and numbers or introducing irrelevant items; when the queries were rerun, performance decreased dramatically. In similar situations, humans can almost always detect what is relevant and what is not, so they discard irrelevant elements. Machines struggle to do this, and while they manage it well in many cases, their performance is drastically affected.
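A minimal sketch of that kind of robustness check follows, assuming a hypothetical ask_model function standing in for any LLM call: the same style of reasoning question is rerun with names and numbers changed, and the answers are compared against what simple arithmetic says they should be.

```python
# Sketch of a perturbation test: ask the same kind of question with
# "irrelevant" changes (names, numbers) and check whether the model still
# answers correctly. ask_model is a hypothetical placeholder, not a real API.

def ask_model(question: str) -> str:
    # Hypothetical stand-in; in practice this would call an LLM API.
    return "11"

queries = {
    "original": ("Anna has 5 apples and buys 2 bags of 3 apples each. "
                 "How many apples does she have?", "11"),
    "perturbed": ("Lucas has 9 apples and buys 4 bags of 6 apples each. "
                  "How many apples does he have?", "33"),
}

for variant, (question, expected) in queries.items():
    answer = ask_model(question)
    status = "correct" if answer.strip() == expected else "incorrect"
    print(f"{variant}: expected {expected}, got {answer} ({status})")
```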
Could a robot with generative Artificial Intelligence become civilly liable for its actions under the law?
If we conclude that a distinct form of AI reasoning is viable, the question arises of the possible liability of robots equipped with such technology.
In Argentine law, the possibility of attributing liability to a robot with generative AI poses significant challenges from a legal point of view. A generative AI robot is an object or good (usually a non-registrable movable property) that operates under the control or supervision of a human or legal entity.
According to the Civil and Commercial Code of the Nation, only human and legal persons can hold rights and obligations (art. 22 et seq.). Therefore, robots do not have legal personality or legal autonomy and cannot be subject to rights or obligations, which excludes the possibility of being directly held responsible.
If a robot causes damage, the responsibility falls on the people or entities that own, design, or control it, according to the principles of the Civil Code. Article 1757 of the CCCN establishes that whoever creates a risk must be liable for the damages. If a robot with generative AI is considered a "risky thing", its owner or guardian (art. 1758) will be liable for the damage it causes, unless it proves the existence of an external cause (fortuitous event, fault of the victim or a third party). The autonomous operation of generative AI could be interpreted as an intrinsic risk that generates strict liability.
Article 1753 of the CCCN regulates the manufacturer's liability for defective products. If a robot with generative AI causes harm due to design, programming, or implementation flaws, the manufacturer could be liable if it fails to ensure a reasonable safety standard. This includes defects in training data, failures in algorithms, or inadequacy in risk mitigation measures.
To attribute liability, it is assessed whether the damage was foreseeable and whether the human (or the responsible entity) was at fault in using or supervising the robot. If the damage arises from improper or negligent use of the robot by the user, the liability falls on the latter. Still, if the robot generates harmful results due to unforeseeable autonomous decisions (the so-called "black box" of the AI), it will be analyzed whether there were reasonable control or supervision measures.
In several countries, there is still no specific regulation on AI liability. However, general principles of law and international regulatory frameworks can be used as a guide, such as the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021), which emphasizes ensuring that AI technologies are designed and used responsibly and that developers and operators are held accountable. Under current Argentine law, a robot with generative AI cannot be directly responsible since it is not a subject of law. The responsibility lies with the humans or entities that design, control, or use the robot. However, to address these technologies' legal and ethical challenges, it will be necessary to update the regulations and develop specific principles that complement the general provisions of the CCCN. Although some legislative projects would give robots an "electronic personality," the idea still generates doctrinal debate.
Venturing a conclusion
AI is no longer a mere tool but a partner to drive innovation and accelerate the pace of discovery.
Large language models (LLMs) have demonstrated an impressive ability to help researchers in various domains by answering scientific questions. For example, the so-called Virtual Laboratory, developed in 2024, is a research collaboration between AI and humans to conduct sophisticated and interdisciplinary scientific research. This collaboration exemplifies a new paradigm in using AI for scientific research, moving from AI as a simple tool (e.g., AlphaFold to predict structures) to AI as an active partner collaborating with human researchers.
This Virtual Lab is based on a multi-agent architecture, where different LLM "agents" with various specialties (computational biology, immunology, machine learning, etc.) discuss and propose scientific solutions, while the human researcher guides the high-level discussions and oversees the practical implementation. The experience of the laboratory designing nanobodies for SARS-CoV-2 demonstrates that it is possible to carry out complex and interdisciplinary projects quickly and efficiently through close interaction between humans and AI agents. Despite the technical challenges inherent in the current state of LLMs (such as the lack of access to very recent literature or the need for careful prompt engineering), the human-AI team managed to produce 92 experimentally validated nanobodies, showing promising results and opening avenues for future developments.
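The general pattern can be sketched in a few lines of Python, assuming a hypothetical call_llm function in place of a real chat-completion API; the personas and the agenda below are illustrative assumptions, not the actual prompts used in the Virtual Lab.

```python
# Sketch of a round-based multi-agent discussion: each "agent" is the same LLM
# prompted with a specialist persona, and the human researcher sets the agenda.
# call_llm is a hypothetical placeholder; personas and agenda are illustrative.

AGENT_PERSONAS = {
    "Immunologist": "You are an immunologist. Comment on antigen binding.",
    "ML specialist": "You are a machine-learning specialist. Comment on model choice.",
    "Computational biologist": "You are a computational biologist. Comment on protein structure.",
}

def call_llm(persona: str, conversation: str) -> str:
    # Hypothetical stand-in; a real system would call a chat-completion API here.
    return f"[{persona.split('.')[0]} replies to the discussion so far]"

def run_meeting(agenda: str, rounds: int = 2) -> list:
    """Collect a transcript in which each specialist agent speaks once per round."""
    transcript = [f"Human researcher: {agenda}"]
    for _ in range(rounds):
        for name, persona in AGENT_PERSONAS.items():
            reply = call_llm(persona, "\n".join(transcript))
            transcript.append(f"{name}: {reply}")
    return transcript

for line in run_meeting("Propose a strategy to design nanobodies for SARS-CoV-2."):
    print(line)
```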
Ultimately, this collaborative approach highlights three main points:
Interdisciplinary collaboration: The interaction of agents with different backgrounds leads to comprehensive solutions encompassing multiple perspectives.
Complementary roles: The human researcher provides context and makes strategic decisions, while AI agents suggest, critique, and refine ideas.
Flexibility and continuous improvement: The Virtual Lab architecture can be adapted to different scientific projects and replace the underlying LLM with more advanced versions, allowing for more robust solutions as the technology improves.
Summing up, this experience sets a precedent for how collaborative science could evolve: AI ceases to be a mere tool and becomes a partner that drives innovation and accelerates the pace of discovery in various fields.
A new form of non-human intelligence
It is not true that AI should "reason like a human"; that could even be a limitation. In reality, I believe we are dealing with a form of non-human intelligence, which some call "alien," although it is still an indirect reflection of human capabilities, since its foundations lie in our programming and training.
Non-human intelligence could be defined as the ability to process information, learn, and make decisions autonomously but without the biological limitations of the human brain.
It would be "another form of intelligence" emerging within our environment, although it differs from human intelligence in many fundamental ways and can manifest itself in different forms. It can be, for example, artificial systems based on non-human biological networks or combinations of these, forms of intelligence observed in other species, or a combination of biological and technological elements (augmented or bio-hybrid intelligence) simulating human intelligence.
Generative AI, such as language models (GPT, DALL-E), has capabilities that can be considered rudimentary forms of intelligence: these systems learn complex patterns in large amounts of data, can generate creative content (texts, images, music) of kinds previously attributed exclusively to humans, and make algorithmic decisions to "reason" about questions or problems.
Although they lack awareness or intentionality, their ability to produce content that simulates human cognitive processes raises questions about whether they should be considered an incipient form of non-human intelligence. In the future, advanced AI networks, combined with other advanced technologies, could eventually exhibit emerging behaviors that exceed the capabilities of individual human intelligences, such as interconnected networks of smart devices or collaborating AI models that develop skills not foreseen by their programmers.
Therefore, generative AI could be considered intelligent, but with important nuances, given that it has no consciousness, emotions, or intentionality. It simulates human intelligence but does not truly understand; it learns from previous data but does not create new knowledge in the human sense of the term, although it can combine information in ways that seem innovative.
In conclusion, generative AI is an important step towards new forms of non-human intelligence, although it is still limited. The actual emergence of more advanced non-human intelligence could depend on future advances in areas such as machine learning, quantum computing, and bioengineering. These developments would not only expand the possibilities of intelligence but could redefine what it means to "be intelligent."
The debate remains open.
—
Horacio R Granero is a Lawyer (1969), Doctor in Legal Sciences (summa cum laude) (2001). University professor since 1970 and Emeritus Professor in 2018. He served as Academic Secretary of the Faculty of Law, Director of the Graduate Department, and founder in 1997 of the Postgraduate Program for Lawyers Specialized in High Technology Law (UCA). In 1987, he founded Albremática, the publishing company behind elDial.com, where he serves as President and leads the Sherlock-Legal Artificial Intelligence Division. He is a member of the Institute of Law and Informatics of the Public Bar Association. Dr. Granero is also a guest professor at various universities in Argentina and abroad, having participated in seminars and published works in his area of expertise. He is the author of several books in his field. In 2024, he advised the Science and Technology Commission of the National Chamber of Deputies on Artificial Intelligence regulation and delivered lectures on the subject at the Complutense University of Madrid. hrg@graneroabogados.com.ar