LLMs reveal not that they are human-like, but that humans are machine-like

In 2022, Blake Lemoine, an engineer employed by Google, claimed that LaMDA (Language Model for Dialogue Applications), a conversational AI built by the company, was actually sentient and should essentially be regarded as human. He based this claim on LaMDA’s ability to express what seemed like clear emotional statements. Such an ascription of sentience is not novel; it goes back further in the annals of computing than one might expect. What it tells us flips the story about human and machine on its head.

In the mid-1960s, at the Massachusetts Institute of Technology, computer scientist Joseph Weizenbaum developed a program called Eliza, which would go on to become one of the most iconic early examples of artificial intelligence. Designed to simulate a conversation between a patient and a psychotherapist, Eliza used a simple pattern-matching technique that rephrased users’ statements into questions, giving the illusion of understanding. For example, if someone typed, “I feel sad,” Eliza might respond with, “Why do you feel sad?” The program’s most famous script, known as “Doctor,” was deliberately vague, reflective, and open-ended.
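To make the mechanism concrete, the following is a minimal, hypothetical sketch of Eliza-style pattern matching in Python. It is not Weizenbaum’s original code; the rules, response templates and function names are invented for illustration, but the principle, matching a keyword pattern and rephrasing the user’s statement as a question, is the same.

```python
import random
import re

# Illustrative Eliza-style rules: a regex keyword pattern paired with
# question templates that reuse the captured fragment of the user's input.
# Weizenbaum's original "Doctor" script used a much richer set of keywords.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (re.compile(r"my (.*)", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

# Vague fallbacks for when nothing matches, preserving the illusion of attention.
DEFAULTS = ["Please go on.", "Can you elaborate on that?", "I see."]


def respond(user_input: str) -> str:
    """Rephrase the user's statement as a question via simple pattern matching."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return random.choice(DEFAULTS)


if __name__ == "__main__":
    print(respond("I feel sad"))  # e.g. "Why do you feel sad?"
```

A few dozen such rules are enough to sustain a surprisingly convincing conversation, which is precisely what made the reactions to Eliza so striking.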

What astonished Weizenbaum was not the technical performance of the program (by his own admission, it was relatively simplistic) but the reactions it provoked. During testing of Eliza’s “Doctor” script, Weizenbaum allowed various people to interact with the program. Among them was his secretary, who sat down to converse with Eliza as if it were a real psychotherapist. Despite knowing it was a computer program, she reportedly became emotionally engaged in the conversation and eventually asked to be left alone in the room with Eliza so she could continue privately.

The legacy of Eliza endures not only in the history of computer science but also in ongoing debates about AI. The phenomenon came to be known as the “Eliza effect”: the tendency to mistakenly attribute understanding or emotion to machines. As the story of Blake Lemoine shows, it remains relevant today, as more advanced systems like Gemini, Claude and ChatGPT produce text that can seem deeply human. Though the technology has evolved far beyond Eliza’s rudimentary scripts, the attribution of sentience to computer systems remains.

The Eliza effect is in fact much more general and a staple of human psychology. In an article I wrote two decades ago, I called this the Hyperactive Intentionality Detection Device, or HIDD, a kind of cognitive bias along the lines of those identified by Amos Tversky and Daniel Kahneman. The HIDD is a theoretical concept that describes a human tendency to attribute intentionality or agency to events, particularly those that are ambiguous or difficult to explain. This cognitive bias is thought to have evolved as a survival mechanism: in uncertain situations, it is generally safer to assume that a potential threat, such as a predator or enemy, exists than to overlook it. As a result, humans may be predisposed to perceive purposeful agents, intention and sentience behind events and occurrences, even when none exist. Consequently, the belief that LLMs are human-like, the “Eliza effect”, can be attributed to this cognitive bias.

It is therefore no surprise that LLMs like ChatGPT, Gemini and Claude are perceived to be human-like, although the case for their human-like qualities is stronger than for anything that has come before them.

LLMs are more advanced pattern recognition systems than Eliza was, but their essence is no less machine-like. The truly surprising insight is therefore not that AI is human-like but that humans are machine-like. Much of what we see as specifically human is actually just a mechanism, an algorithm, whether simple, as in Eliza, or more complex, as in LLMs. We learn to solve problems by following algorithms, much as LLMs do, so problem solving is not a distinctive or in any way special feature of humanity.

The human-like qualities of LLMs are therefore rooted in human cognitive biases that are a deep-seated aspect of our thinking. Instead of machines behaving like humans, they reveal that humans behave more like machines than we would like to admit.

Photo by Mark Williams on Unsplash

