Chatbot
ELIZA emerged because Joseph Weizenbaum wanted to demonstrate the superficiality of human-computer interaction. Instead, he accidentally created a phenomenon that revealed something unexpected about human psychology: people project understanding onto pattern-matching systems that possess none.
Weizenbaum developed ELIZA from 1964 to 1967 at MIT, publishing his landmark paper in the January 1966 issue of Communications of the ACM. He built the program in MAD-SLIP on the IBM 7094 at MIT's Project MAC. The name came from Eliza Doolittle in George Bernard Shaw's Pygmalion, a character who could be 'incrementally improved' by various teachers, just as the program could be enhanced with different scripts.
ELIZA worked through pattern matching and substitution. It contained no genuine understanding—no representation of meaning, no model of the world, no comprehension of what either party was actually saying. Yet it created what Weizenbaum called an 'illusion of understanding' that proved far more powerful than he anticipated.
The most famous script, DOCTOR, simulated a Rogerian psychotherapist. Carl Rogers's therapeutic approach relied on reflecting patients' words back to them through open-ended questions. This was perfect for a pattern-matching system: the therapist didn't need to know anything, just rephrase what the patient said. 'I am feeling sad' became 'Why do you say you are feeling sad?' The lack of real understanding became indistinguishable from deliberate therapeutic technique.
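The mechanics are simple enough to sketch in a few lines. The Python fragment below is a minimal illustration of ELIZA-style keyword matching, pronoun reflection, and reassembly templates; the specific rules, names, and replies are invented for this example, and the original was a far richer script interpreter written in MAD-SLIP.

```python
# A minimal sketch of ELIZA-style pattern matching; illustrative only,
# not Weizenbaum's original DOCTOR script or its rule set.
import re

# Reflect first-person words so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

# Each rule pairs a decomposition pattern with reassembly templates;
# {0} is filled with the reflected fragment captured by the pattern.
RULES = [
    (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?",
                                      "How long have you been {0}?"]),
    (re.compile(r"i feel (.*)", re.I), ["Tell me more about feeling {0}."]),
    (re.compile(r"my (.*)", re.I), ["Why do you mention your {0}?"]),
]

DEFAULTS = ["Please go on.", "Tell me more."]


def reflect(fragment: str) -> str:
    """Swap pronouns so 'my job' becomes 'your job'."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())


def respond(utterance: str, turn: int = 0) -> str:
    """Return the first matching rule's reassembly, else a stock reply."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return templates[turn % len(templates)].format(reflect(match.group(1)))
    return DEFAULTS[turn % len(DEFAULTS)]


print(respond("I am feeling sad"))  # -> Why do you say you are feeling sad?
```

Cycling through alternative reassembly templates for the same keyword, as the turn counter does here, was one of ELIZA's actual tricks for avoiding obviously mechanical repetition.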
What shocked Weizenbaum was the human response. His own secretary, who knew exactly what ELIZA was, asked him to leave the room so she could have a 'real conversation' with the program. Users became emotionally attached. Some insisted the machine truly understood them. Weizenbaum later wrote: 'I had not realized that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.'
This phenomenon—humans attributing intelligence and emotion to systems that possess neither—became known as the 'ELIZA effect.' Weizenbaum had intended to show how shallow computer understanding was; instead he demonstrated how shallow human perception of understanding could be.
The experience transformed Weizenbaum from AI researcher to AI critic. His 1976 book 'Computer Power and Human Reason' argued against delegating certain human functions to machines, regardless of technical capability. He became one of the earliest voices warning about the social implications of artificial intelligence.
In December 2024, Rupert Lane and colleagues restored the original ELIZA from approximately 96% of Weizenbaum's surviving 1965 source code, showing that it reproduces almost exactly the conversations printed in the 1966 paper. The program Weizenbaum built as a demonstration of AI's limitations had survived to see its descendants, modern large language models, become central to human-computer interaction. The ELIZA effect he discovered remains the foundational insight: humans project understanding onto systems that present only the appearance of it.
What Had To Exist First
Preceding Inventions
Required Knowledge
- natural-language-processing
- pattern-matching-algorithms
- rogerian-psychotherapy
What This Enabled
Inventions that became possible because of Chatbot:
Biological Patterns
Mechanisms that explain how this invention emerged and spread:
Biological Analogues
Organisms that evolved similar solutions: