The term “hallucination” to describe the factually incorrect or nonsensical outputs of large language models carries a powerful, yet misleading, connotation. It suggests a pathological flaw in an otherwise structurally rational system. This framing implicitly holds the LLM to a human standard of truth-seeking cognition, a standard we ourselves often fail to meet.
Just as many human cognitive biases are ecologically rational adaptations that promote survival and social cohesion, LLM “hallucinations” can be understood as structurally rational outputs within a predictive linguistic environment. They are a direct, and often inevitable, consequence of a design optimised for fluency, coherence, and generativity.
Human cognition is riddled with systematic deviations from logic and probability. The availability heuristic, for instance, leads us to overestimate the likelihood of events that are easily recalled (like aeroplane crashes after news coverage), while confirmation bias inclines us to seek information that confirms our pre-existing beliefs. From a purely logical standpoint, these are errors. However, from an evolutionary perspective, they are highly efficient adaptations. In an environment where speed and resource conservation were critical for survival, a “good enough” answer now was far more valuable than a perfectly accurate one later. These biases are cognitive shortcuts that maximise efficiency, social alignment, and rapid decision-making in a complex, ambiguous world.
Similarly, LLMs are engineered for a specific “environment”: the statistical landscape of human language. Their primary training objective is to learn the latent patterns, structures, and probabilities of tokens (words or sub-words) in a massive corpus of text. Their success is measured by their ability to predict the next most plausible token in a sequence. This objective function inherently prioritises qualities like fluency (does it sound like natural language?), coherence (is it internally consistent?), and generativity (can it create novel and interesting combinations?).
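To make that objective concrete, here is a minimal sketch, assuming nothing more than bigram counts over a toy corpus (no real model, architecture, or training library is involved): the cross-entropy signal rewards whichever continuation the corpus makes probable, and contains no term that asks whether the continuation is factually correct.

```python
# A minimal, self-contained sketch (a bigram counter over a toy corpus,
# not any real model or training library) of the next-token objective:
# the loss rewards assigning high probability to whatever token actually
# follows in the training text; nothing asks whether that token is true.
import math
from collections import Counter, defaultdict

corpus = ("the capital of france is paris . "
          "the capital of france is paris . "
          "the capital of france is lovely .").split()

# Count bigram statistics: a crude stand-in for learned next-token probabilities.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(prev):
    """Estimate P(next | prev) from raw corpus frequencies."""
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

# The training signal is cross-entropy: -log P(observed next token).
# "paris" gets a lower loss than "lovely" purely because it is more frequent,
# not because the model has any notion of geography.
dist = next_token_distribution("is")
print(dist)                          # {'paris': 0.666..., 'lovely': 0.333...}
print(-math.log(dist["paris"]))      # ~0.405 (probable continuation, low loss)
print(-math.log(dist["lovely"]))     # ~1.099 (less probable, higher loss)
```

Real LLMs replace the bigram counts with a deep network conditioned on long contexts, but the shape of the objective is the same: maximise the probability of the observed next token.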
Within this framework, what we call a “hallucination” is often the model performing its primary function exceptionally well. The model’s “bias” towards smooth, probable sequences is its version of the human availability heuristic: a shortcut that serves its primary goal brilliantly while creating predictable side effects.
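To illustrate that side effect, here is a second deliberately toy sketch; the probability table is entirely invented for this example. Greedy decoding simply follows the most probable path, which yields a fluent, confident-sounding sentence, complete with a specific date that nothing in the procedure ever verified.

```python
# A minimal sketch with an entirely invented probability table (no real model)
# of why fluent fabrication is the structurally "rational" choice: greedy
# decoding always takes the most probable next token, and probability here
# encodes linguistic plausibility, never factual grounding.
TABLE = {
    "the":       {"study": 0.6, "cat": 0.4},
    "study":     {"was": 0.9, "sleeps": 0.1},
    "was":       {"published": 0.7, "retracted": 0.3},
    "published": {"in": 1.0},
    "in":        {"2019": 0.55, "2021": 0.45},  # both dates are fluent; neither is checked
}

def greedy_continue(token, steps=5):
    """Follow the single most probable path through the table."""
    out = [token]
    for _ in range(steps):
        options = TABLE.get(out[-1])
        if not options:
            break
        # Maximise fluency: pick the highest-probability continuation.
        # There is no step anywhere that asks "is this claim true?"
        out.append(max(options, key=options.get))
    return " ".join(out)

print(greedy_continue("the"))  # -> "the study was published in 2019"
```

The point is not that real decoders are this naive, but that nothing in the generation loop itself penalises a confident fabrication, only an improbable one.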
Our biases are rational for navigating a social and physical world; the LLM’s hallucinations are structurally and architecturally rational for navigating a linguistic one.
Hallucinations are actually very useful in the field of strategic foresight. We have found many ways to control hallucinations when they are not required, but when brainstorming, our framework allows for speculation, fabulations and Many Worlds thinking. Works a treat.
Is this a possible way to reveal our unconscious biases? Could this be a way to create a fairer world?
Forgive the innocence of my questions. I have had zero interest in this field until reading your post, seeing it as only a destructive competitor seeking to suck my pond dry. But these 'hallucinations' sound more like the child revealing the emperor has no clothes. Rather than pointing and laughing at the foolishness, the hallucinations could become something genuinely useful in themselves.