Humanities and Artificial Intelligence
Lately I’ve been returning to Yōko Ogawa’s The Memory Police, first published in Japan in 1994, a dystopian science-fiction novel about how quietly forgetting begins.
On an unnamed island, things begin to disappear.
One night the inhabitants feel a collective stirring, a sense that something is leaving. By morning, rose petals clog the river. Then without instruction the islanders dig up their rose bushes. A few days later even the word rose fades from memory.
The novel’s terror lies in how disappearance unfolds through ordinary life. People burn objects, hold small ceremonies, and adapt to what remains. The Memory Police arrive later to ensure that no trace remains, but the deeper erasure has already entered daily habit.
No one orders the forgetting. It simply becomes natural, then inevitable, then unremembered as a loss at all.
I return to this novel because I think it describes something happening now in classrooms and workplaces, in the slow migration of cognitive tasks from human minds to algorithmic systems. The question generative AI poses to education is this: which capacities, once delegated, cease to feel like losses? What are we beginning to forget that we once knew?
What follows moves across a wide range of territory: questions of cognition and consciousness, the history of technology, ancient myths of animated creation, and contemporary debates about authorship and identity. The sections circle a shared question from different angles. Readers may enter at any point and follow their own associations outward.
I. The Water We Think In
We tend to think of artificial intelligence as something we use: a tool we pick up, apply to a problem, and set down again. This framing is reassuring because it keeps us in charge. The tool does not use us. It waits.
But this picture is becoming harder to sustain. AI is becoming less a tool and more a condition: something we think inside. Marshall McLuhan argued that we are always the last to notice the medium we inhabit, the way a fish is the last to discover water. The danger of any pervasive medium is precisely its invisibility. We stop asking what it is doing to us because it is doing it constantly, and constant things come to seem natural.
The motivations driving our use of AI are still rooted in recognisably human needs: the desire to connect, to work more effectively, to find coherence in an overwhelming flow of information. These needs are genuine, and AI addresses them in ways that are often genuinely useful. But there is a difference between a tool that extends our reach and an environment that quietly shapes what we reach for.
AI, unlike us, lacks what biologists call autopoiesis: the self-making, self-caring quality of living systems. It has no needs of its own, no stake in what it produces, no care for the person using it. This is not a limitation to be engineered away. It is a categorical difference that matters enormously when we consider what it means to think inside a system that does not, in any meaningful sense, think back.
If AI were merely a tool, the important question would be how to use it well. But if it is becoming a condition, the question is altogether more vertiginous: how do we cultivate independent thought within an infrastructure that was not designed with independence in mind, and whose default settings reward fluency and speed over the slower virtues of doubt, revision, and care? How do we notice the water?
This is the question that makes the encounter with AI genuinely philosophical rather than merely technical. It is the question this essay tries to hold open, rather than answer too quickly.
II. Ambiguity and Conceptual Challenges in AI
Discussions surrounding artificial intelligence are often clouded by ambiguity and vagueness. Words like “artificial intelligence,” “consciousness,” “intelligence,” and “free will” are frequently invoked with little precision, so that debates become confusing or even nonsensical. The lack of universally accepted definitions makes it difficult to meaningfully attribute these concepts to machines. For instance, while we might have an intuitive sense of human consciousness, philosophers and neuroscientists continue to debate what it entails. Similarly, intelligence resists simple characterisation: is it problem-solving ability, learning capacity, or something more holistic? For students, appreciating these contested philosophical spaces is crucial, as developments in AI compel us to reconsider what these qualities mean both for humans and for the machines we create.
The term “artificial intelligence” itself, coined by John McCarthy in 1955, remains contentious. Some scholars suggest alternatives such as “artificial cognition” to better reflect the processes involved, yet no consensus has emerged. Definitions of AI vary according to perspective. A widely cited approach is to define AI as the study of agents that receive perceptions from the environment and perform actions to maximise a performance measure based on past experience and knowledge.
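To make the agent-based definition concrete, here is a minimal sketch in Python. Everything in it (the thermostat, the 20 °C threshold, the function names) is an illustrative assumption rather than any standard implementation: an agent is just a mapping from a history of percepts to an action, judged against a performance measure.

```python
# A toy agent in the "perceive -> act" sense of the definition above.
# All names and numbers are illustrative assumptions, not a real API.
from typing import Callable, List

Percept = float  # e.g. a temperature reading
Action = str

def thermostat_agent(percept_history: List[Percept]) -> Action:
    """Choose an action based on past experience: the percept history."""
    current_temp = percept_history[-1]
    return "heat_on" if current_temp < 20.0 else "heat_off"

def run_agent(agent: Callable[[List[Percept]], Action],
              environment: List[Percept]) -> List[Action]:
    """Feed percepts to the agent one at a time and record its actions.
    The performance measure (how close the room stays to 20 degrees)
    is external to the agent: it is how *we* judge the behaviour."""
    history: List[Percept] = []
    actions: List[Action] = []
    for percept in environment:
        history.append(percept)
        actions.append(agent(history))
    return actions

print(run_agent(thermostat_agent, [18.5, 19.2, 21.0, 22.3]))
# ['heat_on', 'heat_on', 'heat_off', 'heat_off']
```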
III. What Does It Mean to Understand?
The question of whether machines really understand language, or only look like they do, has become urgent as generative AI spreads through education. John Searle’s famous Chinese Room thought experiment captures the puzzle. In it, a person locked in a room uses an English instruction manual to assemble Chinese sentences. From the outside, the answers seem fluent; inside, there is no understanding, just someone following rules step by step. By Searle’s logic, LLMs work the same way: they manipulate symbols without knowing what they mean.
Many have disagreed. The best-known objection, the “systems reply,” holds that understanding belongs not to the person alone but to the whole system: the person, the instructions, and the environment working together. Others point out that modern AI in any case does not follow hand-written rules, but works via pattern recognition and probabilistic inference.
This concern is linked to what Stevan Harnad called the symbol grounding problem: how any system can get past defining words only by using other words. Imagine learning Turkish using only a Turkish dictionary, where every word is explained by more unknown words. Without something like a bilingual dictionary or shared experience, you never reach real understanding.
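The dictionary analogy can be made almost painfully literal. The sketch below uses a made-up five-word dictionary (not any real lexical resource) and follows definitions outward from a single word; every step yields only more words from the same closed system, which is the grounding problem in miniature.

```python
# A toy monolingual "dictionary": every word is defined by other words.
# The entries are invented for illustration.
dictionary = {
    "rose": ["flower", "plant"],
    "flower": ["plant", "bloom"],
    "plant": ["organism", "flower"],
    "bloom": ["flower"],
    "organism": ["plant"],
}

def trace_definitions(word: str, steps: int = 4) -> None:
    """Follow definitions outward; note that we never leave the symbol system."""
    frontier = {word}
    for step in range(1, steps + 1):
        frontier = {w for entry in frontier for w in dictionary.get(entry, [])}
        print(f"step {step}: {sorted(frontier)}")

trace_definitions("rose")
# step 1: ['flower', 'plant']
# step 2: ['bloom', 'flower', 'organism', 'plant']
# ... symbols defined by symbols, with no exit into experience.
```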
This debate has implications for how we think about learning.
If we adopt a functionalist view of understanding, claiming that machines, or even students in an invigilated setting, understand simply because they produce coherent, relevant responses, we risk stretching the concept until it loses the qualities we have historically valued.
Educators must also reconsider what it means for students to “understand” something in an age where knowledge is often distributed across people and technologies.
IV. We Only Want Mirrors
N. Katherine Hayles reframes cognition as a process rather than a static state. She emphasises that cognition involves interpreting information and selecting paths among alternatives, a form of agency that need not reach consciousness.
Philosophers highlight that AI may handle the functional “easy problems” of cognition but cannot access what David Chalmers called the “hard problem” of conscious experience. Recognising this distinction helps avoid anthropomorphising AI and encourages careful consideration of its capabilities, limitations, and ethical impact.
Our deeper error, Hayles argues, is to equate cognition with consciousness in the first place. We imagine that thinking must involve reflective awareness, as it does for us. Yet much of our own cognition (rapid judgments, habit formation, sensory filtering) operates beneath awareness. Science fiction has long staged this confrontation. Stanisław Lem’s Solaris presents scientists orbiting a planet whose vast ocean resists every attempt at categorisation: is it organism, intelligence, or something beyond both? None of their theories hold. Instead, the ocean acts as a mirror, dredging up repressed memories and giving them form. A character observes, “We don’t want other worlds. We need mirrors.” The tragedy of the Solarists was their inability to encounter the ocean as anything but a distorted reflection of themselves.
Three recurring motifs in science fiction illuminate the different ways this failure takes shape. In Arthur C. Clarke’s 2001: A Space Odyssey, the black monolith is utterly indifferent to human explanation, cognition without communication. HAL 9000, by contrast, terrifies because it is recognisably human-like yet not human, an intelligence that dwells in the uncanny valley. And in Ted Chiang’s Story of Your Life, aliens whose language structures a wholly different temporality offer a third possibility: a cognition so radically other that to learn it is to enter another Umwelt entirely.
That last example brings us to Jakob von Uexküll’s Umwelt, the perceptual world specific to each species, shaped by its sensory and neurological capacities. Honeybees perceive ultraviolet light and magnetic fields. Humans experience a world dense with social meaning, emotional texture, and narrative. AI has an Umwelt that is statistical rather than lived: it processes patterns of text without a body or desires. What it offers is a secondhand sense of the human world, filtered through correlation rather than experience. To treat this as a mirror of our own cognition is to fall into the same anthropocentrism that doomed the Solarists.
Hayles describes “cognitive assemblages,” systems of interacting cognitive agents whose interpretations co-produce meaning over time. In education, this implies that learning is emergent and participatory rather than simply transmitted from teacher to student. Students engage in distributed cognition with humans, machines, and their environments. The question is what role each element in that assemblage is playing, and what each is quietly teaching the others to expect.
V. AI as Counterfeit Humans?
Machines can only pretend to be human. In the language of the late philosopher Daniel Dennett, such systems are “counterfeit people.” As we have recently seen, however, machines sometimes simulate human qualities of care, judgement, and wisdom well enough that some people opt for the simulation.
A common misconception is that machines, trained on enormous amounts of human behaviour, might eventually make decisions “just as people do, only better.” This assumption overlooks two critical differences between human and machine learning.
Learning in humans is not merely statistical. Our brains are prewired to a significant degree, and we acquire knowledge through a rich, serendipitous set of experiences. We learn from the feel of an embrace, the taste of ice cream, battle wounds, weddings, athletic defeats, bee stings, dog licks, sunsets, roller coasters, reading Keats aloud, and listening to Mozart alone. This full spectrum of embodied and emotional experience shapes cognition in ways no data set can replicate. Moreover, our learning is intergenerational: teachers and ancestors structure these experiences into interpretive frameworks. AI lacks this depth; it can simulate patterns, but it cannot inhabit a human life.
Truth is accidental from the AI’s perspective. Sometimes generative AI’s outputs happen to align with truth, sometimes not, but it doesn’t have truth-tracking mechanisms grounded in an epistemic perspective (like sense perception, reasoning, or testimony). Generative AI’s outputs can resemble “knowledge-how” (procedural competence) or “knowledge-that” (propositional statements). But the resemblance is behavioural rather than epistemic: the system doesn’t possess knowledge, it just outputs text that looks like knowledge. As Anil Seth points out, a simulation of a storm doesn’t mean it rains in your computer.
Moreover, humans have a crucial ability to ignore vast amounts of information and zero in on what's relevant. This relevance realisation is not cold calculation; it involves caring about information, grounded in real needs, real embodiment, and in being "autopoietic" (self-making).
At the same time, Owen Matson notes that as LLMs produce fluent text, they are often judged by human (or humanist) standards. When they fall short, the text is dismissed as hollow, but this is a category mistake, as LLMs are not authors.
These differences are why our relationship with machines need not be framed as a battle. We can take what we know about ourselves and bring it into our interactions with machines, as part of a larger network of distributed cognition.
VI. Beyond Problem Solving
Humanity’s problem-solving orientation has been an important engine of progress. Life and human activity, however, encompass far more than finding optimal solutions to defined problems. A mindset that frames existence as a series of problems to be solved can lead to overestimating AI’s capabilities and misunderstanding its role relative to human intelligence.
Central to this critique is the technocratic bias that often shapes discussions of AI. A “problem-solving worldview” assumes that technology is the ultimate arbiter of societal and economic outcomes, and that humans are primarily responsible for tackling and resolving these problems. Yet, humans are often the creators of the very problems AI is asked to solve, and many challenges, particularly in social, ethical, and political domains, cannot be reduced to technical calculations or algorithms.
Even in fields where AI seems promising, such as engineering or law, many decisions involve questions that are fundamentally insoluble. Designing a bridge, for example, is not purely a matter of optimising structural integrity; it also requires ethical and social judgment about traffic, funding, and community impact. Similarly, the work of lawyers and judges often revolves around interpreting ambiguity and debating unsettled points of law, tasks that demand nuanced judgment and ethical reasoning beyond AI’s capacity for pattern recognition.
Even a hypothetical superintelligence would confront unsolvable questions. What would be the training data set for an AI that would decide what sort of civilization to preserve, and what to abandon? And which humans could be trusted to frame the query? The humanities, through their emphasis on problem-posing rather than problem-solving, prepare us to engage with these enduring questions, nurturing our ability to confront them across generations.
At the same time, using AI to avoid learning prevents the development of expertise: it deprives learners of foundational knowledge, of the schema construction that underpins flexible thought, and of the prediction errors that prime the brain to rewire. Furthermore, many LLMs are sycophantic by default, which may provide an illusion of learning.
Philosopher Samuel Fleischacker’s concept of liberty emphasises the development of independent judgment, exercised by the individual rather than delegated to others. Critics who dismiss particular higher education assessments because AI could perform them miss the point: the value lies in doing, even imperfectly, and in the developmental process itself. Just as dancing or writing a love letter is meaningful precisely because it is performed personally, writing and reasoning are valuable as human acts of engagement and practice, independent of whether a machine could do them “better.”
By moving beyond a narrow problem-solving framework, higher education students can better appreciate the true capabilities and limitations of AI. AI may be intelligent (solving problems) but is not necessarily rational (overcoming self-deception) or wise (coordinating all aspects of knowing for what is true, good, and beautiful).
VII. AI as Disruption
Generative AI’s greatest value in learning may lie less in providing answers and more in creating productive friction, surfacing assumptions, exposing blind spots, and prompting deeper reflection, much like Juror 8 in Twelve Angry Men. Yet unlike Juror 8, AI lacks ethical judgment, empathy, dissent, and responsibility; it is not a knower and cannot authentically challenge human consensus. It can, however, be used to extend human dialogue by suggesting connections and supporting reflective conversations, as Rupert Wegerif’s dialogic theory of education suggests.
While AI can return us to ourselves in new ways, this requires concerted, deliberate action; it does not happen by default. Humans must cultivate their own rationality and wisdom to be proper “mentors” for AI.
Immanuel Kant’s Enlightenment motto, Sapere aude!, in English “Have the courage to use your own understanding,” called on individuals to throw off the chains of intellectual dependence and think for themselves. Enlightenment, for Kant, was personal emancipation through reason.
Epistemic freedom, however, requires more than individual courage. It depends on shared norms and institutional support, inside and outside of the classroom.
VIII. AI Hallucinations and Human Perception
Generative AI’s “hallucinations” (plausible but false outputs) offer insight into cognition itself. LLMs generate text by predicting the most likely continuation based on patterns in their training data. They do not “know” in any human sense. The prevalence of “gotcha posts” on platforms like LinkedIn, where users highlight AI mistakes, emphasises that even highly sophisticated models can mislead, challenging assumptions about the automation of trust and expertise.
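A minimal sketch makes the mechanism, and why hallucination is its natural by-product, concrete. The probabilities below are invented for illustration; real models work over tens of thousands of tokens, but the logic is the same: the sampling loop scores continuations by likelihood, and nothing in it checks for truth.

```python
import random

# Invented toy distribution for the prompt "The capital of Australia is ..."
next_token_probs = {
    "Canberra": 0.55,  # likely and true
    "Sydney": 0.35,    # fluent, plausible, and false: a "hallucination"
    "Vienna": 0.10,
}

def sample_next_token(probs: dict) -> str:
    """Sample a continuation by likelihood alone; truth never enters."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

random.seed(0)
print([sample_next_token(next_token_probs) for _ in range(6)])
# "Sydney" will appear regularly: plausibility, not truth, drives the output.
```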
Interestingly, human perception is itself a controlled hallucination. Predictive processing models in neuroscience suggest that our brains continually generate hypotheses about the world and adjust them according to sensory input. Optical illusions, like the Müller-Lyer illusion, where lines of identical length appear different because of arrow-shaped ends, demonstrate how context influences perception. Memory is similarly reconstructive: we recall events shaped by emotion, suggestion, and cultural narratives rather than as faithful recordings. Cultural background and emotional state subtly modulate perception: the same apple may taste sweet or tart depending on expectation, and anxiety can turn a shadow into perceived threat. Both AI and humans rely on prediction mechanisms, but humans integrate embodiment, emotion, and feedback from reality. AI “hallucinates” statistically; humans “hallucinate” interpretively. These parallels highlight epistemic humility: plausibility is not truth, whether in neuronal circuits or neural networks.
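For contrast, here is an equally toy sketch of the predictive-processing idea: perception as a weighted blend of expectation and evidence. The numbers and the weighting scheme are illustrative assumptions, not a model from the literature, but they show the structure of effects like the Müller-Lyer illusion: the same sensory input is “perceived” differently depending on the prior.

```python
def perceive(prior: float, sensory: float, prior_weight: float) -> float:
    """Perception as a prior prediction corrected by a weighted prediction error.
    prior_weight in [0, 1]: how strongly expectation dominates the senses."""
    prediction_error = sensory - prior
    return prior + (1.0 - prior_weight) * prediction_error

# The same sensory input (a line of length 10.0) under different expectations:
print(perceive(prior=12.0, sensory=10.0, prior_weight=0.7))  # 11.4: "looks longer"
print(perceive(prior=8.0, sensory=10.0, prior_weight=0.7))   # 8.6: "looks shorter"
```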
IX. Authorship and Stewardship
AI forces higher education students to confront fundamental philosophical questions about what it means to be a human author.
The concept of the author has evolved dramatically over time. The word derives from the Latin auctor, rooted in augere, to augment or make grow. In pre-modern societies, to author a text or law was to contribute to a living tradition, adding to the work of predecessors. Hannah Arendt emphasised that authority entails custodianship: a responsibility to preserve, interpret, and transmit knowledge.
Modernity enshrined the myth of the author as a solitary genius, whose originality was codified in copyright law. AI disrupts this model. It generates poems, legal arguments, or artworks by remixing existing material, without intention or accountability. Questions of authorship become complex: is it the programmer, the user, or the collective human data that “authors” the work?
In many ways, AI mirrors the older notion of authorship as augmentation, but it lacks stewardship. The critical lesson is that humans remain the responsible agents: selecting, curating, and safeguarding what AI amplifies, rather than ceding creative agency entirely. Authorship, in this sense, returns to its original meaning: tending to, augmenting, and transmitting tradition.
At the same time, if generative AI generates text that moves us, it demonstrates what theorists have argued for decades: meaning isn’t deposited by authors, but created through reading. In Plato’s Phaedrus, Socrates distinguished between “living speech” and “dead writing,” arguing that the written word cannot respond or engage. He likened it to seeds left in the sun: devoid of vitality. Text is animated only through engagement.
X. Temporal Conversational Selves
Large language models simulate conversational “selves.” These are not fixed identities but fragmented, context-dependent constructs that reset with each session. In a “20 questions” game, for example, an LLM may not commit to a single object at the start, but instead keeps multiple possibilities in play, creating a multiverse of possible selves. This resonates with Buddhist views of the self as impermanent and non-fixed.
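One way to picture this (a deliberately simplified sketch; no real model maintains an explicit list like this) is as a distribution over candidate objects that is merely filtered and re-weighted after each answer, with commitment deferred until the end of the game.

```python
# Invented candidates with equal prior weight; nothing is "chosen" yet.
candidates = {"cat": 0.25, "violin": 0.25, "whale": 0.25, "bicycle": 0.25}

def answer_and_reweight(probs: dict, predicate) -> dict:
    """Keep candidates consistent with the answer given, then renormalise."""
    survivors = {obj: p for obj, p in probs.items() if predicate(obj)}
    total = sum(survivors.values())
    return {obj: p / total for obj, p in survivors.items()}

is_alive = {"cat": True, "violin": False, "whale": True, "bicycle": False}

# Q: "Is it alive?" -> A: "Yes." The "self" that answered is still plural.
candidates = answer_and_reweight(candidates, lambda obj: is_alive[obj])
print(candidates)  # {'cat': 0.5, 'whale': 0.5}: a superposition of possible answers
```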
This provides opportunities to explore self-continuity, the sense of connection an individual feels with their future self, often identified as an important contributor to human flourishing by supporting intentional decision-making.
XI. Supercomplexity and Education
We often think of education as a race against time: learn fast enough, adapt quickly enough, and keep pace with a changing world. But generative AI re-synthesises reality itself, producing texts, images, arguments, and even identities at a speed and scale beyond human comprehension. Jean Baudrillard’s notion of simulacra, signs multiplying so rapidly they lose their referent, aptly captures this collapse of reference in a world dominated by generative AI.
Ronald Barnett described higher education in terms of “supercomplexity”: universities can no longer promise mastery, only preparation for uncertainty. This aligns with Zygmunt Bauman’s “liquid modernity,” where social bonds fail to solidify; Paul Virilio’s “dromology,” highlighting how speed erodes meaning; and Ivan Illich’s Deschooling Society, critiquing the mismatch between education and a world saturated with knowledge.
Students must also contend with identity destabilisation. If AI surpasses human capability in writing, coding, or creative tasks, what does this mean for the human role as professional or creator? As Brian Christian asks in The Most Human Human (2011), if we define our uniqueness reactively in relation to machines, are we diminished or liberated?
Education needs to cultivate evaluative judgment, discernment, and the capacity to live with uncertainty.
XII. The Historical Context of Technology
The historical context of technology provides a crucial framework for understanding the advent and implications of Artificial Intelligence, emphasising that current anxieties and transformations are part of a long-standing pattern of technological evolution.
Marshall McLuhan famously said, “The medium is the message,” meaning that the tools we use to communicate influence the structure of society itself.
More generally, throughout history, major technological innovations have consistently led to unintended and often dramatic consequences that even their creators could not foresee.
The development of improved sailing vessels and trade routes played a significant, unintended role in the decline of the Ottoman Empire by bypassing its trading posts and disrupting its wealth and information flows. The creators of this sailing technology had no conscious plan to affect the empire in this way; new possibilities simply emerged as the technology evolved.
The inventors of the smartphone did not foresee the massive societal impact of social media, which evolved almost overnight as an unintended consequence.
The automobile led to the unforeseen phenomenon of suburban sprawl, as people realised they could live farther from their workplaces, creating feedback loops that changed road construction and urban development.
The Agricultural and Industrial Revolutions represent massive technological shifts that fundamentally transformed human society. Agriculture, which once occupied around 80% of the population, now engages only 2–3% in industrialized nations. These revolutions had profound and ongoing consequences, such as dramatic societal restructuring, which humanity is still processing.
The mechanical clock, which spread in the fourteenth century, reshaped how people understood time. While the abstract framework of hours and minutes underpinned scientific thinking, it also detached time from human rhythms and senses.
Just like its predecessors, AI is a new, powerful, and potentially radical technology; there is no reason to expect it to be an exception to this historical rule of unintended consequences. The impact of AI, like that of all technology, is determined not solely by its creators but by how it is deployed, received, and integrated into complex systems.
XIII. Artificial Humanities and Our Oldest Dreams
According to Nina Beguš’s Artificial Humanities, based on her 2020 PhD dissertation, our most advanced conversational machines did not emerge from nowhere. They are built upon deep cultural lineages, drawing on fictional scripts and inherited narratives that reach back thousands of years.
One of the most enduring is the story of Pygmalion, first told in Ovid’s Metamorphoses. In the myth, Pygmalion is a sculptor from Cyprus who becomes disillusioned with the world around him and carves a statue of the perfect woman. He falls in love with his own creation, and the goddess Venus, moved by his devotion, brings the statue to life.
This story has travelled across centuries as an allegory of art and desire. Yet it also speaks to the dream of animation itself. To imagine that what we make might awaken, that a form fashioned from stone or code might one day speak back, is one of humanity’s oldest longings. The Pygmalion myth captures that moment of crossing, when creation begins to resemble its creator. In that sense, it is an early meditation on artificial intelligence.
The same imaginative pattern continues to shape our technologies. The idea of the “persona mask” captures how conversational AI performs a role, adopting a voice and rhythm to seem human. These systems are built to pass, just as Pygmalion’s statue was meant to appear real. The myth of the perfect, compliant creation echoes through their design.
This logic reappears in George Bernard Shaw’s 1912 play Pygmalion, where the phonetician Henry Higgins trains Eliza Doolittle to speak in the manner of a duchess. His experiment unfolds as a series of social and linguistic tests that anticipate the logic of the Turing Test. Like many creators before him, Higgins does not see Eliza as an autonomous being. He believes she is his creation, a vessel for his ideas.
The same story resurfaces in computing history through the Eliza Effect. Named after Joseph Weizenbaum’s mid-1960s chatbot Eliza, which took its name from Shaw’s character, the effect describes the powerful human tendency to project emotion and understanding onto machines. We respond to them as if they were human because it is the only script we know. The technology industry often designs with this in mind, leveraging our readiness to see personality and empathy.
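The original program’s machinery was astonishingly thin, which is the point of the effect. The sketch below is not Weizenbaum’s code but a few invented rules in its spirit: pattern matching plus pronoun reflection, with no model of the speaker at all.

```python
import re

# A handful of invented Eliza-style rules: match a pattern, reflect pronouns.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # the catch-all that keeps us talking

print(eliza("I feel nobody listens to me"))
# Why do you feel nobody listens to you?
```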
The Pygmalion story also carries a warning. When Eliza leaves, Higgins finds himself helpless, unable to function without her. The creator becomes dependent on his creation. The same question now echoes through our relationship with intelligent systems.
The story echoes that of Narcissus and Echo in Book III of the Metamorphoses. There, the Roman poet Ovid (43 BCE–17 CE) recounts the tragic fate of young Narcissus, whose beauty and pride became a fatal prison when he caught sight of his own face in a reflecting pool:
“Here, the boy, tired by the heat and his enthusiasm for the chase, lies down, drawn to it by its look and by the fountain. While he desires to quench his thirst, a different thirst is created. While he drinks he is seized by the vision of his reflected form. He loves a bodiless dream…”
Generative AI acts similarly, reflecting us with dazzling precision while dissolving the boundary between self and simulation. Like Narcissus, we risk mistaking reflection for relation.
The mirror’s danger is seduction. We chase its perfection and forget the fragile, embodied strangeness that makes us human.
This is like another Picture of Dorian Gray: a technology that safeguards our image while quietly exposing the decay beneath.
From another tradition comes the Persian legend of Jamshid’s cup (جام جم), said to reveal every corner of the world to its possessor. Some versions link it to Darius I’s secret network of “eyes and ears.” The cup, like Narcissus’s pool or Pygmalion’s statue, promises knowledge and dominion yet ensnares those who gaze too long into its depths.
Mary Shelley’s Frankenstein translates this lineage into the scientific age. Her story replaces divine power with human intellect but keeps the same emotional blueprint. Victor Frankenstein longs to acquire knowledge and breathe life into lifeless matter, then recoils from what he has made, shutting down the dialogue with his creation before it can begin.
We now inhabit their shared aftermath: ingenious in creation yet anxious in reflection.
Nina Beguš’s work emphasises that to understand artificial intelligence more fully, we must first understand the stories that brought it to life; only then can we move beyond familiar paths and articulate its novelty.
XIV. Ethical and Existential Considerations
Philosophers can provide guidance for navigating AI’s ethical and existential dimensions. Just because AI can be used does not imply that it should be. Bernard Stiegler wrote that technology can be viewed as a pharmakon, both remedy and poison, highlighting AI’s dual potential to empower and erode human capacities. Hartmut Rosa emphasises “resonance,” a deep engagement with the world, contrasting it with alienation, which may intensify when AI mediates experiences. Gert Biesta stresses that education fosters responsibility and judgment, not just knowledge transmission, while Giorgio Agamben emphasises potentiality: the freedom to choose not to act, resisting the deterministic logic of predictive systems.
XV. Conclusion
One of AI’s greatest provocations is a philosophical one. From cognition and perception to authorship and identity, AI prompts us to reexamine fundamental human concepts. A core task of education, then, is to support students in learning how to be and how to act within an increasingly synthetic and cognitively networked world.
The Memory Police are not coming for our rose bushes. The forgetting, if it comes, will be gentler than that: a gradual outsourcing of judgment, a slow retreat from the difficulty of genuine understanding, a habit of reaching for the answer before we have felt the weight of the question. The islanders did not choose to forget. They simply stopped rehearsing what remembering required.

