The Costs of Metacognitive Offloading
For centuries, Western culture trusted in humanism, the belief that reason and education would civilise humanity. The recipe seemed simple: immerse people in great books and culture, and they would emerge enlightened. Yet history proved humanism a fragile tether. The disasters of the 20th century showed that education does not prevent systemic violence or atrocity. The dream that culture alone could tame humanity collapsed.
This crisis leaves us with a pressing question: if old humanist methods are imperfect, what now safeguards us?
The danger cannot be solved by more knowledge. Information, after all, is morally ambivalent. The Nazis’ technical and academic expertise only sharpened their destructive methods.
Philosopher Hannah Arendt (1971), reflecting on the trial of Adolf Eichmann, identified the root of such evil in a “curious, quite authentic inability to think.”
The deeper problem, then, is thoughtlessness: the failure to reflect, to step back and ask, “What am I doing?”
In Dialectic of Enlightenment (1944), Max Horkheimer and Theodor Adorno argued that modern reason had been reduced to what they called “instrumental reason.” By this they meant a form of rationality concerned only with efficiency, calculation, and control: the ability to find the best means to an end.
If humanism erred in believing culture automatically civilises, our error is believing machines can carry our responsibility for metacognition. Metacognition includes monitoring one’s performance, deciding what to do next, and gauging how well one understands a task—skills that sit on top of the learning task itself.
We risk becoming modern Victor Frankensteins: brilliant accumulators of knowledge, capable of creating powerful technologies, but suffering from a critical shutdown of internal dialogue.
Students and citizens alike need to be startled out of passivity, compelled to ask: What assumptions underlie this output? How could I see this differently? Interestingly, some custom GPTs and prompts have been designed to support this very work.
Metacognition is no luxury. To outsource it, whether to clichés, institutions, or algorithms, is to drift back into ordinary thoughtlessness, with all its devastating consequences. Generative AI may offer unprecedented power, but without reflexive thinking, it risks becoming, as Ana Ilievska (2024) argues, another portrait of Dorian Gray in the attic, reflecting back our own worst failings. The lesson of Frankenstein for the age of Generative AI is that the technology is more likely to reveal us than destroy us.


Jumping in because of the topic and my current job (teaching AI and Ethics to HS seniors in a public school Honors English course). I don't have a hard stat on this yet, but generally speaking, each student I teach yearns for concrete wisdom. They get pretty quickly that knowledge is something a robot can replicate, synthesize, and then produce some type of output much faster than their own biological abilities allow. They also see the hardware offerings from big tech and immediately feel an intuitive discomfort at something like Meta's new neural band/smart glasses combo using AI to project, detect, visualize, collect, and retain (for lack of a more precise term) "brainwave data". But they also look at so many adults age 22 and up, the louder ones spread across the public sphere, and are skeptical about humanity and our capacity for civil peace. It's easier as a young person to fragment into like-minded lament, offload psychological and social burdens onto a reduced, flattened enemy...and then, in some type of quickly accepted exhaustion, offload metacognitively. This is year two of teaching this course, and across three sections, the number of students who want minimal to no AI in their lives after high school has just about tripled when I ask that question. In one year. Just adding this for context; I appreciate your work and am adding encouragement to keep writing what you write.
The final line rings so true:
The lesson of Frankenstein for the age of Generative AI is that the technology is more likely to reveal us than destroy us.