Humane Learning
in a
Machine Age
A Professor’s Resolutions
Dr. Ben Reinhard
“I wish it need not have happened in my time,” said Frodo.
“So do I,” said Gandalf, “and so do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the time that is given us.”
As we enter the year of Our Lord 2025, those educators still interested in the pursuit of wisdom may well sympathize with Tolkien’s bewildered hobbit: we may have at last reached the end of our world, and what comes next is anybody’s guess.
A crisis in the humanities is of course nothing new: liberal education has long been on unstable ground in western universities, the soil eroding and the water table rising. But now, with the advent of generative artificial intelligence, we seem to have come to the final deluge, which threatens to sweep away the last embankments and to render all educational policies and procedures obsolete. Certainly traditional studying could soon be a thing of the past: lectures can be recorded, broken down, and reassembled for a study guide. More seriously still, paper writing — long and justly the touchstone achievement of liberal arts study — has become almost meaningless. Any student can now ask one program to write his paper for him, another to remove the telltale signs of Machine composition, and a third to recast the finished product “in his own voice.” When counterfeit currency becomes widespread, the value of the real thing crashes. We are left with an academy in an unprecedented depression.
Mainstream academic responses to these developments have been quick, varied, and utterly ineffective. Those familiar with the nationwide conversation will know that the more unscrupulous and unserious members of my profession welcome the revolution with enthusiasm — Unleash your students’ creativity! Create equitable access! Eliminate unnecessary drudge work; focus on higher skills! That the drudge work being eliminated is learning itself seems to trouble them not at all. Others, with better sense but slower reflexes, scramble to counteract the rise of artificial learning while maintaining the structures of the modern academy: their efforts include banning phones, watches, and earbuds from testing environments; running every piece of student writing through artificially intelligent plagiarism detectors; and (how I wish I were joking) using computers to track student eye movements during examinations. And so, in an attempt to combat AI learning, they subject all and sundry to an AI surveillance state.
For all this, the latter group deserves some sympathy: at least their ends are defensible, even if their means are ineffective and border on totalitarian. But they fail to get to the root of the problem. In the first place, they expend great effort and sacrifice much (including anything like a trusting relationship of teacher and student) to defend the status quo; they do not question whether the status quo is worth defending. In the second, they focus almost entirely on what teachers can impose upon the students: they do not ask what teachers can and should take on themselves.
There is perhaps another way. Properly understood, every crisis is a call to conversion: and in any case, the New Year is a time for reflection and resolution. With this in mind, I offer three educator-focused resolutions in the face of the AI revolution.
1. All of my courses will feature 100% human-generated content. I will make no use whatsoever of generative AI in the design, delivery, or assessment of my courses. All course materials — every lecture, exam, and assignment, right down to the smallest quiz — will be my own creation, adapted to the needs of the community of students under my instruction.
None of this should appear radical: that humanities courses will be AI-free should go without saying. Unfortunately, it needs to be said. I know too many educators who have made their peace with the Machine, using it (for instance) to craft their lesson plans: you’d be amazed, I’ve been reassured, just how good the plans are! And I’m sure generative AI can be expedient. I do not doubt for a second that a decent program could make a passable lesson plan; with enough off-the-cuff improvisation and charisma, the students might not even be aware of the bait-and-switch. I would certainly save countless hours by outsourcing my lessons to the robots.
There is only one problem: to do so would make a mockery of the office of the educator and betray the trust of my students and their parents. If the professor doesn’t believe that Vergil or Milton or Wordsworth is worthy of his time, attention, and reflection, why should the student? And time is exactly what authentic teaching absolutely requires. The teacher, as St. Thomas tells us, “leads the pupil to knowledge . . . in the same way that one directs himself through the process of discovering.” We teach our students by recreating our own learning process. Because of this, lesson planning is the heart of teaching: what happens later, in the classroom, is very largely performance. A teacher who uses AI lesson plans is merely a front-man for the Machine.
There is a practical corollary to the principled stand as well: the AI-counterfeited skill will quickly atrophy; the teacher who relies on fake lesson plans will soon lose the ability to create real ones. And this brings me to my second resolution:
2. I will ensure that my own learning is authentic, integrated, and human-scaled. This means, among very many other things, a return to physical media. This will probably be the hardest resolution to keep: while I have never been tempted by generative AI, I have (like most of my generation) become far too reliant on the cheap, disintegrated, and practically infinite information available on the internet.
This must change. In my two decades of academic study, I have consumed far more information than previous generations of professors encountered in their entire careers, but I retain far less of it. Because of this, what I have fleetingly “learned” cannot shape my thinking in moments of reflection, and I cannot rely on it in a pinch (say, when a student asks a question in class). And this says nothing of the Internet’s impact on my attention span. And so: if I really need to know (for instance) when Walter Raleigh was born, I can reach for a book.
This commitment will place compounding demands on my time. In the first and most obvious place, the time required for information retrieval will increase exponentially; as a consequence, more time and attention will have to be given to careful course planning. I will need to have a clearer sense of where my course is going if I have any hope of mastering the necessary information in time; failing that, I will have to admit ignorance before my students. Finally, I will have to expend more time and energy on memorization, as information forgotten will not be so easy to retrieve.
I am not at all sure that this is a bad thing. True learning, like a proper pot of chili, simply cannot be rushed: the ingredients need time to simmer together. Memory, reflection, and rumination are prerequisites for learning, not obstacles to it. If nothing else, the repeated walks to the university library should be good for my physical and mental health; I may even accidentally bump into some students when I do so. And thus to the third point:
3. Above all else, I will meet the AI crisis with humane response in my courses. The evils of technocracy cannot be solved by technocratic means. Not that this will stop the techies from trying. There is no shortage of companies attempting to profit from the academic integrity crisis. Indeed, in addition to old standbys such as Turnitin and Respondus, the AI companies themselves have gotten into the game. “Poacher-turned-gamekeeper” is insufficient to the blackguardly audacity on display here. We are forced to grope for absurd analogies: say, a scientist responsible for the creation of a global pandemic later attempting to sell the world on a dubiously effective cure.
A pox on all of it. I will not treat my students as suspects to be interrogated or prisoners to be surveilled. Instead, I will rely on activities — class discussion, in-class writing and revision, and so on — less susceptible to artificial plagiarism and more conducive to the growth of true academic fellowship in my courses. The goal is not to create an AI-proof classroom (a likely impossible task), but instead a community in which such plagiarism is less tempting.
We will have to move more slowly on this model: to read fewer works, but read them more deeply; to write less, but invest more in what we create. This seems to me an acceptable tradeoff. As we enter the machine age, the goal of introductory humanities coursework is no longer to teach close reading or the Western literary tradition, but something more fundamental: how to be human.
Will I be able to keep these resolutions — and, more importantly, will any of them work? I’m not very optimistic, but I’m game for the challenge nonetheless. After all, it is not as though the American educational establishment were in prime health before the AI revolution.
Already in the 1950s, Russell Kirk observed that educational industrialization and standardization and egalitarianism had succeeded only in producing “Cow College” and “Behemoth U” and the “multiversity”; subsequent decades of bureaucratic interference, ruthless competition for tuition dollars, and business-driven pursuit of efficiency have created an American academy deformed beyond even Kirk’s wildest nightmares.
This is the intellectual establishment that has produced — and subsequently been consumed by — artificial intelligence. If it fails, it fails. By contrast, the real ends of education — the cultivation of intellect, the inculcation of right sentiment, and the formation of virtue — remain untouched. Considered in this way, the AI revolution can be viewed not as an all-destroying deluge but as a purifying fire: an invitation (or perhaps a demand) to return to truly humane education.
It is an attempt well worth making, and those educators and institutions who truly commit to it may find more than moral victories. To take only one example: my own institution, inspired by its Franciscan charisms, has recently adopted the core values of Encounter, Conversion, and Community. This seems to me to be in exactly the right vein. These are fundamentally human activities, irreplicable by the Machine — and precisely the things that our alienated, anxious, and distracted world needs. We cannot of course be certain of success. It may be that the Machines and their servants will triumph: it may be that we have finally reached the abolition of man foreseen by C. S. Lewis so many years ago. But we can be certain that, if any education survives the dawning robot age, it will be the humane kind.
And at least it will be fun while it lasts. So here’s to 2025 and our best attempts at redeeming the times. After all, as Gandalf observed, nothing else is asked of us.