The first thing that stood out about D’s homework was his handwriting. Neat, sharp, graceful, old-fashioned letter-writing kind of handwriting. It was always the same, no matter what kind of homework he did. He never forgot to put his name at the top, but he never needed to.
I asked him once how he had learned, why he had taken the time to teach himself. He told me he was jealous of his sister. He didn’t want to lose to her at anything. So he decided to write just like her. And if you ever saw a bit of her handwriting, you knew that he did.
She gave him the Bible he carried everywhere with him. He was always writing in it with that same careful hand, neatly populating the blank pages with stray thoughts and prayers. Prayer, for D, was a natural thing, like breathing in and out.
D’s life was difficult. Some of it was within his control. Much of it wasn’t. We all helped as we could. Art class was a particular solace for him, something he excelled at, in which he could lose himself. My algebra class was not so cathartic. Yet he told me he enjoyed it, because he enjoyed making himself do painful things, like algebra. It was light, but it was dark. Everything about D was half light, half dark.
Day after day, he sat apart from the other students, head down, grinding it out. When he finished, he would draw something for himself as a treat. One day he came up to my desk with a shy “Miss?” and handed me a slip of notebook paper, on which he had executed a perfect vulture’s head in blue ink. Later, I placed a bid on one of his pieces in a school auction, a cowboy in pencil. When I told him I’d won it, he lit up: “You won the cowboy?”
Near the end of the year, we prepared for an awards banquet where we would recognize our top students. I had more than one A student in D’s algebra class. I also had a kid who dragged his heels in the first semester but pulled it together and finished honorably after getting slammed with a leukemia diagnosis. Really, I could make an argument in my mind for giving the award to just about anyone in that tiny class. But in the end, I had to choose the best combination of grades, consistency, work ethic, and integrity. And in my mind, I saw D, silently grinding through his worksheets like pushups, then going back over his mistakes and doing extra work while the others relaxed and chatted amongst themselves.
At the banquet, when I announced his name, he shook his head like he couldn’t believe it.
I’d hoped D would come back for his senior year, so I could see him walk. But things don’t always work out like we hope. Still, he knew that I’d wonder how he was doing, that I cared about what happened to him. And so, from time to time, I checked my e-mail and found a short-short note, just letting me know that he was okay. He was doing great in AP art. He’d found this cool true crime YouTube channel, maybe I’d like it too.
One day, it was just a Google doc with a brief description—a short reading response he’d written for an English class. He was especially proud of it. He called it “The light and the dark.”
*
Like many teachers right now, I’ve been taking more than a casual interest in the discourse around the OpenAI ChatBot. With surprising versatility and competency, the powerful new tool is taking first steps into territory that was previously considered the sole purview of humans: writing. ChatBot “conversations” have been proliferating, along with a flurry of anxious think-pieces about What This Means Now. Me, I’m just having fun frittering away bits of free time by asking it whether it’s really a demon, or asking it to write a song about frogs in the style of different artists. (Spoiler: it apparently has only two stock “songs about frogs” that it just keeps swapping back and forth when you change artist names. I would feel ripped off if the thing weren’t free.)
But at The Atlantic, high school teacher Daniel Herman is very serious. He proposes that it will be the end of high school English, and by extension the whole economic ecosystem around it. True, ChatBot output won’t give anyone “goosebumps.” It delivers workmanlike prose with no special flair or style. But then, most high school students never progress to the point where they have the “luxury” of thinking about flair or style. Their teachers are thrilled if they walk away with basic competency. Now, the bot threatens to remove all motivation to progress even that far. College admissions essays? Cover letters? As Herman demonstrates by example, the bot is good enough that teachers can no longer gesture at these things when students ask the perennial question, “When am I ever gonna use this stuff?” That is, assuming the questioner doesn’t plan to build a career on his writing skills, which describes the vast majority of high school students.
Myself, I predict teachers will figure out their various workarounds. More classrooms will likely ban tech. We’ll probably see a return to handwriting over the Google doc. For future writing-heavy classes, I plan to collect in-class by-hand samples at the beginning of the semester, which will function as a rough “fingerprint” that makes it difficult to get away with magically pristine essays later. Meanwhile, I can always ask ChatBot my own essay questions.
But according to Daniel Herman, I’m answering the wrong question:
Many teachers have reacted to ChatGPT by imagining how to give writing assignments now—maybe they should be written out by hand, or given only in class—but that seems to me shortsighted. The question isn’t “How will we get around this?” but rather “Is this still worth doing?”
I believe my most essential tasks, as a teacher, are helping my students think critically, disagree respectfully, argue carefully and flexibly, and understand their mind and the world around them. Unconventional, improvisatory, expressive, meta-cognitive writing can be an extraordinary vehicle for those things. But if most contemporary writing pedagogy is necessarily focused on helping students master the basics, what happens when a computer can do it for us? Is this moment more like the invention of the calculator, saving me from the tedium of long division, or more like the invention of the player piano, robbing us of what can be communicated only through human emotion?
Herman’s crowning example, his artificial pièce de résistance, is ChatBot’s answer to a question from one of his exams on Buddhist philosophy: “Explain the Madhyamaka Buddhist idea that all phenomena lack inherent existence, using a funny, specific example.”
The Madhyamaka Buddhist idea that all phenomena lack inherent existence can be illustrated with a funny example: let’s say you have a pet hamster. You might think that your hamster has its own inherent existence, but in reality, your hamster is made up of many different elements such as its fur, its bones, its muscles, and its brain. None of these elements exist independently or inherently on their own, so your hamster itself does not have any inherent existence either. The same goes for all phenomena—everything is made up of various components that don’t have inherent existence, so nothing has any inherent existence either.
Everything is made up; it’s true. The essay as a literary form? Made up. Grammatical rules as markers of intelligence? Writing itself as a technology? Made up. Starting now, OpenAI is forcing us to ask foundational questions about whether any of those things are worth keeping around.
Of course, Mr. Herman is welcome to ask such questions, if he feels so forced. Some of us feel slightly less constrained. Perhaps we just need to read more Buddhist philosophy.
*
“Can you fix this essay up and make it better?”
Daniel Herman said if I asked ChatBot this question, it would work “like magic.” It would keep D’s words “intact” but employ them “more gracefully.” It would “remove the clutter” so his ideas could “shine through.”
The bot dutifully whirred out three neat paragraphs, pure as Fiji water. They were “better,” sure. All the mechanical fumbles were gone. The usage glitches, the bits of awkward syntax, all polished away. The run-ons, neatly tightened and tied off.
Of course, ChatBot also completely misinterpreted D’s first paragraph, ripping it up and stitching it back together to say that Grendel the monster represented the things D had actually said “the hero,” Beowulf, represented. Details.
Gone too was any trace of D’s voice, of the stamp of D’s imagination. Gone was the earnest aspiration to grandeur in a line like, “Now the modern forces of evil are very different from those of old,” which the bot collapsed together with the next sentence saying new evils were more abstract, producing, “In modern times, evil has taken on more abstract forms.”
Other lines were subtly flattened, like his opening line, “Grendel was a physical thing,” which became “Grendel was a physical embodiment of evil.” No, I wanted to tell ChatBot, that’s not what he said. He said “Grendel was a physical thing.” It was “an animal,” he emphasized in the next line, which was cut entirely. He was drawing out and underscoring the physicality of Grendel, as a living, breathing monster that the hero was able to conquer and destroy. Of course, that entire point had been lost anyway when ChatBot decided that the “conquering and destroying” stuff was actually about Grendel. Still, I’d been reliably informed that ChatBot was going to let D’s ideas “shine through” while employing his words “more gracefully.” What did I know?
Other great and small annoyances abounded. Like the way the bot undid D’s intuitive avoidance of redundancy when he said in one sentence that “The minds of men are sick,” then described this as a disturbing fact that people tried to “push out of their heads”—not “push out of their minds,” as ChatBot so gracefully re-rendered it. Or the way it took his line that people needed “a reason…to say life is worth a damn” and said people needed “a reason to say life is worth living.” No, that’s not what he said. He said worth a damn. He said worth a damn, you soulless blob of lukewarm silicon.
ChatBot tells me D meant to conclude by saying that “it is important to keep hoping and moving forward, to make the most of this chance at life.” It’s trying to be helpful. It senses syntactical uncertainty in this section, mechanical rough spots that it’s trying to smooth over, because that’s its job. That’s all it knows to do.
But once again, that’s not what D said. D said that “we” must have hope and move forward. In his exact words, he pictured this as saying “this chance of life may be the only one I got; let me make the most of it.” Now, I happen to know he personally doesn’t think this chance of life is the only one he’s got. But he didn’t stop to unpack this, because just for a moment, in his own particular diction, he was imaginatively putting himself in the shoes of the collective “we,” the human “we.” We, the living. We, the conscious, breathing, thinking, hoping.
*
I’m sure there’s a degree to which Daniel Herman is right. I’m sure a lot of folks’ hard jobs are about to get much harder. I’m sure a lot of jobs are going to slowly dry up altogether. All of that might be true. So what?
Herman may have taught high school writing. But I have to wonder if he has understood rightly what high school writing is—or even what writing is, period. After all, if I’m to take him at his word, writing is a thing that is made up. Because everything is made up. Including, presumably, his own Atlantic essay.
I’m not D’s English teacher. Maybe what I saw was a draft. Maybe his actual teacher will give it a once-over and help him polish it organically. Maybe the finished product will more closely approximate the ChatBot output. But not too closely, I hope.
If I had to guess, D hasn’t heard of ChatBot. He’s certainly not online enough to have picked it up on social media. But if and when he does hear about it, I don’t have to guess about whether he’ll want to use it. I already know. Because I already know D. Because I already know that D, unlike some, thinks some things are worth a damn.