I first heard about ChatGPT on December 5th, 2022, from a friend. I visited OpenAI’s website, messed around with the technology for about 10 minutes, and quickly thought, “This will probably be the most disruptive technology since the printing press.” Just one week later, I received my first student assignment (an entire written midterm) that had been wholly plagiarized using ChatGPT. I was not surprised. As soon as I understood the technology, I knew the temptation would be overwhelming for some students.
I’ve heard and read a lot of reasons given for why using a large language model to generate one’s homework is wrong, bad, or foolish.
It’s dishonest.
It doesn’t write very well (i.e. won’t get you a good grade).
It generates a lot of falsehoods (i.e. won’t get you a good grade).
It’s a shortcut that will result in a lack of learning and overreliance on the technology.
You’re giving away your data.
Similarly, I’ve heard and read plenty of reasons for why these large language models are bad in general.
They’re an existential threat to humanity.
The creation of the models requires the exploitation of content moderators (whose lives are permanently scarred as a result).
The impact on electricity use and water consumption is concerning.
It will cause (and is already causing) a massive uptick in the spread of falsehoods and broad social confusion.
These all seem to me like good reasons (perhaps of varying strength) for thinking AI use is bad.
Take a closer look and you’ll notice every reason given falls under one of two types.
AI use transgresses the moral law (e.g. by being dishonest or exploiting others).
AI use results in a bad consequence (e.g. by getting you a bad grade or by degrading our environment).
These are both good kinds of reasons to avoid something. If something breaks an important moral rule or results in a bad consequence (for you or for everyone), you probably shouldn’t do it. But, importantly, there can be deeper reasons for not doing something. For example, if I tell you, “don’t do that, because it’s wrong,” that holds weight, but probably not as much weight as if I say, “don’t do that if you want to be happy in life.”
I propose that using AI to write your papers is bad because it will make you unhappy.
Now, if I’m taking seriously the exercise of writing this to students, I need to consider an objection right away, before I even explain what I’m saying. “What are you talking about, G-C?”, a student is likely to respond. “I’m not going to be very happy if I have to slave away at this paper you’ve assigned me, or if I get a bad grade, or…”, etc.
The problem with this understanding of happiness, as anyone can recognize with a few moments’ reflection, is that it takes far too immediate a view of it, one informed by the pleasures and pains coming up within the next week rather than by the state of one’s character over a whole life. As any ancient person would’ve been able to articulate (but as Aristotle certainly did articulate), a happy life by definition has to possess a certain completeness, not subject to the vicissitudes of circumstance. This is why both Plato and Aristotle (following Socrates) understand happiness to be the result of a good education and a good character. It’s not a fleeting feeling, but the stable, blissful state of character one possesses through discipline and a commitment to goodness.
To see this, a student just needs to consider what they want their life to look like at 60, or at 90 (rather than next week, when the paper they’re working on right now is due). Invariably, students who are asked this question imagine the kind of completeness I’m talking about: a large family and a connected community of loving relationships, financial security, a life rooted in a place they love, and meaningful contributions made to the world.
The question is: what kind of person must one become to have this kind of life? Any idiot can see that cultivating a life of vice is a bad bet for ending up with a happy life. But a student with some insight will notice that even if a vicious character (the Tony Montanas and Walter Whites of the world) actually ends up with the family, community, security, and home they want, they won’t be able to find any meaning in it. This is because of what Eleonore Stump refers to as “psychic fragmentation” (Wandering in Darkness), and what the Existentialist philosophers have often called “alienation.” People who act unjustly to get a pleasurable outcome may get exactly what they were aiming for—a big house, lots of money, and other material goods—but not what they truly desire, because no one can find happiness that way.
In Plato’s dialogue Gorgias, Socrates makes exactly these points to his interlocutor Polus. Socrates defends the surprising claim that it’s better to suffer injustice than to do it. What Socrates has in mind is that the state of the soul that results from perpetrating injustice (especially if one goes unpunished for it) is the worst thing that can happen to a person. Importantly, this whole conversation occurs in the context of Socrates’s argument that oratory is not a craft but a knack. The difference is that a craft is a systematic body of knowledge about a particular domain: the craftsman understands the natures and causes of things relevant to that domain, and how to reliably produce good outcomes for the things on which he labors. Socrates’s examples of crafts are things like medicine, shipbuilding, or horse training. Each of these requires systematized understanding—wisdom—about how to produce reliably good outcomes for the objects in its domain. A knack, by contrast, is unsystematic, holds no understanding, and doesn’t reliably produce any good outcomes, but merely imitates a craft by stimulating pleasure in its subjects. Socrates’s example: medicine is a craft, while pastry-baking is a knack that imitates it. Likewise, justice is a craft, while rhetoric is a knack that imitates it. Socrates calls these imitative knacks forms of “flattery,” since they give the appearance of wisdom and care by merely producing temporarily pleasing outcomes. The main worry here is that something like rhetoric (or “oratory”) will be used for injustice, since the person who practices it has no understanding of how to use it for good. If they use it at all, they’ll likely use it to perpetrate injustice, the worst thing they could possibly do.
This points to the heart of why generative AI hurts in the long run: it’s a way of producing writing without understanding. Writing is a craft. When we “write” a paper with AI, we do so without any understanding of the natures of words, ourselves, or other people, or of how to connect these things reliably to produce good outcomes. Aping writing with generative AI is an automated knack. It’s flattery, not wisdom.
I think we can say more to connect this point back to a happy life, though. When shown a video of an AI-generated “zombie-like” humanoid animation, promoting the prospect of using generative AI to “draw like humans do,” Hayao Miyazaki had the harshest critique imaginable:
“Whoever creates this stuff has no idea what pain is…I am utterly disgusted…I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself.”
And later: “I feel like we are nearing to the end of times. We humans are losing faith in ourselves.”
For Miyazaki, one of the truly great artists of the last century, this goes beyond mere bad craftsmanship. It is bad craftsmanship, but as a great artist knows, the significance of that runs deep, connecting to our actual lives and our humanity. When we sell out our capacity to make, we’ve sold out our actual humanity, our birthright. We can only do this if we’ve lost a sense of trust in ourselves, or if, like the unjust actor, what we see fit to do is not what we truly desire.
This raises an important question. Twice now I’ve drawn the distinction between what a person “sees fit to do” and what they “truly desire,” a distinction Socrates himself raises in the Gorgias. What someone sees fit to do is the immediate action they see as worthy of pursuit. What someone desires is what they actually want, whether or not they can articulate it. Sometimes these come apart. That is, sometimes what someone sees fit to do is not what they actually want. See this video for a prime example (thanks to my friend Daniel for sharing). I’m claiming that when we use AI to avoid writing, we’re doing what we see fit to do, but not what we actually desire. The question here is this: what deep desire goes unmet when we do this?
The general answer, which has already been raised, is our desire for happiness, but there’s a specific contribution that writing makes to happiness, one Miyazaki alludes to when he laments that “We humans are losing faith in ourselves.” Writing is an opportunity to gain self-knowledge and self-trust. Of course this doesn’t stand out to the high school student who has been assigned an essay on the Hundred Years’ War or an interpretation of T.S. Eliot’s Four Quartets, but every such prompt is an opportunity to dig down deep, to ask oneself, “Self, what do you think about this difficult question?” and to wrestle with the question until an answer is found and some self-knowledge is gained.
For adolescents especially, this is of vital importance. The essential task of the adolescent qua adolescent is coming to an understanding of the self, to an answer to the question “Who am I?” that is distinct from one’s parents and other elders in the community, even if a good and healthy answer appeals to traditions in its formation. Every time someone in the throes of that difficult journey abdicates this duty, they risk never actually arriving at an answer.
We reside in an age that wants adolescence to start earlier and end never. All the tools of distraction we’re continually surrounded by try to grab our attention before we ever even get busy with the job of understanding and coming into contact with ourselves. This is the threat of humanity collectively “losing faith” in ourselves. If we don’t even see fit to know ourselves, how can we place any kind of trust in ourselves? The end result of this direction of living is a culture made up of people who have no awareness of who they really are, no ability to sit and be in touch with their own thoughts, and thus no capacity for enjoying the good life at 60 or at any age.
This seems to me to be a wholly distinct kind of reason for avoiding AI like the plague, especially as a young person who should be about the hard business of coming to a true self-understanding and a healthy self-trust. It’s not about external consequences or rules (although those make for fine reasons too). It’s ultimately about happiness.
I am a full-time high school educator and parent. In order to do the writing I do here, I work on this publication between 4:00 and 5:00 a.m. most days. I write because I love to, and I won’t put a paywall up on my work. If you appreciate what you’ve read, it would encourage me if you left a note or shared it with a friend. If you’d like to show appreciation with monetary support, please consider giving to my church’s second building fund by clicking below, selecting “Give to Second Campus,” and entering an amount.
Works Cited
Stump, Eleonore. Wandering in Darkness: Narrative and the Problem of Suffering. Oxford University Press, 2010.