This fall marks the first time in nearly 20 years that I’m not returning to the classroom. Throughout my career, I taught writing, literature, and language, primarily to university students. My decision to leave was largely influenced by the rise of large language models (LLMs) like ChatGPT.
As historian Lynn Hunt has pointed out, writing is “not the transcription of thoughts already consciously present in [the writer’s] mind.” Instead, writing is a process deeply intertwined with thinking. During my graduate studies, I spent months trying to piece together my dissertation, ultimately realizing that writing was the key to solving the puzzle. Writing is hard work; it can be intimidating. Unfortunately, many of my students were no longer willing to endure that discomfort, drawn instead to the ease of AI.
In my last position, I taught academic writing to doctoral students at a technical college. Many of my graduate students, particularly those studying computer science, were well-versed in the mechanics of generative AI. They recognized its limitations—acknowledging that LLMs can hallucinate, create false citations, and are not reliable for original research. Despite this understanding, many students still relied heavily on generative AI, with some openly admitting to using ChatGPT to draft their articles after outlining their research.
As an experienced educator, I employed best practices in teaching. I structured assignments, explored ways to integrate AI into my lessons, and designed activities to highlight its limitations. I reminded students that ChatGPT can misinterpret text, produce biased information, and often doesn’t generate high-quality writing. Still, they continued to use it.
In one class activity, students wrote a paragraph, revised it with ChatGPT, and compared the output to their original text. However, most students lacked the skills to analyze the nuances of meaning or evaluate style effectively. One PhD student even remarked, “It makes my writing look fancy,” when I pointed out weaknesses in the AI-generated text.
Students also relied heavily on paraphrasing tools like Quillbot. Effective paraphrasing requires a deep understanding of the material, and recent high-profile cases of “duplicative language” in academia show just how difficult it is to do well. It’s no surprise that many students are tempted by AI-powered paraphrasing tools, but these often produce inconsistent writing styles, fail to prevent plagiarism, and let students bypass genuine comprehension. Such tools can only be beneficial to students who already possess a solid grasp of writing.
Outsourcing their writing to AI deprives students of the chance to engage deeply with their research. Ted Chiang succinctly captures this in his article on art and generative AI: “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.” The myriad decisions writers make—regarding syntax, vocabulary, and style—are as crucial as the underlying research itself.
Generative AI can serve as a democratizing tool, especially for non-native English speakers, but it can also alter vocabulary and meaning even when asked only to correct grammar. My students struggled to recognize these subtle shifts, and I found it challenging to convey the importance of stylistic consistency and of developing their own voices as research writers.
The issue wasn’t identifying AI-generated text. Each semester, I asked students to write in class, allowing me to establish a baseline for comparison. I could easily distinguish between their writing and that generated by AI. While I was familiar with AI detectors that claim to identify AI-generated content, these tools are often unreliable. AI-assisted writing can be easy to spot but difficult to prove.
Consequently, I spent countless hours grading work I suspected was AI-generated, pointing out flaws in arguments and noting stylistic quirks typical of ChatGPT. In effect, I was giving more feedback to the AI than to my students.
So I decided to quit.
The best educators will need to adapt to the presence of AI, and in many ways, this adaptation can be positive. Teachers should shift away from rote assignments and simple summaries, finding ways to foster critical thinking and help students understand that writing is a tool for generating ideas and clarifying methodologies.
However, these lessons require students to be willing to embrace the discomfort of uncertainty. They need to trust in their cognitive abilities as they write and revise. Unfortunately, with few exceptions, my students were not prepared to venture into that discomfort or stay there long enough to discover the transformative power of writing.