Abstract
The writing classroom has been dramatically affected by the introduction of large language models (LLMs) such as ChatGPT. Facing immense pressure to adapt to a challenging and changing educational landscape, some writing instructors have chosen to adopt LLMs as a teaching tool. However, perceiving this technology as a potential "partner" in the writing process involves various problematic assumptions about chatbot capabilities. For one, it requires a degree of confidence in LLMs, which are incapable of understanding the information they process and which produce output conforming to the Frankfurtian definition of "bullshit." While some scholars have suggested that chatbot interactions may still be educational in a fictional capacity, this argument presumes that students understand the limits of AI veracity, and it is inconsistent with the way LLMs are actively marketed as trustworthy personas despite their tendency to "hallucinate." Given the current abundance of misleading AI hype and marketing, this essay argues that using LLMs in the writing classroom may be counterproductive to students' development as writers and thinkers. As such, writing educators are invited to challenge the popular narrative of AI techno-determinism and embrace their power to steer this technology's course in higher education.