Catching the Technological Lightning
A. P. Lamberti
When I presented at MnWE on my use of ChatGPT in a service-learning writing course, an audience member expressed initial surprise that I would use AI at all. Although they went on to explain that my reasoning ultimately became clear to the audience, the question remained with me. Why are writing instructors using AI? Perhaps more helpfully, how might we consider artificial intelligence in teaching and learning beyond its practical applications?
My teaching career is now long enough to be littered with technological innovations that, upon their arrival, were widely heralded as “disruptive,” “innovative,” etc. Some have persisted (LMS platforms) while others appear to have faded (where did all the MOOCs go?). As a teacher of professional communication, I have needed to act as an early adopter to keep up with industry trends and standards. At the same time, I need to combat misconceptions that workplace writing is largely an instrumental, skills-based endeavor—the implication being that its disciplinary content knowledge is theoretically thin at best. The result is an instructional challenge: incorporating tech tools into a curriculum in a manner that is obviously meaningful to both colleagues and students.
Defining what AI means, however, feels like a different question than it did when I pondered it with other technologies. Our authority to shape its future direction seems to be a fast-disappearing opportunity. Granted, there is a great deal of hand-wringing about future apocalyptic AI scenarios, but it nonetheless is true that AI is distinct from other tech. Its particularly complex use of algorithms, capacity to make decisions autonomously, lack of transparency regarding its internal processes, and rapidity of evolution have already distinguished its latest forms from even its own earlier versions.
These differences play out in their curricular implications for the writing classroom. For one, ethical tool use in writing has long been addressed by instructors and their curricula. We often work with students to develop information literacy, especially regarding digital content. Suspect information is cast in the writing classroom as unusable, if not disturbing. Certainly AI-generated information can be suspect as well. The challenges to understanding AI’s internal decision-making processes, though, suggest that we now might work with writing students to take more of an advocacy stance, demanding clear explanations and accountability from the technology and the information it produces.
AI’s difference also can be felt in its implications for privacy and security. Again, privacy issues in writing curricula are not new, often discussed within the context of publishing work. Students in my editing class study the history of U.S. copyright law and its overlap with public domain and definitions of authorship. Still, the large data models that drive so many AI platforms, with their wholesale web-scraping methods, raise the stakes for student writers. We might consider asking our students to attempt tracking the origins of certain AI-generated content and see if they can identify specific parties who are affected by those models. What does it mean for a minor child’s security, for instance, when an AI image generator deepfakes their photo into a different visual?
My writing pedagogy has needed to respond to AI even as I attempt to recover from its future shock. Meanwhile, the tech’s differences from other tools have complicated my understanding of what it means to teach mediated writing, both functionally and philosophically. At some point, I anticipate that AI’s evolution will reframe the question posed during my MnWE presentation, to instead ask whether its adoption is an option at all. During this time of nebulous until-then, my hope is to take advantage of my mere human agency and guide AI’s role in the classroom.