
"The Future of Writing": PWR Lecturers Confront ChatGPT and the New Twitter

Throughout this academic year, a huge source of conversation, debate, and possibility among PWR Lecturers has been the question of where we go next as writing teachers. It has sometimes seemed like it’s all anyone can talk about. With the pandemic and Zoom classrooms finally behind us, but with the new challenges of generative and potentially disruptive AI software such as ChatGPT squarely at our doorstep, what does the future look like for writing and rhetoric studies, and especially for our pedagogy?

On May 1, 2023, three PWR Lecturers, Nissa Cannon, Harriett Jernigan, and Chris Kamrath, participated virtually in a hybrid event that sought to engage this question head-on, “The Future of Writing Symposium,” hosted by the University of Southern California’s Writing Program and Annenberg School for Communication and Journalism. Their panel, organized by Dr. Kamrath and titled “Teaching Writing Amidst Technological Upheaval: From Disruptive AI (Chat GPT) to Disintegrating Platforms (Twitter),” opened a vibrant conversation attended by several PWR Lecturers as well as many colleagues from other institutions. Together, participants and attendees explored the challenges as well as the potentialities of teaching writing amid our brave new technologies and media ecologies.

Harriett Jernigan began the conversation with her talk “This hits a little different: Chat GPT and Linguistic Identity,” which focuses on an assignment from the rhetorical analysis unit of her PWR 1 course “The Rhetorics of Ethnic and Racial Identity.” In the assignment, Dr. Jernigan asks students to participate in an “experiment” to try to “develop an understanding of Chat GPT’s intercultural competence” by explicitly engaging with the system on topics of ethnic and cultural identity, and then reflecting on its responses. Students developed prompts such as, “Use the voice of a black woman to explain climate change,” or “Tell me a story about an immigrant.” Not surprisingly, students in Dr. Jernigan’s class found that the large language model still has considerable limitations when it comes to responding appropriately to prompts requiring cultural competence. The assignment encouraged students to be more critical generally about their interactions with the model, and to challenge its often stereotypical or otherwise problematic assumptions about marginalized peoples. It also emerges from Dr. Jernigan’s broader interests in algorithmic justice and ethical AI. With PWR Faculty Director Adam Banks, she is a recent recipient of a Stanford HAI grant to support the development of a BlackRhetoricsGPT LLM for scholarly use (more on this amazing project in the PWR Newsletter’s September issue!).

Chris Kamrath’s presentation “Teaching in Threads, or asking students to write for/on a rapidly decaying platform” addressed the challenge of asking students to rethink the role of Twitter, specifically, in the context of the rapidly shifting landscape for microblogging platforms. In recent years, journalists have taken to publishing opinion content simultaneously in legacy and other conventional publications as well as in threaded Twitter essays, and Dr. Kamrath has long encouraged students to engage these emergent forms. This work has become much more complicated recently, however, with Elon Musk’s takeover of Twitter and the technical and political disruptions it has precipitated for users of the app. In light of these dramatic changes, many users, in particular many scholars, have migrated to other platforms, such as the decentralized servers of Mastodon. In a digital landscape where we cannot take the stability of a given platform for granted, Dr. Kamrath encourages his students to seriously consider choice of platform as an important factor in establishing one’s digital identity.

Finally, in “Asking the Right Questions: ChatGPT in the Research Writing Classroom,” Nissa Cannon presented an assignment she uses in her PWR 2 class to show the possibilities and potential pitfalls of deploying ChatGPT (in its present form) to create a literature review on a research topic. In Dr. Cannon’s activity, students are first prompted to assess the baseline credibility of ChatGPT on a topic of interest. They ask the system to survey the field, and to identify a given number of sources. Then, students are asked to check the reliability of the sources and their provenance: Can they find the article? The author? The book it’s listed in? In many cases, according to Dr. Cannon, this is where the chatbot falls short. In fact, in the example she provided during her presentation, modeling a research survey on sources pertaining to the silent film actress Theda Bara, every single source identified by ChatGPT had some major flaw in its accuracy.

Slide from Dr. Cannon's presentation showing false sources generated by ChatGPT.

While this exercise certainly makes an impression on students regarding the limitations of ChatGPT’s utility as a research tool, Dr. Cannon frames it as a generative learning experience, an opportunity to “learn from ChatGPT’s mistakes” as well as to consider “what is ChatGPT getting right?” Ultimately, the exercise becomes a reassurance for students, as it became for those of us in the audience as well, that even in the context of AI-assisted research, the judgment of human researchers remains essential. At least for now.

 

Lead image: Matrix background shared under a Creative Commons license.


 
