
Teaching with AI: Resources, Fears, Reflections, Invitation

[Image: Westworld robot preparing to write RBA?]

When OpenAI released ChatGPT in November of 2022, I wondered how and, perhaps more importantly, when this development would show up in my classroom. Now, I have an answer. 

A little over two years after that release, I’m seeing a major shift in student uptake: drafts of PWR’s major required assignments are being composed with significant help from LLMs. My personal observations are anecdotally supported by Stanford students who have noticed widespread use of AI for writing, reading, and homework, as well as by colleagues who’ve pulled me aside to tell me their stories.

I was (I admit) pretty terrified that I would not be able to identify AI-generated work. (So much for twenty-plus years of writing instruction, I thought!) And, as I hope everyone reading this article knows, I have no positive proof that the papers I’ve identified were composed entirely by AI. While Stanford does offer an AI detector called iThenticate, plenty of research suggests such detectors should be used with caution. Moreover, even if detectors were 100% reliable, challenging students directly would go against nearly every instinct I have as a teacher. That said, IYKYK. And I'd be happy to discuss the ways in which I’ve found AI-generated writing differs from student work. In brief, these papers read more like extended summary than argument.

Over Presidents’ Day, I read a book called Teaching with AI by Jose A. Bowen and C. Edward Watson that has helped me think through how to leverage and handle AI in the writing classroom. (I never would have ordered this book had Drs. Norah Fahim and Jennifer Johnson not encouraged me to remotely attend the Language, Literature and Culture Symposium at Berkeley. Thank you, lovely colleagues!)

The authors of Teaching with AI take a largely positive view of what AI can and will do for students in some contexts, and they argue that students face a completely different set of realities than instructors do, including (among other things) the fact that nearly every workplace and field will require them to engage with AI. The message many of them are no doubt getting is: you will never have to write an RBA again, but you will have to use AI in nearly every job going forward (as an example of this rhetoric, here’s a Forbes piece on how AI may transform the role of software engineer). I understand that there are many reasons why students are (and possibly should be) tempted to use ChatGPT and other LLMs to help them with their assignments.

That said, Bowen and Watson do make suggestions about how to mitigate unauthorized ChatGPT use. Many of these you are already implementing, but I share them here:

  • discussing academic integrity
  • giving a quiz (or similar) on your syllabus AI policy
  • giving students a regret clause: 24 hours to withdraw work they completed with "help" from ChatGPT
  • including reminders about academic integrity on every assignment
  • normalizing help
  • using detection tools in class

From a pedagogical perspective, the authors suggest:

  • regular low-stakes assignments
  • in-class activities that promote engagement with reading and important concepts
  • reasonable workloads
  • flexibility with due dates
  • modeling academic integrity (or how to use AI in permissible ways)
  • teaching AI literacy*
  • and "better assignments and assessments" — a larger category that certainly bears further discussion.

*Defining AI literacy is challenging. Some suggest it means possessing the skills and competencies to use AI effectively, while others suggest it’s an understanding of how AI works. I wonder if a useful analogy might be Aristotle’s rhetorica utens (applied rhetoric) vs. rhetorica docens (the theory of how rhetoric works). In this framing, AI literacy would encompass both domains of knowledge.

Again — I think the majority of us are implementing some of these strategies already. 

A recent talk I listened to by university assessment expert Philip Dawson reminded me, too, that our small class sizes, where students have a degree of choice over what they research, work in our favor and have likely blunted even wider adoption. Our interpersonal relationships with students (including our conferencing model), as Adam Banks suggested in our February 28th program meeting, are and will continue to be a key way we encourage and inspire students to grapple with meaning and language in powerful and important ways, even in this new era. At our first TTP meeting of the quarter, Erik Ellis and Lynn Sokei similarly suggested that encouraging students to connect personally and creatively with their research, which has its own intrinsic rewards, has the added benefit of discouraging AI use. Mutallip Anwar, Meg Formato, and Shay Brawn shared this sentiment in the Curriculum Committee meeting when we discussed their experience teaching literacy narratives.

However, given the tremendous temptation and availability of AI, I’m still thinking about ways to both discourage and address usage in the short and longer term.  

One thing I am considering for next quarter is a strategy I noticed during my Hume tutoring hours. Students taking a bioethics survey taught by Professor Magnus (HUMBIO 174) are required to submit, on the final page of all written work, an attestation about how they used AI. I’ve adapted the attestation a bit for my use in PWR. Note that, according to the graduate student enrolled in Professor Magnus’s course, students’ work was not graded until everyone had submitted the AI attestation for the assignment.

Hayden Kantor shared a similar strategy in an email to me on February 25th. He asks for a generative AI declaration in all reflection memos and has shared this language:

Disclose the use of any generative AI tools at any point in the TiC process, including in scaffolding assignments. In accordance with the syllabus, you should describe in a substantive fashion how you have engaged with the tool (what prompts, purposes, and exchanges) and what you learned from using it. If your reflection is not sufficiently detailed, I may return it to you.

Hayden indicated he’s started to accept some AI use because it is so common, but proposed his own “red line”: submitting text composed by generative AI.

This leads to my next point: what if you are doing many of these things "right" and are still getting content you believe has been generated by AI? In his email, Hayden describes his own thought process when reading text he suspects might cross that red line: his main message is to make comments that show the AI-produced text is not necessarily benefiting the argument or essay.

My approach is somewhat similar: try to show students the limits of what AI can do. Thus far, most of the AI-generated writing I'm seeing does not fulfill the expectations of the assignment, so for now I'm reading these drafts the way I would any other essay draft, with an eye toward improving them.

As a follow-on, I think that because AI writing "sounds good," students themselves don't necessarily realize what AI is not doing well (or at all). There is thus a mismatch between what students may perceive as "good writing" and what I perceive as "good writing," and this is a productive space to explore.

Longer Term Ideas for Action 

My response to these students does not address questions of integrity (academic or otherwise), which is an issue. Bowen and Watson suggest that the best course of action is to garner buy-in from the beginning of the quarter, with a robust discussion of the values that drive shared behavioral expectations and classroom policy. Additionally, some research suggests that we are entering a post-plagiarism future. I love how Sarah Eaton puts it: in a world where AI is doing more, humans may surrender some control over their product, but we are still ultimately responsible for the work we produce. I like the idea of discussing ethics but am still thinking about precisely when to do so and how to introduce the concept of a "red line" I personally cannot enforce.

Second, I plan to do a bit more digging into AI policies (here is a resource by Lance Eaton that tracks many, many AI policies), the best of which offer a robust rationale for what students are and are not allowed to do and lean heavily into WHY the policy is the way it is.

Third, I'm striving to find ways of incorporating AI activities that still encourage students to struggle with language and complexity. I thought this argument was interesting, but I'm not sure yet about classroom applications. If you have ideas for on-the-ground activities, let me know! (In fact, if you have any activities involving strategic use of AI that you like, please share them; I want to hear more.) I’ve done some exploring, and while many colleges promise a lot, there are not yet many robust college-level teaching resources that apply directly to our pedagogy.

Moving Forward Goals and Concerns

Based on my crash course of reading, I suspect many of us will see more, not less, AI-generated content. This "reality" brings up many points of discussion. I want to acknowledge that there are many, many ethical concerns I'm not even broaching here, including labor, bias, stolen content, and environmental impact. Meg Formato, in her email of March 3rd, introduced critiques of AI, and without engaging with most or even all of what she said there, I’d add that many of the b-llshit AI “think pieces” focus on the notion of productivity. What students are producing, for whom, and why are all good questions to ask them.

In the UK, the Russell Group (24 research-intensive universities) has come together to create a shared set of principles on AI. American universities have not yet done anything similar. Note that all of these principles are conditioned on the idea that faculty and teaching staff are AI literate. I'm still figuring out what exactly that means, but I'm pretty sure I'm not AI literate and therefore cannot robustly teach AI literacy. I'm working on this, but got off to a slow start.

What I'm reading suggests that in a very short time, AI will be part of everything we do in the writing classroom. If not the Singularity, then something close. One of my students, in a formal class presentation, said that the single thing that characterized her college experience at Stanford (more than the Quad or her freshman dorm) was the API for ChatGPT. Her observation garnered a lot of laughs. I'm sure there are fearful or apprehensive students out there who have steered well clear (just like me, this teacher!), but I am not sure how many holdouts there will be in one or two years. I doubt there will be (m)any in five. More philosophically, I’ve been wrestling with what it means to teach writing in this new AI era. I hope we can make space to have these kinds of conversations in community soon.
