On Norah Fahim’s new class, Writing and Representing Ourselves in the Time of GenAI
To Be or to "Let b be the word in sentence that has the highest keyword rank if b exists"1
What, really, is intelligence? How do we know when we encounter it? How should we judge whether it be "real" or "artificial"? These are no longer questions belonging only to late-night dorm-room speculations; they are pressing issues of personal, professional and societal import. Programs such as ChatGPT are rapidly transforming both teaching and learning, even as they seem poised to upend a vast range of professions and activities. At this point a number of metaphors offer themselves as models of our current moment: genies escaping from bottles, Pandora opening an interesting-looking box—you get the picture. Clearly something new is abroad in the land, and we all will have to come to grips with it.
But how new is it, actually? George Lucas foresaw a version of AI in his first film (from 1971), the relentlessly dystopian THX 1138. The title character (in this thoroughly programmed world of the future, everyone's name takes the form of an alphanumeric code-phrase) enters an automated "confession booth" to bare his soul and have a nice heart-to-heart chat with a counseling program concealed behind a generic portrait of—Jesus?
And who exactly is this counseling program? Apparently it was inspired by ELIZA, a very early proto-AI system developed by Joseph Weizenbaum at MIT.2 ELIZA was named after Eliza Doolittle, the character in G.B. Shaw's Pygmalion, who learns to transform her social class identity through language and dialect acquisition. With ELIZA Weizenbaum took up the challenge of the "Turing Test," that is, the challenge of creating an artificial intelligence that could fool a human interacting with it into thinking they were encountering a fellow human being. ELIZA would respond to prompts by asking questions, utilizing scripts that seized upon phrases used in the prompt. One script was designed to mimic the tone of a Rogerian psychotherapist's discourse, reflecting the client's words back to them and adding a brief question or challenge. ELIZA was a very early prototype of a chatbot, in short. A forerunner of AI as we are coming to know it.
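The mechanism is simpler than the illusion it creates. The sketch below is a hypothetical miniature in the spirit of ELIZA's Rogerian script, not Weizenbaum's original code: a few invented keyword rules, each with a rank, where the highest-ranked keyword found in the prompt selects the response template—echoing the epigraph's "let b be the word in sentence that has the highest keyword rank."

```python
import random

# Hypothetical rules in the spirit of ELIZA's DOCTOR script (not the original).
# Each entry: (keyword, rank, response templates); {0} holds the reflected
# phrase captured after the keyword.
RULES = [
    ("mother", 10, ["Tell me more about your mother.",
                    "How do you feel about your family?"]),
    ("i am",    5, ["Why do you say you are {0}?",
                    "How long have you been {0}?"]),
    ("i feel",  5, ["Why do you feel {0}?"]),
]
DEFAULT = ["Please go on.", "What does that suggest to you?"]

# Simple pronoun reflection, so "I am sad" comes back as "you are sad".
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(phrase: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(prompt: str) -> str:
    text = prompt.lower()
    # Pick the matching keyword with the highest rank, per the epigraph.
    best = max((rule for rule in RULES if rule[0] in text),
               key=lambda rule: rule[1], default=None)
    if best is None:
        return random.choice(DEFAULT)
    keyword, _, templates = best
    # Seize the phrase following the keyword and reflect its pronouns.
    tail = text.split(keyword, 1)[1].strip(" .!?")
    return random.choice(templates).format(reflect(tail))
```

Given "I am sad," this toy version answers along the lines of "Why do you say you are sad?"—no understanding anywhere, just pattern matching and mirrored pronouns, which is precisely why the felt "connection" people reported is so striking.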
Let’s just allow that very, very buzzy acronym to sink in for a moment: A I. Artificial Intelligence. A phrase to make any PWR instructor’s pulse speed up—either by awakening in the mind images of robotic confession and lazy (or chronically time-pressed) students shortcutting through the writing process, or perhaps by offering the possibilities of a powerful new tool, which, properly employed, might unleash students’ creativity in unanticipated ways. Or both, or neither. Like any tool, it can lend itself to helpful or not-so-helpful uses. In the end, it is the user of the tool who decides. But like any tool, it may also offer temptations to the user.
But do we consider sufficiently how tools almost seem to want to do things in a particular way? A hammer makes a poor screwdriver; it wants to hit things. Tools can point us in particular directions, sometimes without our being aware of it. This is one of the issues our wonderful PWR colleague Norah Fahim explores in her exciting new advanced PWR course, Writing and Representing Ourselves in the Time of GenAI, cross-listed with CSRE. How have popular AI programs (such as, most notably, ChatGPT) incorporated biases and stereotypes that skew the responses they provide to a user’s prompts? As Norah puts it in the course syllabus:
This course offers you the opportunity to reflect more deeply on the risks that GenAI poses in terms of reinforcing stock-narratives about marginalized groups. In this class, we will work together via our course readings, guest-lecture visits, and course assignments to find practical ways to counter the normative narratives that feed these Generative AI tools.
Norah brings in a range of guest speakers (from Stanford Law School, the Stanford Office of Digital Accessibility and Stanford's d.school, as well as Princeton's Writing Program) and challenges her students with assignments that invite them to bring their critical awareness to bear on their uses of AI tools. Because ChatGPT is a large language model (LLM), this critical awareness inevitably calls into question what Norah refers to as “the homogenizing and colonizing effect of the English Language as the majority language used in GenAI tools.” At the same time, however, students are encouraged to “explor[e] ways in which such tools can alternatively be ethically used or trained to revitalize indigenous or minoritized languages.” Students in the class “also consider how GenAI tools can be leveraged to represent a wider range of socio-cultural narratives and experiences lived by minoritized groups.” Clearly this is no flat Luddite rejection of the technology, but an active engagement with both its blind spots and its extraordinary potential.
An important materialization of this engagement is the creation of a “flipped” genre project. Starting from an earlier Critical Discourse or Critical Visual Discourse analysis, each student will “propose and adopt a new genre to display [their] findings/analysis with the aim of exploring ideas on how to intervene and counter some of the dominant and normative cultural narrative results [they] (may) have come across earlier (eg. feeding the tool with more diverse images/examples, or proposing a different model).” This public-facing project might take the form of an X (formerly Twitter) thread, a slide deck, a TikTok video or a Medium article. In this way students immerse themselves not only in the problematics of the content these AI systems produce, but also the formal properties of the genres in which they produce content. See some examples from students here and here!
The reader may have noticed the cheap anthropomorphizing slipped in above, where I claimed that a hammer “wants” to hit things. Of course hammers don’t want anything; they are not capable of agency. Neither was ELIZA any kind of active, volitional intelligence, nor is any AI system. Nonetheless many people reported after an exchange with ELIZA that they felt they had genuinely connected with another living being. We ascribe consciousness to inanimate things all the time (I have lavished a few curses on my car, for example). We impute intelligence to artificiality. Machines can learn—and they need to be taught. Here we encounter the risks attendant upon that old computing rule, GIGO: garbage in, garbage out. This is the central question Norah’s students explore: with what prejudices might we be infecting these AI systems? How might ChatGPT simply hand biases along when we use it uncritically?
Norah emphasizes that these questions have a bearing not only on our writing practices, but also on our identities as selves, as thinkers. And while these concerns are receiving more attention in the academic literature around AI topics, the tech industry itself still seems to be caught up in a gold-rush mindset, intent on building out the technology without considering too deeply the questionable elements embedded in its products. For instance, GPT-2’s training data was sourced by scraping outbound links from Reddit, and Pew Internet Research’s 2016 survey reveals that 67% of Reddit users in the United States were men, and 64% were between the ages of 18 and 29 (as cited in Bender et al. 2021). Our old friend (I use the word advisedly, though it did bring some ELIZA connections to my attention) Wikipedia has also been important in training the LLMs.
Norah has been impressed with the determination of her students to roll up their sleeves and do something about these biases. It all begins with asking the right questions, and students are pushing forward. For example: if you should find yourself in a group of technology decision-makers and you discover that you are the only one in the room who represents or at least has some connection to a minority viewpoint, how do you bring that viewpoint into the discussion? Students share with each other their aha moments—not just discoveries of further gee-whiz dimensions of the technology, but also breakthroughs of critical awareness—creating a communal learning environment.
1. Actual phrasing from an early AI program, ELIZA.
2. A very engaging and compact account of the ELIZA project can be found in Janet Murray, Hamlet on the Holodeck: The Future of Narrative in Cyberspace (Cambridge, MA: MIT Press, 1999), pp. 68-74. There is also the later apologia published by ELIZA’s inventor, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: W.H. Freeman, 1976).