Interview with Dr. Cynthia Bailey on Her Writing in the Major Course for Computer Science

PWR: How does your new course, Equity and Governance for Artificial Intelligence, align with the Writing in the Major (WiM) requirements for Computer Science?
Dr. Bailey: I designed this course to integrate writing at every stage, ensuring students continuously refine their ability to communicate. There is an urgent need for voices that can effectively communicate across the divide between AI experts and policymakers, and this class aims to fill that need. That’s why I foreground audience awareness and structure assignments around real-world constraints. In Computer Science, effective writing often means translating complex technical ideas for non-expert audiences. We practice that skill in everything we do, and we add to it communication about the ethics, policy implementation, and stakeholder considerations needed to move policy proposals forward.
PWR: Your course includes professional writing genres such as policy memos, op-eds, and legislative recommendations. How has your experience as a technology advisor in Congress shaped your approach to teaching these forms of writing?
Dr. Bailey: In 2023–2024, I worked as an AI Policy Fellow in the United States Senate through the American Association for the Advancement of Science (AAAS), a job that required me to advise Senators and congressional staff on AI policy, educate government officials about technical elements of AI, and help Senators articulate their own AI policy viewpoints in clear and accurate language.
That experience was so transformative—and fun!—that I returned with a conviction that I wanted every Stanford student who was interested to qualify for opportunities like that. This goal gives my assignment design a laser focus on immediately applicable, practical skills associated with that work. For instance, when students write a legislative recommendation, they must select a particular member of Congress and follow the same templates used in congressional offices, where policy staffers distill complex information into a short, actionable brief for their boss. The focus on realism even extends to the deadline for that assignment, which I set quite short. Topping off the experience, students role-played presenting their recommendation to their legislator in a staff meeting, extemporaneously answering questions about their facts and recommendations. Similarly, the op-eds challenged students to embrace the realism of that genre of writing, including testing their arguments with different readers and adhering to strict word counts, so their work could actually be published in the mainstream press, something several students plan to do.
PWR: Has the fact that this is an AI policy course affected your approach to the guidance you give students about use of generative AI on their writing assignments?
Dr. Bailey: Absolutely! I am convinced that the only way forward for education in the era of generative AI is to refocus on intrinsic motivation: invite students to introspect about their goals, provide even more explicit rationales for course content and assignments, and engage students in an ongoing dialogue connecting the two. The realism and immediate practical applicability of our assignments gave me a powerful store of intrinsic motivation to draw on in enlisting students in responsibly limited use of generative AI tools. For example, in the “Honor Code and Use of External Resources” section of the op-ed assignment instructions, I remind students who plan to submit their work for publication that they will need to be able to attest to their authorship and copyright, something current U.S. Copyright Office policy does not extend to AI-produced works. For the legislative recommendation assignment, I lean into the role-playing aspect, saying, “Congressional staff are held to extremely high standards of integrity, excellence, and discretion. You should feel responsible for the accuracy of every detail of the work that you put in front of a sitting member of Congress.”
At the same time, I want the realism of the working conditions for our assignments to extend to the reality that many policy professionals, including congressional staff, leverage AI to summarize huge volumes of reading and to edit written work on blisteringly fast-paced deadlines. Every assignment includes a list of explicitly green-lit uses of generative AI tools, such as editing an op-ed draft that is slightly longer than 800 words down to exactly 800 words, or advising on a choice among three options for the opening paragraph’s “hook” (see above links for more example prompts). Because this is an AI policy class, I felt it was important for students to have firsthand experience with the capabilities and limitations of AI technology as it applies to day-to-day work tasks. Fortunately, an AI policy course affords ample opportunity to address these questions about the integrity of our own assignments in the broader context of ethical concerns about AI inputs and outputs, the future of work, and the meaning of human creativity.
PWR: Students are engaging with professional modes of technology policy writing in this course. What are some of the most valuable writing lessons they are learning from these assignments?
Dr. Bailey: Careful attunement to the audience was a core goal I had for our writing assignments. For example, for the op-ed assignment, students were encouraged to advocate for or against the same piece of legislation they had written about in their legislative recommendation. I wanted them to have firsthand experience seeing that the same underlying policy concepts could be cast in totally different forms for different audiences: a busy Congressional leader for the former, and the broad public for the latter. In software engineering, we often perform a task called “refactoring,” which means restructuring a piece of code without changing its external behavior, for reasons such as improving performance or maintainability, or adapting it to a different context, such as another programming language. I encouraged our computer science majors to think of recasting their legislative recommendation content as an op-ed as an activity akin to refactoring.
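For readers outside software engineering, a minimal sketch of what refactoring looks like may help; this is a hypothetical illustration (not an example from the course), showing the same behavior expressed in clearer form:

```python
# Before: works, but with duplicated logic and unhelpful names.
def f(xs):
    total = 0
    for x in xs:
        total = total + x
    return total / len(xs) if xs else 0

# After refactoring: identical external behavior, clearer structure.
def mean(values):
    """Return the arithmetic mean of a list, or 0 for an empty list."""
    if not values:
        return 0
    return sum(values) / len(values)

# The two versions agree on every input -- the behavior is unchanged.
assert f([1, 2, 3]) == mean([1, 2, 3]) == 2.0
assert f([]) == mean([]) == 0
```

The analogy to the assignment: the underlying "behavior" (the policy argument) stays fixed, while its expression is reworked for a new context.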
PWR: Which assignments or writing exercises have students found most engaging or rewarding so far?
Dr. Bailey: Students have been particularly engaged with the op-ed assignment. They are used to technical and research writing, where the first person is taboo and the focus is on being neutral and dispassionate. In contrast, this assignment challenges them to write with a distinct personal voice and persuasive urgency. In our debrief discussion, many students commented on how unfamiliar and even uncomfortable that initially felt, but they ultimately found it rewarding to introspect, discover the unique value their particular experiences bring to bear on AI policy, and channel that into a writing voice all their own.
PWR: Looking ahead, what do you envision for future iterations of this course?
Dr. Bailey: I see this course constantly evolving to stay ahead of emerging AI policy debates. Even in this first offering, there were days when a newsworthy development compelled me to throw out a lesson plan at the last minute. One example was a Chinese startup’s release of the DeepSeek model, whose compact architecture and inexpensive training upended prevailing assumptions about how to attain frontier-class performance.
I’m also considering expanding the writing component to include public comment writing, where students would draft responses to federal AI policy proposals, much like what advocacy groups and industry leaders submit to agencies like the FTC (Federal Trade Commission) or NIST (National Institute of Standards and Technology). This would give students another opportunity to practice writing for real-world impact. Ultimately, I want students to leave this course not just as stronger writers but as informed, strategic thinkers who understand how to shape AI policy through persuasive, evidence-based arguments.