"

Considerations for Responsible Writing with GenAI

So far we’ve outlined a broad set of issues and questions, the answers to which will in many cases depend on the particular situation in which you are being asked to produce writing. Equally, we’ve considered the importance, in that setting, of identifying the particular purpose and audience the writing serves, whether in an educational, professional, or even personal context. Attending to that situation means noticing the differences, for example, between the writing you might do in a classroom and the writing you might do at a job. But even the situation of a “classroom” requires more specific decision-making, given the different purposes that writing serves in different courses and disciplines, and the potentially different expectations of individual instructors. This section offers some general guidelines to help you navigate this decision-making in the context of your courses, with foundational principles that can extend to other contexts, so that you can be intentional about using GenAI with integrity in your writing.

To think about whether you should or could use GenAI for coursework, let’s first reflect on how writing works as a way for students to both develop and demonstrate expertise: a deep learning and mastery of core concepts and methodologies. For student writers producing texts in a variety of forms for their courses, it is essential to understand whether the writing you are being asked to do – with or without GenAI – is what I call transactional or conceptual. Transactional writing is operational and task-based, performing basic communicative functions, such as producing a shopping list. Conceptual writing reflects higher-level thinking, such as synthesis and argument, and the application of core concepts and practices, such as reading an article and producing a summary of it. In the context of a course, what counts as transactional and what counts as conceptual writing will depend on the practices and expectations of the discipline of study and the learning students are asked to demonstrate in the writing task. Therefore, your courses, programs, and instructors may differ in what uses they allow. For example, some instructors may consider the ability to develop your own ideas in response to those of others to be a foundational skill, one that students need to be able to exercise without the assistance of a tool. The step of idea-generating, or brainstorming, may in such contexts be a form of conceptual writing, and those instructors would see that step as work they expect students to do without the support of GenAI. Other instructors might see real value in having students engage with GenAI to get started on a project, so that they avoid getting “stuck” because they feel they don’t have any ideas, or lack confidence in the ideas they have come up with. (Students often report that they find coming up with ideas the most challenging step in starting an assignment (Anders, 2023).)

In the context of how you might write in your courses, it is important to understand a) how the work of a writing activity reflects particular learning you are or will be asked to demonstrate, and b) whether using a writing app instead of doing the work yourself will align with or contradict that learning goal. For example, I might use GenAI to create a recipe out of the ingredients I have in my kitchen cupboards, and I would consider that a transactional use of the tool. However, if I were training to be a chef, the act of choosing ingredients, of knowing which ones to put together and how to make something tasty, nutritious, or appealing, would be conceptual, reflecting both the foundational and higher-order understanding I would need to demonstrate to satisfy the expectations of my culinary training. Similarly, in order to revise a draft text, a writer needs a foundational understanding of how to write that kind of text for the audience and purposes it is meant to serve – the rhetorical situation, as we considered in Section II. Turning a revision over to GenAI without also having that expertise could result in a text that doesn’t make sense, doesn’t meet the expectations of the assignment, or doesn’t meet the needs of its intended readers. Lauren Goodlad and Samuel Baker (2023) make a helpful analogy: “Just as drivers who turn the wheel over to flawed autopilot systems surrender their judgment to an over-hyped technology, so a future generation raised on language models could end up, in effect, never learning to drive.” Do you have a solid enough foundation in the “rules of the road” for the type of writing you’re revising to evaluate the quality of the help GenAI has provided?

An example of why GenAI tools need our human oversight and expertise comes from the kinds of promises that some apps make: that their use will help writers “ace” assignments or increase workplace productivity by making their writing “better,” with “better” meaning writing that is clearer, more assertive, more concise, and free from “jargon.”[1] Such promises rest on the (mistaken) premise of a universal standard for “good” writing. However, as we’ve considered, what makes writing “good” – or “clear,” or “effective” – is highly context-dependent, reflecting its social situation: it meets the needs of the particular purpose, occasion, and audience for which it is designed. If we let a GenAI writing tool or app “fix” our writing without that kind of contextual framework built in, then we risk making different kinds of mistakes – for example, being seen as simplistic, aggressive, or even offensive. Moreover, the rhetorical situation includes current social attitudes that attach not only to the content but to writers as well: attitudes about gender and race, for example, shape how writers and texts are received by audiences. For instance, what might for one writer be seen as winningly assertive (and therefore an example of “good” or “effective” writing) might for a different writer be seen as off-puttingly aggressive, reactions that reflect social prejudices that typically and disproportionately affect people from equity-deserving groups. How we negotiate these unspoken social constraints, dynamics, and inequities necessarily informs whether or not we write “effectively.”

As both these biases and the Vanderbilt University example illustrate, writers need to ask an additional and critical question: what work is the text we produce meant to do in the world, beyond the literal meaning of the words? A message of condolence, for example, not only conveys information but also shows appropriate care and concern for the community in ways that will be recognizable to others from the same time, place, and culture. These are the social actions of a genre (Miller, 1984). Knowing how to perform those actions, and to perform them well, requires an understanding of cultural, social, and historical contexts – nuances that we cannot expect AI to appreciate.

Of course, students will also need to be attentive and critical readers of whatever they co-produce with GenAI, because of the limitations of these tools that we’ve already reviewed. Though at the time of writing GenAI tools have become more accurate and less likely to confabulate, it is still necessary to fact-check the content they produce. More substantially, ethical use of these tools includes checking that content for its biases: thinking about what perspectives and information are not included in the data set, or what ideas and attitudes are being promoted as “normal” because of what the LLM has been trained on. Like any tool – or any text – LLMs will necessarily be limited, partial, and biased. Ultimately, they are not the expert: you need to be!

The bottom line, from an integrity perspective, is for students to confirm with each instructor which uses are allowed, and then to take responsibility for the work they submit. Part of that responsibility is being transparent about how and when you’ve used these tools.[2] At minimum, you can include a note or acknowledgment to indicate that you’ve done so – as the Vanderbilt University letter did. However, if your instructor – or your employer – indicates that the use of these tools is not permitted, it would be unethical (and likely consequential) to use them even if you did provide such an acknowledgment.


  1. A current example of such claims can be found in advertisements for the platform Grammarly (Grammarly.com).
  2. The UBC Library citation guide for Generative AI and ChatGPT provides a useful overview and resources: https://guides.library.ubc.ca/GenAI/cite.

License


Discipline-based Approaches to Academic Integrity Copyright © 2024 by Anita Chaudhuri is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.