"

Writing and GAI Use in the Rhetorical Situation

Laurie McNeill

The concerns about Vanderbilt University’s email remind us that writing is socially situated – that is, it responds to a particular audience to fulfil a particular purpose, in keeping with the recognized practices of the time and place of its production. Writing is therefore about more than putting words on a page: as a form of communication, it does work in the world, including developing and sustaining human relationships. In our writing, we can show that we understand the needs and expectations of our imagined readers and the communities they represent – or, by failing to meet them, suggest that we don’t have that shared understanding.

Academic writers, including students writing in the contexts of their courses, similarly negotiate the social contexts – the rhetorical situation (Bitzer 1968) – when they write to and for other scholars, about the knowledge they are producing through their research. One expectation of the academic community is that scholars do their own work, and that they uphold the foundational shared value of academic integrity. The development of large language models like ChatGPT and other text-generating software – tools that can apparently take over so many steps of scholarly writing, from brainstorming to paragraph creation to revision – means that we need to think carefully and make informed choices about how and when it might be appropriate to use such tools, and when such uses will instead not meet the community expectations of what it means to write with integrity. The expectation that we do our “own work” can also help us reflect on the functions that writing performs, including – in educational settings – as part of the learning process, and on what foundational knowledge we might need to develop about how to write in order to use GAI effectively.

An important first step in upholding our responsibilities in relation to GAI is to understand how it works. Because GAI development is happening so quickly and in such diverse ways, making any definitive overview almost immediately outdated, here we will provide just a very basic introduction to large language models (LLMs), the technology that has had such transformative and immediate implications for society (ChatGPT, for example, is an application built on an LLM). In response to a prompt or query from a user, LLMs use natural language processing to produce predictive text or generated images based on Internet content that they “scrape” for patterns; the LLM’s “algorithm makes choices about what word will come next in a sequence” (King 2023). The more content or data an LLM has to scrape, the more it “learns” and the more accurate its predictions become – but that accuracy will be limited by the dataset on which it is trained. For most apps, it is not possible to know what particular sources the LLM is drawing upon and, significantly, what kinds of sources (and voices and perspectives) it isn’t using: if the data set is restricted to digital-only sources, for example, it excludes older materials as well as primary sources such as archival records, and would likely omit sound files that might capture oral histories. If the LLM is trained only on English-language texts, such a restriction would similarly represent a limited, typically Eurocentric, knowledge set (e.g., Furze 2023). In addition to such potential constraints, the practices of building the training data raise privacy, copyright, and citation concerns. In many platforms, such as ChatGPT, any content users share with the app becomes incorporated into the LLM’s data set, and thus available to all users of that app; other apps, such as Google’s, that scrape internet sources may do so without the original authors’ consent, and without attribution or payment for their use. As Anna Mills and Elle Dimopolous (2023) note, LLMs don’t copy the material that they scrape, but typically they also don’t cite any of the original authors. These are some, though certainly not all, of the compelling ethical issues that GAI raises, and that we as consumers should consider when deciding whether and how to use these tools.
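To make King’s description of next-word prediction concrete, here is a deliberately tiny sketch in Python – far simpler than any real LLM, and offered only as an illustration, not as how any actual system is built. It counts which words follow which in a short sample text and then “predicts” the most frequent successor; the sample text and function name are invented for this example.

    # A toy next-word predictor: it counts word pairs in a tiny sample
    # text and predicts the most frequent successor. Real LLMs use neural
    # networks trained on vast datasets, but the core idea of choosing
    # the next word from patterns in training text is comparable.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the cat ate the fish .".split()

    # For each word, count which words follow it and how often.
    successors = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        successors[current][following] += 1

    def predict_next(word):
        """Return the most common word to follow `word` in the sample text."""
        if word not in successors:
            return None  # no pattern in the training data, no prediction
        return successors[word].most_common(1)[0][0]

    print(predict_next("the"))  # -> "cat": it followed "the" most often
    print(predict_next("dog"))  # -> None: "dog" never appeared in training

Even this toy example shows why the dataset matters: the model can only reproduce patterns it has already seen, and a word (or a voice, or a perspective) absent from its training data simply cannot appear in its predictions.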

Because of this design, when we write with GAI, the texts that we produce may be shaped by our intentions – as captured in the prompt we provide the app – but they are not wholly our own, both in terms of what the text says and how it says it, that is, in its word choices, structure, style, and organization. For students using GAI in their courses, as well as for researchers and professionals, questions of authorship and authenticity arise: can we say that we are the (sole) authors of a GAI-produced text? Is it actually our own work, for which we are responsible? Does it reflect and uphold the expectations of the social (or rhetorical) situation? Are we comfortable with the claims or recommendations the text makes, which become a representation of ourselves as scholars, communicators, and members of the communities to which we write?

License


Discipline-based Approaches to Academic Integrity Copyright © 2024 by Anita Chaudhuri is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
