{"id":551,"date":"2025-08-14T12:36:29","date_gmt":"2025-08-14T16:36:29","guid":{"rendered":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/?post_type=chapter&#038;p=551"},"modified":"2025-08-14T13:15:55","modified_gmt":"2025-08-14T17:15:55","slug":"writing-and-gai-use-in-the-rhetorical-situation","status":"publish","type":"chapter","link":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/chapter\/writing-and-gai-use-in-the-rhetorical-situation\/","title":{"raw":"Writing and GAI Use in the Rhetorical Situation","rendered":"Writing and GAI Use in the Rhetorical Situation"},"content":{"raw":"The concerns about Vanderbilt University\u2019s email remind us that writing is <strong>socially situated<\/strong> \u2013 that is, it responds to a particular audience to fulfil a particular purpose, in keeping with the recognized practices of the time and place of its production. Writing is therefore about more than putting words on a page: as a form of communication, it does work in the world, including developing and sustaining human relationships. In our writing, we can show that we understand the needs and expectations of our imagined readers and the communities they represent \u2013 or, by failing to meet them, suggest that we don\u2019t have that shared understanding.\r\n\r\nAcademic writers, including students writing in the contexts of their courses, similarly negotiate the social contexts \u2013 the <strong>rhetorical situation<\/strong> (Bitzer 1968) \u2013 when they write to and for other scholars, about the knowledge they are producing through their research. One expectation of the academic community is that scholars do their own work, and that they uphold the foundational shared value of academic integrity. 
The development of large language models like ChatGPT and other text-generating software \u2013 tools that can apparently take over so many steps of scholarly writing, from brainstorming to paragraph creation to revision \u2013 means that we need to think carefully and make informed choices about how and when it might be appropriate to use such tools, and when such uses will instead not meet the community expectations of what it means to write with integrity. The expectation that we do our \u201cown work\u201d also can help us reflect on the functions that writing performs, including \u2013 in educational settings \u2013 as part of the learning process, and what foundational knowledge we might need to develop about how to write in order to use GAI effectively.\r\n\r\nAn important first step in upholding our responsibilities in relation to GAI is to understand how it works. Because GAI development is happening so quickly and in such diverse ways that any definitive overview would almost immediately be outdated, here we will provide just a very basic introduction to <strong>large language models<\/strong> <strong>(LLMs)<\/strong>, the technology that has had such transformative and immediate implications for society (ChatGPT, for example, is built on an LLM). In response to a prompt or query from a user, an LLM uses natural language processing to produce predictive text or images based on the Internet content that it \u201cscrapes\u201d for patterns; the LLM\u2019s \u201calgorithm makes choices about what word will come next in a sequence\u201d (King 2023). The more content or data it has to scrape, the more it \u201clearns\u201d and the more accurate its predictions become \u2013 but that accuracy will be limited by the dataset on which it is trained. 
For most apps, it is not possible to know what particular sources the LLM is drawing upon and, significantly, what kinds of sources (and voices and perspectives) it isn\u2019t using: if the dataset is restricted to digital-only sources, for example, it excludes older materials as well as primary sources such as archival records, and would likely omit sound files that might capture oral histories. If the LLM is trained only on English-language texts, such a restriction would similarly represent a limited, typically Eurocentric, knowledge set (e.g., Furze 2023). In addition to such potential constraints, the practices of building the training data raise privacy, copyright, and citation concerns. In many platforms, such as ChatGPT, any content users share with the app becomes incorporated into the LLM\u2019s dataset, and thus available to all users of that app. Other apps, such as Google\u2019s, that scrape internet sources may do so without the original author\u2019s consent, and without attribution or payment for their use. As Anna Mills and Elle Dimopolous (2023) note, LLMs don\u2019t copy the material that they scrape, but typically they also don\u2019t cite any of the original authors. These are some, though certainly not all, of the compelling ethical issues that GAI raises, and that we, as consumers, should weigh in deciding when and how to use these tools.\r\n\r\nBecause of this design, when we write with GAI, the texts that we produce may be shaped by our intentions \u2013 as captured in the prompt we provide the app \u2013 but they are not wholly our own, both in terms of what the text says and how it says it, that is, in its word choices, structure, style, and organization. For students using GAI in their courses, as well as for researchers and professionals, the questions of <strong>authorship<\/strong> and <strong>authenticity<\/strong> arise: can we say that we are the (sole) authors of a GAI-produced text? 
Is it actually our own work, for which we are responsible? Does it reflect and uphold the expectations of the <strong>social (or rhetorical) situation<\/strong>? Are we comfortable with the knowledge or recommendations it offers, which become a representation of ourselves as scholars and communicators, and as members of the communities to which we write?","rendered":"<p>The concerns about Vanderbilt University\u2019s email remind us that writing is <strong>socially situated<\/strong> \u2013 that is, it responds to a particular audience to fulfil a particular purpose, in keeping with the recognized practices of the time and place of its production. Writing is therefore about more than putting words on a page: as a form of communication, it does work in the world, including developing and sustaining human relationships. In our writing, we can show that we understand the needs and expectations of our imagined readers and the communities they represent \u2013 or, by failing to meet them, suggest that we don\u2019t have that shared understanding.<\/p>\n<p>Academic writers, including students writing in the contexts of their courses, similarly negotiate the social contexts \u2013 the <strong>rhetorical situation<\/strong> (Bitzer 1968) \u2013 when they write to and for other scholars, about the knowledge they are producing through their research. One expectation of the academic community is that scholars do their own work, and that they uphold the foundational shared value of academic integrity. The development of large language models like ChatGPT and other text-generating software \u2013 tools that can apparently take over so many steps of scholarly writing, from brainstorming to paragraph creation to revision \u2013 means that we need to think carefully and make informed choices about how and when it might be appropriate to use such tools, and when such uses will instead not meet the community expectations of what it means to write with integrity. 
The expectation that we do our \u201cown work\u201d also can help us reflect on the functions that writing performs, including \u2013 in educational settings \u2013 as part of the learning process, and what foundational knowledge we might need to develop about how to write in order to use GAI effectively.<\/p>\n<p>An important first step in upholding our responsibilities in relation to GAI is to understand how it works. Because GAI development is happening so quickly and in such diverse ways that any definitive overview would almost immediately be outdated, here we will provide just a very basic introduction to <strong>large language models<\/strong> <strong>(LLMs)<\/strong>, the technology that has had such transformative and immediate implications for society (ChatGPT, for example, is built on an LLM). In response to a prompt or query from a user, an LLM uses natural language processing to produce predictive text or images based on the Internet content that it \u201cscrapes\u201d for patterns; the LLM\u2019s \u201calgorithm makes choices about what word will come next in a sequence\u201d (King 2023). The more content or data it has to scrape, the more it \u201clearns\u201d and the more accurate its predictions become \u2013 but that accuracy will be limited by the dataset on which it is trained. For most apps, it is not possible to know what particular sources the LLM is drawing upon and, significantly, what kinds of sources (and voices and perspectives) it isn\u2019t using: if the dataset is restricted to digital-only sources, for example, it excludes older materials as well as primary sources such as archival records, and would likely omit sound files that might capture oral histories. If the LLM is trained only on English-language texts, such a restriction would similarly represent a limited, typically Eurocentric, knowledge set (e.g., Furze 2023). 
In addition to such potential constraints, the practices of building the training data raise privacy, copyright, and citation concerns. In many platforms, such as ChatGPT, any content users share with the app becomes incorporated into the LLM\u2019s dataset, and thus available to all users of that app. Other apps, such as Google\u2019s, that scrape internet sources may do so without the original author\u2019s consent, and without attribution or payment for their use. As Anna Mills and Elle Dimopolous (2023) note, LLMs don\u2019t copy the material that they scrape, but typically they also don\u2019t cite any of the original authors. These are some, though certainly not all, of the compelling ethical issues that GAI raises, and that we, as consumers, should weigh in deciding when and how to use these tools.<\/p>\n<p>Because of this design, when we write with GAI, the texts that we produce may be shaped by our intentions \u2013 as captured in the prompt we provide the app \u2013 but they are not wholly our own, both in terms of what the text says and how it says it, that is, in its word choices, structure, style, and organization. For students using GAI in their courses, as well as for researchers and professionals, the questions of <strong>authorship<\/strong> and <strong>authenticity<\/strong> arise: can we say that we are the (sole) authors of a GAI-produced text? Is it actually our own work, for which we are responsible? Does it reflect and uphold the expectations of the <strong>social (or rhetorical) situation<\/strong>? 
Are we comfortable with the knowledge or recommendations it offers, which become a representation of ourselves as scholars and communicators, and as members of the communities to which we write?<\/p>\n","protected":false},"author":1076,"menu_order":1,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":["laurie-mcneill"],"pb_section_license":""},"chapter-type":[],"contributor":[78],"license":[],"class_list":["post-551","chapter","type-chapter","status-publish","hentry","contributor-laurie-mcneill"],"part":549,"_links":{"self":[{"href":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/wp-json\/pressbooks\/v2\/chapters\/551","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/wp-json\/wp\/v2\/users\/1076"}],"version-history":[{"count":1,"href":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/wp-json\/pressbooks\/v2\/chapters\/551\/revisions"}],"predecessor-version":[{"id":552,"href":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/wp-json\/pressbooks\/v2\/chapters\/551\/revisions\/552"}],"part":[{"href":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/wp-json\/pressbooks\/v2\/parts\/549"}],"metadata":[{"href":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/wp-json\/pressbooks\/v2\/chapters\/551\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/wp-json\/wp\/v2\/media?parent=551"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/wp-json\/pressbooks\/v2\/chapter-type?post=551"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/ubcacademici
ntegrity\/wp-json\/wp\/v2\/contributor?post=551"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/ubcacademicintegrity\/wp-json\/wp\/v2\/license?post=551"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}