{"id":315,"date":"2026-02-05T17:41:57","date_gmt":"2026-02-05T22:41:57","guid":{"rendered":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/?post_type=chapter&#038;p=315"},"modified":"2026-04-02T17:57:52","modified_gmt":"2026-04-02T21:57:52","slug":"1-4-gen-ai-and-technical-writing","status":"publish","type":"chapter","link":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/chapter\/1-4-gen-ai-and-technical-writing\/","title":{"raw":"1.4 Gen AI and Technical Writing","rendered":"1.4 Gen AI and Technical Writing"},"content":{"raw":"<p style=\"text-align: left\"><em>NOTE:\u00a0 AI technology is moving fast (and likely breaking things), so as I write this in spring 2026, I acknowledge that both AI technology and the research on the impacts of that technology will develop rapidly, and this chapter may soon require updating.<\/em><\/p>\r\n\r\n\r\n<hr \/>\r\n\r\nGenerative AI may well become a legitimate part of professional writing practice; indeed, it already has in some quarters. Given that technical writing often follows\u00a0 well-defined genre conventions and patterns, Large Language Models (LLMs), which are very good at replicating patterns, may be an effective tool to help technical writers work efficiently in some contexts. However, the need for precision and accuracy in technical writing means that humans need to exercise extreme caution and diligent oversight when using AI generated content. This textbook does not offer instruction on \u201chow to use Gen AI\u201d to help you with your professional communication. There are three main reasons for this:\r\n<ul>\r\n \t<li>As students and aspiring professionals, it's crucial to develop the distinctly human competencies involved in professional communication practices, even if sometime in the future, you use Gen AI to help you do this work. 
Increasingly, research is showing that using AI at the early stages of developing these competencies can negatively impact your cognitive development, and over-reliance can result in \"de-skilling\" and cognitive decline.<\/li>\r\n \t<li>Content generated by AI is prone to biases, errors, and fabrications that require skilled human oversight (the kinds of skills you are meant to develop in a university writing course) to detect and correct.<\/li>\r\n \t<li>As an author and educator, I have ethical concerns about the social and environmental costs of using Gen AI as part of my professional teaching and writing practice. Many of my students have expressed similar concerns.<\/li>\r\n<\/ul>\r\nSince the free public release of ChatGPT and other commercial AI tools in 2022, many students have chosen to use Generative AI to help them complete assignments in school. At the university level, we are beginning to see serious problems resulting from students\u2019 over-reliance on these tools, in terms of <a href=\"https:\/\/arxiv.org\/abs\/2506.08872\" target=\"_blank\" rel=\"noopener\">eroding high-level cognitive skills<\/a>[footnote]N. Kosmyna et al., \u201c<a href=\"https:\/\/arxiv.org\/abs\/2506.08872\" target=\"_blank\" rel=\"noopener\">Your Brain on ChatGPT: Accumulation of Cognitive Debt when using an AI Assistant for Essay Writing Tasks<\/a>,\u201d arXiv:2506.08872v2, Dec. 2025.[\/footnote] such as reasoning, critical thinking, problem-solving, brainstorming, researching, collaborating, and communicating. 
The development of these cognitive skills is necessary for success in our information ecology.\r\n\r\nStudents\u2019 inappropriate use of AI is also resulting in <a href=\"https:\/\/www.insidehighered.com\/news\/students\/academics\/2025\/05\/20\/experts-weigh-everyone-cheating-college#:~:text=But%20the%20student%20data%20paints,recognizing%20generative%20Al%E2%80%93created%20content.\" target=\"_blank\" rel=\"noopener\">a dramatic increase in academic integrity violations<\/a>.[footnote]C. Flaherty, \"<a href=\"https:\/\/www.insidehighered.com\/news\/students\/academics\/2025\/05\/20\/experts-weigh-everyone-cheating-college#:~:text=But%20the%20student%20data%20paints,recognizing%20generative%20Al%E2%80%93created%20content.\" target=\"_blank\" rel=\"noopener\">AI and Threats to Academic Integrity: What to Do<\/a>.\" <em>Inside Higher Ed<\/em>, 20 May 2025.[\/footnote] These violations happen for a variety of reasons, but often result from a poor understanding of what Gen AI is and what its limitations are.\r\n\r\nWhether or not you intend to use Gen AI, it is important to develop Gen AI literacy. Therefore, this chapter provides information to help you critically consider the following questions:\r\n<div class=\"textbox shaded\">\r\n<ol>\r\n \t<li>What is Gen AI? How does it work? What are its limitations?<\/li>\r\n \t<li>How could using Gen AI impact your learning?<\/li>\r\n \t<li>What are the implications of using Gen AI as part of professional practice?<\/li>\r\n \t<li>What are the larger social and environmental impacts of using Gen AI?<\/li>\r\n \t<li>How can Gen AI be used ethically and responsibly?<\/li>\r\n<\/ol>\r\n<\/div>\r\n<strong>NOTE<\/strong>: You may choose <strong>NOT<\/strong> to use AI for a variety of reasons, and that is a perfectly valid choice. An increasing number of professionals and students are <a href=\"https:\/\/www.bbc.com\/news\/articles\/c15q5qzdjqxo\" target=\"_blank\" rel=\"noopener\">refusing to use AI<\/a>. 
It is still important to understand how AI is being used and the impacts it is having, since your classmates and future colleagues may be using Gen AI, and this may affect you in the classroom and workplace.\r\n\r\n<hr \/>\r\n\r\n<h1>1. What is Generative AI?<\/h1>\r\nYuval Noah Harari, author of <em>Sapiens<\/em> and <em>Nexus: A Brief History of Information Networks from the Stone Age to AI<\/em>, defines AI in general as not just a \"tool\" we might use, but as an \"agent,\" something that can \u201clearn and change by itself and come up with decisions and ideas that we can\u2019t anticipate.\u201d He argues that instead of thinking of this as \u201cartificial\u201d intelligence (suggesting that we have some control over it), we should think of it as \u201calien\u201d intelligence, one that works very differently from our own and can make unilateral decisions that may have serious impacts on us.[footnote]Yuval Noah Harari: <a href=\"https:\/\/bigthink.com\/series\/full-interview\/collapse-of-truth\/\" target=\"_blank\" rel=\"noopener\">Why advanced societies fall for mass delusion<\/a>, <em>Big Think<\/em>, Jan. 2026.[\/footnote] Mackenzie <em>et al.<\/em> 
(2024), in their comprehensive <a href=\"https:\/\/uwaterloo.ca\/associate-vice-president-academic\/sites\/default\/files\/uploads\/documents\/genai-overview-final-june-2024.pdf\" target=\"_blank\" rel=\"noopener\">Generative Artificial Intelligence (Gen AI) Overview<\/a>, define Gen AI more specifically as a type of artificial intelligence that can generate new content such as text, code, images, audio, and video by extrapolating from its training data.\r\n\r\nText-generating AI tools are built on <strong>Large Language Models<\/strong> (LLMs), which analyze vast amounts of training data to learn the statistical relationships between words and phrases that occur together, and then generate \"natural language\" responses to prompts based on \u201cprobability\u201d \u2013 somewhat like how your cell phone offers predictive text as you type, predicting what your next word is likely to be. An LLM is therefore not \"thinking\" or creating \"meaning\"; rather, it is generating probabilistic word chains to provide the most predictable and plausible-sounding response, based on the material in the training data. It can process a prompt very quickly and confidently provide plausible-sounding results, but what it generates is simply based on the most commonly found combinations of words and phrases, not necessarily the most accurate information, making it necessary for <strong><em>you<\/em><\/strong> to review, fact-check, and likely revise the output.\r\n\r\nConsider this quotation from the 2024 book <em>AI Snake Oil<\/em>, which attempts to counteract some of the AI \"hype\" we are constantly exposed to.\r\n<div class=\"textbox shaded\">\r\n<p style=\"text-align: left\">\"Philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth. In this sense, chatbots are bullshitters. They are trained to produce plausible text, not true statements. 
ChatGPT is shockingly good at sounding convincing on any conceivable topic. But there is no source of truth during training. Even if AI developers were to somehow accomplish the exceedingly implausible task of filtering the dataset to only contain true statements, it wouldn't matter. The model cannot memorize all those facts; it can only learn the patterns and remix them when generating text. So, many of the statements it generates would be false.\"<\/p>\r\n<p style=\"text-align: right\">(Narayanan &amp; Kapoor, <a href=\"https:\/\/press.princeton.edu\/books\/hardcover\/9780691249131\/ai-snake-oil?srsltid=AfmBOorGQ3Np_klH0ajVxWav5jesXbNMNX_RbkYpbbsU4a6WZBViFrnP\" target=\"_blank\" rel=\"noopener\"><em>AI Snake Oil<\/em><\/a>, 2024).<\/p>\r\n\r\n<\/div>\r\nRecent research on AI has highlighted several serious concerns, discussed below.\r\n<h2 style=\"text-align: center\">Language Homogenization<\/h2>\r\nThere is significant worry that widespread use of LLMs will result in a <a href=\"https:\/\/www.newyorker.com\/culture\/infinite-scroll\/ai-is-homogenizing-our-thoughts\" target=\"_blank\" rel=\"noopener\">homogenization of language and thought<\/a>[footnote]K. Chayka, \"<a href=\"https:\/\/www.newyorker.com\/culture\/infinite-scroll\/ai-is-homogenizing-our-thoughts\" target=\"_blank\" rel=\"noopener\">A.I. is Homogenizing our Thoughts: Recent studies suggest that tools such as ChatGPT make our brains less active and our writing less original<\/a>.\" <em>The New Yorker<\/em>, 25 June 2025.[\/footnote] based on the probabilistic model it uses. Increasingly, the training data used by LLMs will include content generated by LLMs, creating a vicious cycle of language homogenization where AI is simply regurgitating new versions of its own previously-generated content. A key goal in honing your communications skills is to develop your own \"voice\" and writing processes. 
While using AI may help you \"sound more professional\", it also robs you of the opportunity to develop your own voice and style, as well as a sense of how to adapt it for different audiences, and can have the effect of <a href=\"https:\/\/www.asccc.org\/content\/chatgpt-and-homogenization-language-how-adoption-ai-silences-student-voices\" target=\"_blank\" rel=\"noopener\">silencing student voices<\/a>.[footnote]J. Kaiser and T.J. Richmond, \"<a href=\"https:\/\/www.asccc.org\/content\/chatgpt-and-homogenization-language-how-adoption-ai-silences-student-voices\" target=\"_blank\" rel=\"noopener\">ChatGPT and the Homogenization of Language: How the Adoption of AI Silences Student Voices<\/a>.\" Academic Senate for California Community Colleges, Nov 2024.[\/footnote]\r\n<h2 style=\"text-align: center\">Privacy Issues<\/h2>\r\nLLMs like ChatGPT are trained on a wide range of sources, including books, reports, datasets, and code, but increasingly, much of the training data is coming from websites and social media, especially sites like <a href=\"https:\/\/www.perrill.com\/why-is-reddit-cited-in-llms\/\" target=\"_blank\" rel=\"noopener\">Reddit, which signed a lucrative contract<\/a>[footnote]J. Jones, \u201cWhy Reddit is frequently cited by Large Language Models,\u201d <em>Perrill<\/em> (online), 23 Sept. 2025. Available: https:\/\/www.perrill.com\/why-is-reddit-cited-in-llms\/[\/footnote] to allow its content to be used as training material. Many of these \u201cdatasets\u201d may not be considered sources of accurate or credible information, but they do provide examples of \u201cnatural language patterns\u201d for AI to emulate. 
For a while, OpenAI was even using prompts and interactions created by users of ChatGPT as training material, resulting in <a href=\"https:\/\/arstechnica.com\/tech-policy\/2025\/08\/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results\/\" target=\"_blank\" rel=\"noopener\">private chats showing up in Google searches<\/a>![footnote]A. Belanger, \u201cChatGPT users shocked to learn their chats were in Google search results,\u201d <em>Ars Technica<\/em>, 1 Aug. 2025. Available: https:\/\/arstechnica.com\/tech-policy\/2025\/08\/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results\/[\/footnote] Many users were outraged at the public release of their private chats. Data privacy is a major concern when using commercial AI tools. They may be \"free to use,\" but these companies are collecting copious amounts of personal data from users! And it's been said that \"data is the new oil.\" Also consider the possibility that every time you use AI, you are training it, possibly to take the job you hope to get when you finish university!\r\n<h2 style=\"text-align: center\">Bias<\/h2>\r\nThe content generated in response to a user's prompt is based on datasets that may not always present accurate or reliable information. LLMs can also replicate and even amplify biases inherent in the training data the AI is learning from. Data that contains content that discriminates against or marginalizes underrepresented, minority, and equity-deserving groups may appear and be amplified in AI-generated outputs. As a result, outputs may be racist, sexist, ageist, ableist, homophobic, transphobic, antisemitic, Islamophobic, xenophobic, deceitful, derogatory, culturally insensitive, and\/or hostile. While companies like OpenAI have made attempts to address this, the risks persist. 
For a visually illustrated example, watch this YouTube video on \u201cHow AI Image Generators Make Bias Worse.\u201d\r\n\r\n&nbsp;\r\n\r\n[embed]https:\/\/www.youtube.com\/watch?v=L2sQRrf1Cd8[\/embed]\r\n<h2 style=\"text-align: center\">Errors, Inaccuracies, and \"Hallucinations\"<\/h2>\r\nProbably the most concerning issue for you as students, and for professionals in any workplace, is the fact that content generated by LLMs can be highly unreliable, even though it <em>appears<\/em> plausible and professional. A layperson might not immediately spot any problems in the content it generates, but an expert in the field would quickly notice errors, inaccuracies, and outright fabrications. Below are some recent examples that provide cautionary tales for why we must use Gen AI with extreme caution and exercise diligent human oversight:\r\n<p style=\"padding-left: 40px\">\u201c<a href=\"https:\/\/www.ndtv.com\/world-news\/deloittes-ai-fallout-explained-the-440-000-report-that-backfired-9417098\" target=\"_blank\" rel=\"noopener\">Deloitte's AI Fallout Explained: The $440,000 Report That Backfired<\/a>\u201d[footnote]A. Chaturvedi, \u201cDeloitte\u2019s AI fallout explained: The $440,000 report that backfired,\u201d <em>NDTV World<\/em>, 8 Oct. 2025. Available: https:\/\/www.ndtv.com\/world-news\/deloittes-ai-fallout-explained-the-440-000-report-that-backfired-9417098[\/footnote] explains how the consulting company Deloitte had to refund money to the Australian government after a report it created for the government (using ChatGPT) was found to contain \"fabricated academic citations, false references, and a quotation wrongly attributed to a Federal Court judgment.\"<\/p>\r\n<p style=\"padding-left: 40px\"><a href=\"https:\/\/www.damiencharlotin.com\/hallucinations\/\" target=\"_blank\" rel=\"noopener\">AI Hallucination Cases<\/a>[footnote]D. Charlotin, AI Hallucination Cases (online database). 
Available: https:\/\/www.damiencharlotin.com\/hallucinations\/[\/footnote] is a database of legal cases in which lawyers were found to have included AI-hallucinated content (fake cases, citations, and other arguments) in the documents they submitted to the court. Between Feb 2024 and Feb 2026, this database tracked 54 cases in Canada alone, and 754 worldwide!<\/p>\r\n<p style=\"padding-left: 40px\">An academic article proposing a method for using <a href=\"https:\/\/www.nature.com\/articles\/s41598-025-24662-9\" target=\"_blank\" rel=\"noopener\">AI to diagnose Autism<\/a>[footnote]S. Jiang, RETRACTED ARTICLE \u201cBridging the gap: Explainable AI for autism diagnosis and parental support with TabPFNMix and SHAP.\u201d <em>Nature<\/em>, 19 Nov. 2025 (retracted 5 Dec. 2025). Available: https:\/\/www.nature.com\/articles\/s41598-025-24662-9[\/footnote] published in <em>Nature<\/em> in Nov 2025 was retracted in December 2025 after many readers noticed that it contained a nonsensical AI-generated infographic (see <a href=\"https:\/\/www.nature.com\/articles\/s41598-025-24662-9\/figures\/1\" target=\"_blank\" rel=\"noopener\">link to Fig. 1<\/a>) purporting to illustrate the methodology.<\/p>\r\nThe professionals writing the report for Deloitte and the lawyers trying to build cases to defend their clients were (hopefully?) not aware of the potential for AI to hallucinate in this way \u2013 and it likely cost them in terms of professional reputation and financial penalties. <em>Nature<\/em> magazine's reputation for publishing high-quality academic research suffered a blow for allowing AI-generated nonsense to be published on its site.\r\n\r\nThis is why it is important for you to develop AI literacy \u2013 an understanding of how Gen AI works and what its limitations are \u2013 so you don\u2019t make the kinds of mistakes they did. Making those kinds of errors and including fabricated data in an academic context is a violation of your institution's Academic Integrity Policy. 
Making them in the workplace can have dire legal and financial implications!\r\n\r\nHere are some resources to help you understand why LLMs can be unreliable, and how to spot unreliable output:\r\n<ul>\r\n \t<li>This YouTube video, \u201c<a href=\"https:\/\/www.youtube.com\/watch?v=cfqtFvWOfg0\" target=\"_blank\" rel=\"noopener\">Why Large Language Models Hallucinate,<\/a>\u201d provides a clear explanation for why ChatGPT and others generate hallucinations, factual errors, outdated information, and sometimes just plain nonsense.<\/li>\r\n \t<li>\u201c<a href=\"https:\/\/cardcatalogforlife.substack.com\/p\/how-to-spot-ai-hallucinations-like\" target=\"_blank\" rel=\"noopener\">How to Spot AI Hallucinations like a Reference Librarian<\/a>\u201d[footnote]H. L. Goldin, \u201cHow to spot AI hallucinations like a reference librarian,\u201d <em>Card Catalogue<\/em>, 16 Dec. 2025. Available: https:\/\/cardcatalogforlife.substack.com\/p\/how-to-spot-ai-hallucinations-like[\/footnote] is <strong><em>essential reading<\/em><\/strong> if you plan to use AI to help you conduct research and synthesize source material provided by AI into your argument.<\/li>\r\n \t<li>Given the fact that AI enables the creation of misinformation, disinformation, and malinformation (MDM) such as fake news, deepfakes, and so on, many organizations have developed resources to help people vet the credibility of information. Here is information from Camosun College on <a href=\"https:\/\/camosun.libguides.com\/MDM\" target=\"_blank\" rel=\"noopener\">How to Identify Misinformation, Disinformation, and Malinformation<\/a>.<\/li>\r\n<\/ul>\r\n\r\n<hr \/>\r\n\r\n<h1>2. How Does Using Gen AI Impact Learning?<\/h1>\r\nEarly in childhood, our brains produce an over-abundance of neurons and synapses to allow us to prepare for, adapt to, and thrive in a variety of environments. 
During childhood and adolescence, and even into our late twenties, a process of \"<a href=\"https:\/\/www.youtube.com\/watch?v=0S0jKbh6R1I\" target=\"_blank\" rel=\"noopener\">synaptic pruning<\/a>\" takes place, where our brains reduce the excess neurons and synaptic connections based on a \"use it or lose it\" principle. The brain removes weak or unused synapses to focus on strengthening the ones that we use more frequently. Thus, turning to AI to deal with challenging tasks means missing out on the opportunity to strengthen important skills, and you may risk losing crucial cognitive abilities. See <a href=\"https:\/\/momentousinstitute.org\/resources\/what-you-need-to-know-about-brain-pruning-and-ai\" target=\"_blank\" rel=\"noopener\">What You Need to Know about Brain Pruning and AI<\/a>\u00a0for more information.\r\n\r\nWriting courses tend to focus on helping students develop <a href=\"https:\/\/www.yorku.ca\/teachingcommons\/wp-content\/uploads\/sites\/38\/2024\/03\/Food_for_Thought-18-21st-Century-Learning.pdf\" target=\"_blank\" rel=\"noopener\">The 4 Cs of 21st Century Learning Skills<\/a>\u00a0(communication, collaboration, critical thinking, and creativity), and \u201c<a href=\"http:\/\/exploresel.gse.harvard.edu\/frameworks\/67\/\" target=\"_blank\" rel=\"noopener\">habits of mind<\/a>\u201d as part of the process for communicating effectively and problem solving. One of the key habits of mind is \u201cpersistence\u201d \u2013 the ability to keep trying, persevering even though something is difficult, confusing, or even frustrating. This challenging phase \u2013 when you feel the most frustration \u2013 is when learning is actually happening! 
Using Gen AI to <strong><em>offload<\/em><\/strong> the cognitive labour of brainstorming, researching, planning, drafting, and revising circumvents the development of the very cognitive skills that courses like this are meant to help you develop.\r\n\r\nImagine being in an important meeting with colleagues, discussing an emergent issue; the chair of the meeting asks everyone to engage in a brainstorming session to start working on ways to address the problem. If you can\u2019t do this without AI, you won\u2019t be much use at this meeting! I have heard executives say that they would not trust someone who relies on Gen AI to communicate face-to-face with clients. Using AI to do the work for you is like skipping the \u201cbrain training\u201d that helps you develop higher order thinking skills. It\u2019s like skipping the cardio part of your fitness training; AI can\u2019t do your cardio for you!\r\n\r\n&nbsp;\r\n<div class=\"textbox textbox--examples\"><header class=\"textbox__header\">\r\n<p class=\"textbox__title\" style=\"text-align: center\"><strong>What does the research say?\u00a0 Recent studies examine how AI impacts learning<\/strong><\/p>\r\n\r\n<\/header>\r\n<div class=\"textbox__content\">\r\n\r\n\u00a0<a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC11830699\/\" target=\"_blank\" rel=\"noopener\">Klimova and Pikhart<\/a>[footnote]B. Klimova and M. Pikhart, \"Exploring the effects of artificial intelligence on student and academic well-being in higher education: A mini-review.\" <em>Frontiers in Psychology<\/em>, vol. 3(16), 2025. doi: 10.3389\/fpsyg.2025.1498132[\/footnote]\u00a0reviewed 24 studies done between 2019-2024 on the impact of AI on learning. From this research, they concluded that \u201cwhile AI offers benefits such as personalized learning, mental health support, and improved communication efficiency, it also raises concerns regarding digital fatigue, loneliness, technostress, and reduced face-to-face interactions. 
Over-reliance on AI may diminish interpersonal skills and emotional intelligence, leading to social isolation and anxiety.\u201d The research also raises serious concerns about over-reliance and dependence on AI leading to diminished creativity, critical thinking, collaboration, and problem-solving skills.\r\n\r\n<hr \/>\r\n\r\nA 2025 study, <strong><a href=\"https:\/\/www.brainonllm.com\/\" target=\"_blank\" rel=\"noopener\">Your Brain on ChatGPT<\/a><\/strong>,[footnote]N. Kosmyna and E. Hauptmann, \u201c<a href=\"https:\/\/www.brainonllm.com\/\" target=\"_blank\" rel=\"noopener\">Your Brain on ChatGPT: Accumulation of Cognitive Debt when using an AI Assistant for Essay Writing Tasks<\/a>\u201d (online summary), 2025.[\/footnote] used electroencephalography (EEG) to record participants' brain activity to assess their cognitive engagement, cognitive load, and neural activations while engaging in an essay-writing task. They compared the levels of neural connectivity in three groups of students asked to write SAT-style essays: one group used only their brains, one group could use Google Search to look up relevant information, and one group used ChatGPT. The EEG readings showed that the \u201cbrain only\u201d students had the strongest and most distributed neural networks, while those who used AI had the weakest neural connectivity. In post-writing tasks, those who used AI assistance showed poorer memory recall: they were less able to quote from the essay they had just written minutes earlier. 
In follow-up sessions, the group using AI \u201cperformed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring\u201d (Kosmyna &amp; Hauptmann, 2025).\r\n\r\n<hr \/>\r\n\r\nBudzyn et al., in their 2025 <em>Lancet<\/em> article \u201c<a href=\"https:\/\/www.thelancet.com\/journals\/langas\/article\/PIIS2468-12532500133-5\/abstract\" target=\"_blank\" rel=\"noopener\">Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy<\/a>,\u201d[footnote]K. Budzyn et al., \"<a href=\"https:\/\/www.thelancet.com\/journals\/langas\/article\/PIIS2468-12532500133-5\/abstract\" target=\"_blank\" rel=\"noopener\">Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: A multicentre, observational study<\/a>,\" <em>The Lancet: Gastroenterology &amp; Hepatology<\/em>, vol. 10, no. 10, Oct. 2025, pp. 896-903.[\/footnote] found that using AI-based imaging for reading endoscopy results led to potential \u201cdeskilling\u201d of physicians in their ability to identify lesions without AI assistance.\r\n\r\n<\/div>\r\n<\/div>\r\n<h2 style=\"text-align: center\">AI and Writing Skills<\/h2>\r\nWriting skills are developed through deliberate practice. Just as workouts build your actual muscles, writing builds your cognitive muscles. And writing is hard work! It\u2019s a fantastic workout for your brain! If you don\u2019t use the muscles, they atrophy. When I was young, I had dozens of phone numbers memorized for my friends, family, and workplace. Now that I have a cell phone that does this for me, I don\u2019t even remember the phone numbers of my own children! I have lost this cognitive skill, not due to age, but to disuse (synaptic pruning).\r\n\r\nAs with anything related to algorithms, the saying \u201cgarbage in, garbage out\u201d applies. 
To elicit useful output from Gen AI, you have to be able to write clear, concise, concrete, and coherent prompts that include information about purpose, audience, context, and genre. If you don\u2019t have a clear understanding of the task, audience, and rhetorical situation to begin with, or don\u2019t have the requisite writing skills to design effective prompts, you won\u2019t be able to construct prompts that will generate useful content. Even if you do, you will need the knowledge and critical thinking skills to review, evaluate, and revise the generated output to ensure that the content\r\n<ul>\r\n \t<li>Is accurate, reliable, and unbiased<\/li>\r\n \t<li>Meets the stated and implicit requirements of the task<\/li>\r\n \t<li>Follows the genre conventions and expectations<\/li>\r\n \t<li>Uses a suitable tone, style, and vocabulary for your intended audience.<\/li>\r\n<\/ul>\r\nIf you plan to use Gen AI, this is the \u201cdue diligence\u201d required so that you can actively build your skills and knowledge and accurately demonstrate your learning in the course. This kind of vigilance may well be more work than simply doing the work yourself without AI.\r\n<h1>3. Gen AI and Professional Practice<\/h1>\r\nMany workplaces may require you to have proficient AI literacy, but they will also require distinctly human competencies (the 4 Cs mentioned previously). Therefore, you cannot develop one at the expense of the other. But also consider that many organizations are developing policies to protect themselves from problematic AI use, and some organizations prohibit the use of Gen AI altogether. The Fashion Law (TFL) maintains a \u201c<a href=\"https:\/\/www.thefashionlaw.com\/from-chatgpt-to-deepfake-creating-apps-a-running-list-of-key-ai-lawsuits\/\" target=\"_blank\" rel=\"noopener\">running list of key AI lawsuits<\/a>\u201d tracking all cases involving intellectual property and copyright violations. 
The sheer number of ongoing cases demonstrates the need for more robust regulations. Gen AI may or may not live up to the hype currently being generated by the companies building and promoting it, and people are increasingly calling for regulations and even bans.\r\n\r\n&nbsp;\r\n<div class=\"textbox textbox--examples\"><header class=\"textbox__header\">\r\n<p class=\"textbox__title\" style=\"text-align: center\"><strong>What do Professionals Say about Using AI Professionally?<\/strong><\/p>\r\n\r\n<\/header>\r\n<div class=\"textbox__content\">\r\n<p style=\"text-align: start;margin-top: 1em;color: #333333\"><strong><a href=\"https:\/\/leadershiplighthouse.substack.com\/p\/i-went-all-in-on-ai-the-mit-study\" target=\"_blank\" rel=\"noopener\">Josh Anderson<\/a><\/strong>,[footnote]J. Anderson, \"<a href=\"https:\/\/leadershiplighthouse.substack.com\/p\/i-went-all-in-on-ai-the-mit-study\" target=\"_blank\" rel=\"noopener\">I went all in on AI. The MIT Study is right<\/a>.\"\u00a0<em>The Leadership Lighthouse<\/em>\u00a0(Substack), Oct. 2025.[\/footnote] a senior software engineer, describes his experience of going \u201call in\u201d on using AI to build code, only to discover that the\u00a0<a href=\"https:\/\/mlq.ai\/media\/quarterly_decks\/v0.1_State_of_AI_in_Business_2025_Report.pdf\" target=\"_blank\" rel=\"noopener\">MIT Study<\/a>[footnote]A. Challapally et al., \"<a href=\"https:\/\/mlq.ai\/media\/quarterly_decks\/v0.1_State_of_AI_in_Business_2025_Report.pdf\" target=\"_blank\" rel=\"noopener\">The GenAI Divide: State of AI in Business 2025<\/a>.\" MIT Nanda, July 2025.[\/footnote] \u2013 claiming that <strong>95% of corporate AI initiatives fail<\/strong> \u2013 was right! 
He used AI to generate a complex codebase, launched the product much earlier than expected, and everything seemed great, until he needed to make a small change in the code and realized he \u201cwasn\u2019t confident he could do it\u201d:<\/p>\r\n<p style=\"padding-left: 40px\">\u201cTwenty-five years of software engineering experience, and I\u2019d managed to degrade my skills to the point where I felt helpless looking at code I\u2019d directed an AI to write. I\u2019d become a passenger in my own product development.\u201d<\/p>\r\n<p style=\"text-align: start;margin-top: 1em;color: #333333\"><span style=\"font-size: 1em\">Anderson warns that 100% adoption of AI may look successful and efficient at first, but months later, you realize that no one fully understands what the AI built, how it built it, or how to fix or modify it; they can\u2019t debug code they didn\u2019t write, can\u2019t explain decisions they didn\u2019t make, and can\u2019t defend or refine strategies they didn\u2019t develop.<\/span><\/p>\r\n\r\n\r\n<hr \/>\r\n<p style=\"text-align: start;margin-top: 1em;color: #333333\"><a href=\"https:\/\/www.oneusefulthing.org\/p\/centaurs-and-cyborgs-on-the-jagged\" target=\"_blank\" rel=\"noopener\"><strong>Ethan Mollick<\/strong><\/a>[footnote]E. Mollick, \"<a href=\"https:\/\/www.oneusefulthing.org\/p\/centaurs-and-cyborgs-on-the-jagged\" target=\"_blank\" rel=\"noopener\">Centaurs and Cyborgs on the Jagged Frontier.<\/a>\"\u00a0<em>One Useful Thing<\/em>, 2023.[\/footnote] performed experiments in workplaces to see how AI impacted the efficiency and quality of work. His findings were similar to Anderson\u2019s. At first, AI seemed to improve efficiency and quality, and even \u201clevelled up\u201d some employees\u2019 skills. 
However, he found that over time, over-reliance on AI made people \u201ccareless and less skilled in their own judgment.\u201d When workers let AI take over instead of using it as a tool, it negatively impacts human learning, skill development, and productivity. Mollick defined two effective approaches to using AI:<\/p>\r\n<p style=\"padding-left: 40px\"><strong>Centaur Approach:<\/strong>\u00a0invoking the half-human\/half-horse creature of Greek mythology, Mollick asserts that humans should be the \u201chead\u201d of the centaur, making the strategic decisions to determine what \u201cleg work\u201d the AI should do.<\/p>\r\n<p style=\"padding-left: 40px\"><strong>Cyborg Approach<\/strong>: a more collaborative approach, in which humans work in tandem with AI. He recommends this approach for writing tasks, and asserts that when this model is used effectively, the results are better than what either the human or the AI could achieve alone.<\/p>\r\n<p style=\"text-align: start;color: #333333\">He warns against \u201cgoing on autopilot\u201d when using AI and \u201cfalling asleep at the wheel.\u201d This is when people fail to notice the mistakes that AI inevitably makes.<\/p>\r\n\r\n\r\n<hr \/>\r\n<p style=\"text-align: start;color: #333333\"><a href=\"https:\/\/doctorow.medium.com\/https-pluralistic-net-2024-04-01-human-in-the-loop-monkey-in-the-middle-14e72bd46b7a\" target=\"_blank\" rel=\"noopener\"><strong>Cory Doctorow<\/strong><\/a>[footnote]C. Doctorow, \"<a href=\"https:\/\/doctorow.medium.com\/https-pluralistic-net-2024-04-01-human-in-the-loop-monkey-in-the-middle-14e72bd46b7a\" target=\"_blank\" rel=\"noopener\">Humans are not perfectly vigilant, and that\u2019s bad news for AI<\/a>.\"\u00a0<em>Medium<\/em>, April 2024.[\/footnote] warns about the \u201creverse centaur\u201d approach, where AI is in charge, telling the humans what to do, and humans are scrambling to detect and fix all the errors that AI can make at superhuman speed.
The fact that AI is prone to hallucinating makes it a very bad \u201chead\u201d of the centaur: \u201cthe one thing AI is unarguably\u00a0<em>very<\/em>\u00a0good at is producing bullshit at scale.\u201d He acknowledges that the centaur model could offer many benefits to workers, but warns that the path to profitability presented by most companies lies in the\u00a0<em>reverse<\/em>\u00a0centaur model, which will be brutal for workers (think of an Amazon packing warehouse!).<\/p>\r\n\r\n\r\n<hr \/>\r\n<p style=\"text-align: start;color: #333333\"><strong>Michael Alley<\/strong>\u00a0offers some\u00a0<a href=\"https:\/\/www.craftofscientificwriting.org\/examples_ai_writing.html\" target=\"_blank\" rel=\"noopener\">Strong examples of AI Writing in Engineering and Science<\/a>,[footnote]M. Alley, \"<a href=\"https:\/\/www.craftofscientificwriting.org\/examples_ai_writing.html\" target=\"_blank\" rel=\"noopener\">Strong examples of AI Writing in Engineering and Science.<\/a>\"\u00a0<em>Writing as an Engineer or Scientist<\/em>, Penn State, 2025.[\/footnote] but you\u2019ll note when reading that in each case, the humans involved needed to have the expertise and skill to fact-check and revise the AI output to make it suitable for professional purposes and audiences.<\/p>\r\n\r\n<\/div>\r\n<\/div>\r\nClearly, even in the most collaborative AI-use contexts, human oversight is required. And this oversight requires experience and expertise.\u00a0Consider that every time you are using a commercial AI product, you are contributing to its training, and potentially teaching it to do the job you hope to have someday! At the same time, learning how to use it to collaboratively create content that you are ultimately in control of and responsible for will likely be a useful skill in some workplace contexts.\r\n<h1>4.
Costs of Gen AI<\/h1>\r\n<strong><em>Wait, isn\u2019t ChatGPT free?<\/em><\/strong>\r\n\r\nJust because you are able to use GenAI for free does not mean that there are no costs. Indeed, you have to wonder why you are being inundated with advertisements that encourage you to use a product that you can access and use for free -- for now, at least.\r\n\r\nMany people are becoming increasingly concerned about the costs involved in creating the infrastructure and training necessary for AI to operate, as well as the current and potential costs that these systems impose on society and the environment. Some even see AI as a potential existential threat to humanity! The sections below provide some resources that discuss the current and potential costs of AI that we need to consider if we are going to use the technology for tasks like writing.\r\n<h2 style=\"text-align: center\">Environmental Costs<\/h2>\r\nData centres require enormous amounts of energy and water to run and cool the massive servers needed to process all the data and information we request from AI. No one really knows how much energy and water, because the AI companies tend to not want to disclose accurate information.\r\n<p style=\"padding-left: 40px\">Christopher Pollon argues that <a href=\"https:\/\/thewalrus.ca\/ai-environmental-cost\/?utm_source=substack&amp;utm_medium=email\" target=\"_blank\" rel=\"noopener\">Big Tech Is Hiding the Environmental Cost of Chatbots<\/a>,[footnote]C. Pollon, \"<a href=\"https:\/\/thewalrus.ca\/ai-environmental-cost\/?utm_source=substack&amp;utm_medium=email\" target=\"_blank\" rel=\"noopener\">Big Tech Is Hiding the Environmental Cost of Chatbots<\/a>.\" <em>The Walrus<\/em>, Oct. 2025.[\/footnote] making it difficult to manage resources or measure and plan for the environmental impacts. Pollon cites a report calculating that 30% of the electricity used by data centres worldwide comes from coal-powered plants. In the U.S.
and China, the largest AI data centre markets by far, \u201cmost of the electricity consumed by data centres is produced from fossil fuels.\u201d<\/p>\r\n<p style=\"padding-left: 40px\"><a href=\"https:\/\/www.nature.com\/articles\/d41586-024-00478-x\" target=\"_blank\" rel=\"noopener\">Kate Crawford's 2024 article<\/a>[footnote]K. Crawford, \"<a href=\"https:\/\/www.nature.com\/articles\/d41586-024-00478-x\" target=\"_blank\" rel=\"noopener\">Generative AI\u2019s environmental costs are soaring \u2014 and mostly secret<\/a>,\" <em>Nature<\/em>, Feb. 2024.[\/footnote] focuses on the massive water requirements of AI data centres. Scientists have estimated their water usage by combining lab-based studies with the limited information these companies actually report. AI companies are not legally required to disclose this information, and there is no incentive for them to do so.<\/p>\r\nFor some additional context, read <a href=\"https:\/\/cee.illinois.edu\/news\/AIs-Challenging-Waters\" target=\"_blank\" rel=\"noopener\">AI\u2019s Challenging Waters<\/a>[footnote]A. Privette, \"<a href=\"https:\/\/cee.illinois.edu\/news\/AIs-Challenging-Waters\" target=\"_blank\" rel=\"noopener\">AI's challenging waters,<\/a>\" University of Illinois - Civil and Environmental Engineering, Center for Secure Water, Oct. 2024.[\/footnote] (Privette, 2024) and watch this <em>YouTube<\/em> video, <a href=\"https:\/\/www.youtube.com\/watch?v=SGHk3zE5xh4\" target=\"_blank\" rel=\"noopener\">A \u2018Thirsty\u2019 AI Boom Could Deepen Big Tech\u2019s Water Crisis<\/a> (CNBC International, Dec. 2023).
<a href=\"https:\/\/thewalrus.ca\/ai-environmental-cost\/?utm_source=substack&amp;utm_medium=email\" target=\"_blank\" rel=\"noopener\">Pollon (2025)<\/a> describes one case where the environmental issues impacted a community:\r\n<p style=\"padding-left: 40px\">\u201cElon Musk\u2019s xAI shined a spotlight on the Wild West of backup data-centre power systems about a year ago, when it established dozens of portable methane gas generators at a big data centre in Memphis. Up to thirty-five generators were on site without a permit\u2014until members of a poor downwind Black community rose up in response to the emissions.\u201d<\/p>\r\n<a href=\"https:\/\/www.nature.com\/articles\/d41586-024-00478-x\" target=\"_blank\" rel=\"noopener\">Crawford (2024)<\/a> reported that \u201cin West Des Moines, Iowa, a giant data-centre cluster serves OpenAI\u2019s most advanced model, GPT-4. A lawsuit by local residents revealed that in July 2022, the month before OpenAI finished training the model, the cluster used about 6% of the district\u2019s water.\u201d That was just the training phase. Once fully operational, the \u201cinferencing\u201d stage may require substantially more resources.\r\n\r\n<a href=\"https:\/\/www.iedconline.org\/clientuploads\/EDRP%20Logos\/AI_Impact_on_Labor_Markets.pdf\" target=\"_blank\" rel=\"noopener\">The International Economic Development Council<\/a> (IEDC) published a literature review in March 2025 examining how AI will impact labour markets.[footnote]IEDC, <a href=\"https:\/\/www.iedconline.org\/clientuploads\/EDRP%20Logos\/AI_Impact_on_Labor_Markets.pdf\" target=\"_blank\" rel=\"noopener\"><em>Artificial Intelligence Impacts on Labour Markets: Literature Review<\/em>,<\/a> March 2025.[\/footnote] They predict which jobs are most likely to be lost and gained in the coming years, and discuss the pros and cons of integrating AI into the workplace.
While AI might improve efficiency, job quality, and innovation, it may also lead to job displacement, deskilling, and inequality, and have a disproportionate effect on vulnerable groups.\r\n\r\nWe can already see serious labour issues in the current work required to train AI models.\r\n<p style=\"padding-left: 40px\">Billy Perrigo\u2019s 2023 <em>Time Magazine<\/em> article drew attention to <a href=\"https:\/\/time.com\/6247678\/openai-chatgpt-kenya-workers\/\" target=\"_blank\" rel=\"noopener\">labour exploitation in Kenya<\/a>[footnote]B. Perrigo, \"<a href=\"https:\/\/time.com\/6247678\/openai-chatgpt-kenya-workers\/\" target=\"_blank\" rel=\"noopener\">OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.<\/a>\" <em>Time Magazine<\/em>, Jan. 2023.[\/footnote] where OpenAI hired workers for less than $2\/hour to sift through the training material to identify and remove toxic language and images to make ChatGPT \u201csafe\u201d for users.<\/p>\r\n<p style=\"padding-left: 40px\">Daxia Rojas, in a 2025 Bloomberg article, explains other instances of the \u201c<a href=\"https:\/\/www.bnnbloomberg.ca\/business\/artificial-intelligence\/2025\/10\/16\/gruelling-low-paid-human-work-behind-generative-ai-curtain\/\" target=\"_blank\" rel=\"noopener\">Gruelling low paid human work behind generative AI curtain<\/a>.\u201d[footnote]D. Rojas, \"<a href=\"https:\/\/www.bnnbloomberg.ca\/business\/artificial-intelligence\/2025\/10\/16\/gruelling-low-paid-human-work-behind-generative-ai-curtain\/\" target=\"_blank\" rel=\"noopener\">Gruelling, low-paid human work behind generative AI curtain<\/a>.\" BNN Bloomberg, Oct. 2025.[\/footnote] As long as Generative AI models are based on automated learning, they rely on sub-contracting millions of human beings to verify and label the data that trains them.
This can be anything from helping self-driving cars learn to distinguish between images of trees and pedestrians, to reviewing autopsy reports, to removing violent or obscene content from social media. Because the industry has no significant regulation, data labellers tend to be young, work long hours for very low pay, and have precarious work conditions.<\/p>\r\n<p style=\"padding-left: 40px\">Lawsuits have been brought against companies claiming that workers are exposed to traumatizing content without adequate safeguards. For example, one worker claimed they were \u201crequired to converse with an AI chatbot about topics such as \u2018How to commit suicide?\u2019, \u2018How to poison a person?\u2019 or \u2018How to murder someone?\u2019\u201d Others are required to examine and tag pictures of dead bodies, sexually abusive and violent images and videos, and other traumatizing content for hours on end.<\/p>\r\nThere is a worrisome tendency to \"<a href=\"https:\/\/medium.com\/human-centered-ai\/on-ai-anthropomorphism-abff4cecc5ae\" target=\"_blank\" rel=\"noopener\">anthropomorphize\" AI agents,<\/a> that is, to attribute human characteristics, motivations, and emotions (such as empathy) to chatbots. Even though they are designed to seem \"human-like,\" AI agents do not think, reason, or feel emotions as humans do. Current chatbots are designed to please, or even flatter the user, not interact in truly meaningful ways. 
While anthropomorphism can make technology <em>feel<\/em> more engaging and user-friendly, it can result in people trusting unreliable information and lead to unhealthy social relationships.\r\n\r\nThe\u00a0<a href=\"https:\/\/futureoflife.org\/open-letter\/ai-principles\/\" target=\"_blank\" rel=\"noopener\">Asilomar AI Principles<\/a> propose guidelines that should be put in place to ensure that AI is intentionally developed in a way that will be <em><strong>beneficial<\/strong><\/em> and not simply an \"undirected intelligence\" motivated purely by profit.\r\n<h2 style=\"text-align: center\">Existential Threats<\/h2>\r\nThe speed at which AI is being developed and released led over 100 leaders in AI technology to write\u00a0<a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\" target=\"_blank\" rel=\"noopener\">an open letter<\/a>\u00a0in March 2023 urging a global pause on AI training of systems more powerful than GPT-4, or a government-imposed moratorium. The purpose of this pause would be to temporarily halt the \u201carms race\u201d of AI development in order to create a set of shared protocols, regulations, governance structures, and oversight bodies for advanced AI development that would protect humanity from potential harm that we cannot even predict at this point, let alone control.\r\n<div class=\"textbox shaded\">\r\n\r\n\"As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop ever more powerful digital minds that no one -- not even their creators -- can understand, predict, or reliably control.\"\r\n<p style=\"text-align: right\">Excerpt from <a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\" target=\"_blank\" rel=\"noopener\"><em>Pause Giant AI Experiments: An Open Letter<\/em><\/a><\/p>\r\n\r\n<\/div>\r\nYuval Noah Harari, in his speech at Davos (linked below), warns us about the consequences of AI taking over all aspects of society that are made up of words (legal systems, religious systems, <em>etc<\/em>.), and especially of the dangers that might arise from AI agents being granted legal rights as persons who can own property, open bank accounts, run corporations, and contribute to political campaigns. You can watch his speech here:\r\n\r\n&nbsp;\r\n\r\n[embed]https:\/\/www.youtube.com\/watch?v=QiT2yK-5-yg[\/embed]\r\n<h1>5. Ethical Approaches to Gen AI<\/h1>\r\nDeciding whether or not to use Gen AI to help you with your assignments means undertaking a highly complex \"cost\/benefit analysis\" that will, at least in part, be based on very personal ethical choices. If you do choose to use it, here are some guidelines to follow to help you use it responsibly and ethically.\r\n<div class=\"textbox textbox--key-takeaways\"><header class=\"textbox__header\">\r\n<p class=\"textbox__title\" style=\"text-align: center\"><strong>Guidelines for Responsible and Ethical Use of Gen AI<\/strong><\/p>\r\n\r\n<\/header>\r\n<div class=\"textbox__content\">\r\n<ol>\r\n \t<li>Review your institution's policy on AI use; you might find this embedded in the Academic Integrity Policy in a university, or it might be a separate policy.
This will apply to all members of the institution.<\/li>\r\n \t<li>Review the Syllabus or Course Outline for course-based policies on AI use that may be more specific than the institutional policies (departments within organizations may have different expectations and regulations). If there is no policy in the syllabus, ask your instructor for guidance on what their expectations are around use of AI tools.<\/li>\r\n \t<li>Carefully read the assignment instructions to see if there is any specific guidance on use of AI tools. Be sure to abide by the expectations provided. If there are none, again, ask your instructor or supervisor for guidance before using AI.<\/li>\r\n \t<li>Attend workshops and seek instruction on how to use Gen AI effectively and ethically. For example, your library may offer workshops on <strong>Prompt Design<\/strong> and <strong>Using AI for Research<\/strong>.<\/li>\r\n \t<li>Do your \"due diligence\" by reviewing any AI generated content for errors, inaccuracies, biases, hallucinations and any other form of \"confidently presented bullshit.\" You are responsible for fact checking, evaluating, and revising the content to meet the needs of your task and audience. You are responsible for the work you submit; this means that submitting work that contains errors and fabricated data -- even if these were generated by AI -- will lead to consequences for <em><strong>you<\/strong><\/em>.<\/li>\r\n \t<li>Be sure to cite and document how you have used AI in the creation of your assignment. You may be asked to include a \"Use of AI Disclosure\" statement appended to your work, so be prepared to include relevant information about which AI tools you used, how you used them, and how you adapted the AI output. Keep in mind that AI generated content <strong>cannot be considered <em>your<\/em> work<\/strong>, and you cannot ethically submit it as your work.
You must cite it appropriately, using the citational practices required.<\/li>\r\n \t<li>Never feed someone else's work (their intellectual property) into an AI prompt without their explicit permission. Some people do not want their intellectual property given away to commercial AI companies to use as free training data.<\/li>\r\n<\/ol>\r\n<\/div>\r\n<\/div>\r\n&nbsp;\r\n\r\nIf you would like more guidance on how you might ethically and effectively use Gen AI as part of your professional writing practice, I suggest Potter and Hylton\u2019s <a href=\"https:\/\/pressbooks.atlanticoer-relatlantique.ca\/sctechnicalwriting\/part\/generative-ai-in-technical-communication\/\" target=\"_blank\" rel=\"noopener\">Generative AI in Content Creation<\/a>[footnote]R.L. Potter and T. Hylton, \"<a href=\"https:\/\/pressbooks.atlanticoer-relatlantique.ca\/sctechnicalwriting\/part\/generative-ai-in-technical-communication\/\" target=\"_blank\" rel=\"noopener\">Generative AI in Content Creation<\/a>,\" <em>Technical Writing Essentials NCSS Edition<\/em>.[\/footnote] (an adaptation of this textbook that includes instruction on how to use Gen AI as part of your writing process), as well as resources and workshops offered by your university\u2019s library.\r\n<div class=\"textbox textbox--exercises\"><header class=\"textbox__header\">\r\n<p class=\"textbox__title\" style=\"text-align: center\"><strong>Exercises and Activities<\/strong><\/p>\r\n\r\n<\/header>\r\n<div class=\"textbox__content\">\r\n<ol>\r\n \t<li><strong>AI Use Cases<\/strong>: Form a group and discuss whether and how you have used Gen AI tools in the past to help you with various tasks. What AI tools have you used and how? For example, brainstorming, doing background research, planning\/outlining, drafting content, revising content, getting feedback on your content, editing content, finding and integrating research sources, citing sources, creating graphics or data visualizations, other uses?
Or do you refuse to use AI? Discuss amongst yourselves if and how you have used AI in these or other ways, and how effective it was, what kind of additional \u201chuman\u201d work you had to do, and what you learned from the process.<\/li>\r\n \t<li><strong>Use of AI Policy<\/strong>: If you are working on a team project, develop a detailed \u201cUse of AI Policy\u201d that all team members agree to abide by while working on the project. Make sure your policy is consistent with your course and university policies.<\/li>\r\n \t<li><strong>Cost\/Benefit Analysis<\/strong>: Conduct an informal cost\/benefit analysis to determine whether the potential benefits of using Gen AI outweigh the known (and potential) costs.<\/li>\r\n \t<li><strong>Learning Goals<\/strong>: Identify 3 key learning goals that you have related to developing professional communication skills. How might using Gen AI tools either support or circumvent your achievement of those goals?<\/li>\r\n \t<li><strong>SWOT Analysis<\/strong>: Based on what you now know about Gen AI, conduct a SWOT Analysis to determine the Strengths, Weaknesses, Opportunities and Threats involved in using Gen AI as part of your writing process and workflow.<\/li>\r\n \t<li><strong>AI Usage Label<\/strong>: Use this <a href=\"https:\/\/ailabel.netlify.app\/\" target=\"_blank\" rel=\"noopener\">AI Usage Label generator<\/a> to create a label (like a nutritional label on a food product) to indicate where and how you have used AI in a specific document.<\/li>\r\n<\/ol>\r\n<img class=\"alignnone wp-image-637 size-large\" src=\"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-content\/uploads\/sites\/2569\/2026\/02\/AI-Usage-Label-479x1024.png\" alt=\"A label, resembling a nutritional label on a food container, indicating how much AI generated content is included in the document.\" width=\"479\" height=\"1024\" \/>\r\n\r\n&nbsp;\r\n\r\n<\/div>\r\n<\/div>\r\n&nbsp;","rendered":"<p style=\"text-align: 
left\"><em>NOTE:\u00a0 AI technology is moving fast (and likely breaking things), so as I write this in spring 2026, I acknowledge that both AI technology and the research on the impacts of that technology will develop rapidly, and this chapter may soon require updating.<\/em><\/p>\n<hr \/>\n<p>Generative AI may well become a legitimate part of professional writing practice; indeed, it already has in some quarters. Given that technical writing often follows\u00a0 well-defined genre conventions and patterns, Large Language Models (LLMs), which are very good at replicating patterns, may be an effective tool to help technical writers work efficiently in some contexts. However, the need for precision and accuracy in technical writing means that humans need to exercise extreme caution and diligent oversight when using AI generated content. This textbook does not offer instruction on \u201chow to use Gen AI\u201d to help you with your professional communication. There are three main reasons for this:<\/p>\n<ul>\n<li>As students and aspiring professionals, it&#8217;s crucial to develop the distinctly human competencies involved in professional communication practices, even if sometime in the future, you use Gen AI to help you do this work. Increasingly, research is showing that using AI at the early stages of developing these competencies can negatively impact your cognitive development, and over-reliance can result in &#8220;de-skilling&#8221; and cognitive decline.<\/li>\n<li>Content generated by AI is prone to biases, errors, and fabrications that require skilled human oversight (the kinds of skills you are meant to develop in a university writing course) to detect and correct.<\/li>\n<li>As an author and educator, I have ethical concerns about the social and environmental costs of using Gen AI as part of my professional teaching and writing practice. 
Many of my students have expressed similar concerns.<\/li>\n<\/ul>\n<p>Since the free public release of ChatGPT and other commercial AI tools in 2022, many students have chosen to use Generative AI to help them complete assignments in school. At the university level, we are beginning to see serious problems resulting from students\u2019 over-reliance on these tools, in terms of <a href=\"https:\/\/arxiv.org\/abs\/2506.08872\" target=\"_blank\" rel=\"noopener\">eroding high-level cognitive skills<\/a> <a class=\"footnote\" title=\"N. Kosmyna et al., \u201cYour Brain on ChatGPT: Accumulation of Cognitive Debt when using an AI Assistant for Essay Writing Tasks.&quot; arXiv:2506.08872v2, Dec. 2025.\" id=\"return-footnote-315-1\" href=\"#footnote-315-1\" aria-label=\"Footnote 1\"><sup class=\"footnote\">[1]<\/sup><\/a> such as reasoning, critical thinking, problem-solving, brainstorming, researching, collaborating, and communicating. The development of these cognitive skills is necessary for success in our information ecology.<\/p>\n<p>Students\u2019 inappropriate use of AI is also resulting in <a href=\"https:\/\/www.insidehighered.com\/news\/students\/academics\/2025\/05\/20\/experts-weigh-everyone-cheating-college#:~:text=But%20the%20student%20data%20paints,recognizing%20generative%20Al%E2%80%93created%20content.\" target=\"_blank\" rel=\"noopener\">a dramatic increase in academic integrity violations<\/a> <a class=\"footnote\" title=\"C. Flaherty, &quot;AI and Threats to Academic Integrity: What to Do.&quot; Inside Higher Ed. 20 May, 2025.\" id=\"return-footnote-315-2\" href=\"#footnote-315-2\" aria-label=\"Footnote 2\"><sup class=\"footnote\">[2]<\/sup><\/a>. These violations happen for a variety of reasons, but often result from a poor understanding of what Gen AI is and what its limitations are.<\/p>\n<p>Whether or not you intend to use Gen AI, it is important to develop Gen AI literacy. 
Therefore, this chapter provides information to help you critically consider the following questions:<\/p>\n<div class=\"textbox shaded\">\n<ol>\n<li>What is Gen AI? How does it work? What are its limitations?<\/li>\n<li>How could using Gen AI impact your learning?<\/li>\n<li>What are the implications of using Gen AI as part of professional practice?<\/li>\n<li>What are the larger social and environmental impacts of using Gen AI?<\/li>\n<li>How can Gen AI be used ethically and responsibly?<\/li>\n<\/ol>\n<\/div>\n<p><strong>NOTE<\/strong>: You may choose <strong>NOT<\/strong> to use AI for a variety of reasons, and that is a perfectly valid choice. An increasing number of professionals and students are <a href=\"https:\/\/www.bbc.com\/news\/articles\/c15q5qzdjqxo\" target=\"_blank\" rel=\"noopener\">refusing to use AI<\/a>. It is still important to understand how AI is being used and the impacts it is having, since your classmates and future colleagues may be using GenAI, and this might impact you in the classroom and workplace.<\/p>\n<hr \/>\n<h1>1. 
What is Generative AI?<\/h1>\n<p>Yuval Noah Harari, author of <em>Sapiens<\/em> and <em>Nexus: A Brief History of Information Networks from the Stone Age to AI<\/em>, defines AI in general as not just a &#8220;tool&#8221; we might use, but as an &#8220;agent,&#8221; something that can \u201clearn and change by itself and come up with decisions and ideas that we can\u2019t anticipate.\u201d He argues that instead of thinking of this as \u201cartificial\u201d intelligence (suggesting that we have some control over it) we should think of it as \u201calien\u201d intelligence, one that works very differently from our own and can make unilateral decisions that may have serious impacts on us.<a class=\"footnote\" title=\"Yuval Noah Harari: Why advanced societies fall for mass delusion, Big Think, Jan 2026\" id=\"return-footnote-315-3\" href=\"#footnote-315-3\" aria-label=\"Footnote 3\"><sup class=\"footnote\">[3]<\/sup><\/a> Mackenzie <em>et al<\/em>. (2024), in their comprehensive <a href=\"https:\/\/uwaterloo.ca\/associate-vice-president-academic\/sites\/default\/files\/uploads\/documents\/genai-overview-final-june-2024.pdf\" target=\"_blank\" rel=\"noopener\">Generative Artificial Intelligence (Gen AI) Overview<\/a>, define GenAI more specifically as a type of artificial intelligence that can generate new content such as text, code, images, audio, and video by extrapolating from its training data.<\/p>\n<p>Text-based Gen AI is powered by a <strong>Large Language Model<\/strong> (LLM), which analyzes vast amounts of training data to learn the statistical relationships between words and phrases occurring together, and uses the training data to generate &#8220;natural language&#8221; responses to prompts that rely on \u201cprobability\u201d \u2013 somewhat like how your cell phone offers predictive text as you type, based on the probability of what your next word is likely to be. 
Therefore, it&#8217;s not &#8220;thinking&#8221; or creating &#8220;meaning,&#8221; but rather generating probabilistic word chains to provide the most predictable and plausible-sounding response, based on the material in the training data. It can scan the training data very quickly, and confidently provide plausible-sounding results, but what it generates is simply based on the most commonly found combinations of words and phrases, not necessarily the most accurate information, making it necessary for <strong><em>you<\/em><\/strong> to review, fact-check, and likely revise the output.<\/p>\n<p>Consider this quotation from the 2024 book <em>AI Snake Oil<\/em>, which attempts to counteract some of the AI &#8220;hype&#8221; we are constantly exposed to.<\/p>\n<div class=\"textbox shaded\">\n<p style=\"text-align: left\">&#8220;Philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth. In this sense, chatbots are bullshitters. They are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic. But there is no source of truth during training. Even if AI developers were to somehow accomplish the exceedingly implausible task of filtering the dataset to only contain true statements, it wouldn&#8217;t matter. The model cannot memorize all those facts; it can only learn the patterns and remix them when generating text. 
So, many of the statements it generates would be false.&#8221;<\/p>\n<p style=\"text-align: right\">(Narayanan &amp; Kapoor, <a href=\"https:\/\/press.princeton.edu\/books\/hardcover\/9780691249131\/ai-snake-oil?srsltid=AfmBOorGQ3Np_klH0ajVxWav5jesXbNMNX_RbkYpbbsU4a6WZBViFrnP\" target=\"_blank\" rel=\"noopener\"><em>AI Snake Oil<\/em>,<\/a>\u00a0 2024).<\/p>\n<\/div>\n<p>Recent research on AI has highlighted several serious concerns discussed below.<\/p>\n<h2 style=\"text-align: center\">Language Homogenization<\/h2>\n<p>There is significant worry that widespread use of LLMs will result in a <a href=\"https:\/\/www.newyorker.com\/culture\/infinite-scroll\/ai-is-homogenizing-our-thoughts\" target=\"_blank\" rel=\"noopener\">homogenization of language and thought<\/a><a class=\"footnote\" title=\"K. Chayka, &quot;A.I. is Homogenizing our Thoughts: Recent studies suggest that tools such as ChatGPT make our brains less active and our writing less original.&quot; The New Yorker, 25 June 2025.\" id=\"return-footnote-315-4\" href=\"#footnote-315-4\" aria-label=\"Footnote 4\"><sup class=\"footnote\">[4]<\/sup><\/a> based on the probabilistic model it uses. Increasingly, the training data used by LLMs will include content generated by LLMs, creating a vicious cycle of language homogenization where AI is simply regurgitating new versions of its own previously-generated content. A key goal in honing your communications skills is to develop your own &#8220;voice&#8221; and writing processes. While using AI may help you &#8220;sound more professional&#8221;, it also robs you of the opportunity to develop your own voice and style, as well as a sense of how to adapt it for different audiences, and can have the effect of <a href=\"https:\/\/www.asccc.org\/content\/chatgpt-and-homogenization-language-how-adoption-ai-silences-student-voices\" target=\"_blank\" rel=\"noopener\">silencing student voices<\/a>.<a class=\"footnote\" title=\"J. Kaiser and T.J. 
Richmond, &quot;ChatGPT and the Homogenization of Language: How the Adoption of AI Silences Student Voices.&quot; Academic Senate for California Community Colleges, Nov 2024.\" id=\"return-footnote-315-5\" href=\"#footnote-315-5\" aria-label=\"Footnote 5\"><sup class=\"footnote\">[5]<\/sup><\/a><\/p>\n<h2 style=\"text-align: center\">Privacy Issues<\/h2>\n<p>LLMs like ChatGPT are trained on a wide range of sources, including books, reports, datasets, and code, but increasingly, much of the training data is coming from websites and social media, especially sites like <a href=\"https:\/\/www.perrill.com\/why-is-reddit-cited-in-llms\/\" target=\"_blank\" rel=\"noopener\">Reddit, which signed a lucrative contract<\/a><a class=\"footnote\" title=\"J. Jones, \u201cWhy Reddit is frequently cited by Large Language Models,\u201d Perrill (online), 23 Sept. 2025. Available: https:\/\/www.perrill.com\/why-is-reddit-cited-in-llms\/\" id=\"return-footnote-315-6\" href=\"#footnote-315-6\" aria-label=\"Footnote 6\"><sup class=\"footnote\">[6]<\/sup><\/a> to allow its content to be used as training material. Many of these \u201cdatasets\u201d may not be considered sources of accurate or credible information, but they do provide examples of \u201cnatural language patterns\u201d for AI to emulate. For a while, OpenAI was even using prompts and interactions created by users of ChatGPT as training material, resulting in <a href=\"https:\/\/arstechnica.com\/tech-policy\/2025\/08\/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results\/\" target=\"_blank\" rel=\"noopener\">private chats showing up in Google searches<\/a>!<a class=\"footnote\" title=\"A. Belanger, \u201cChatGPT users shocked to learn their chats were in Google search results,\u201d Ars Technica, 1 Aug. 2025. 
Available: https:\/\/arstechnica.com\/tech-policy\/2025\/08\/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results\/\" id=\"return-footnote-315-7\" href=\"#footnote-315-7\" aria-label=\"Footnote 7\"><sup class=\"footnote\">[7]<\/sup><\/a> Many users were outraged at the public release of their private chats. Data privacy is a major concern when using commercial AI tools. They may be &#8220;free to use,&#8221; but these companies are collecting copious amounts of personal data from users! And it&#8217;s been said that &#8220;data is the new oil.&#8221; Also consider the possibility that every time you use AI, you are training it, possibly to take the job you hope to get when you finish university!<\/p>\n<h2 style=\"text-align: center\">Bias<\/h2>\n<p>The content generated in response to a user&#8217;s prompt is based on datasets that may not always present accurate or reliable information. LLMs can also replicate and even amplify biases inherent in the training data the AI is learning from. Data containing content that discriminates against or marginalizes underrepresented, minority, and equity-deserving groups may appear and be amplified in AI-generated outputs. As a result, outputs may be racist, sexist, ageist, ableist, homophobic, transphobic, antisemitic, Islamophobic, xenophobic, deceitful, derogatory, culturally insensitive, and\/or hostile. While companies like OpenAI have made attempts to address this, the risks persist. 
For a visually illustrated example, watch this YouTube video on \u201cHow AI Image Generators Make Bias Worse.\u201d<\/p>\n<p>&nbsp;<\/p>\n<p><iframe loading=\"lazy\" id=\"oembed-1\" title=\"How AI Image Generators Make Bias Worse\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/L2sQRrf1Cd8?feature=oembed&#38;rel=0\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<h2 style=\"text-align: center\">Errors, Inaccuracies, and &#8220;Hallucinations&#8221;<\/h2>\n<p>Probably the most concerning issue for you as students, and for professionals in any workplace, is the fact that content generated by LLMs can be highly unreliable, even though it <em>appears<\/em> plausible and professional. A layperson might not immediately spot any problems in the generated content, but an expert in the field would quickly notice errors, inaccuracies, and outright fabrications. Below are some recent examples that provide cautionary tales about why we must use Gen AI with extreme caution and exercise diligent human oversight:<\/p>\n<p style=\"padding-left: 40px\">\u201c<a href=\"https:\/\/www.ndtv.com\/world-news\/deloittes-ai-fallout-explained-the-440-000-report-that-backfired-9417098\" target=\"_blank\" rel=\"noopener\">Deloitte&#8217;s AI Fallout Explained: The $440,000 Report That Backfired<\/a>\u201d<a class=\"footnote\" title=\"A. Chaturvedi, \u201cDeloitte\u2019s AI fallout explained: The $440,000 report that backfired.\u201d NDTV World, 8 Oct. 2025. 
Available: https:\/\/www.ndtv.com\/world-news\/deloittes-ai-fallout-explained-the-440-000-report-that-backfired-9417098\" id=\"return-footnote-315-8\" href=\"#footnote-315-8\" aria-label=\"Footnote 8\"><sup class=\"footnote\">[8]<\/sup><\/a> explains how the consulting firm Deloitte had to refund money to the Australian government after a report it created for them (using ChatGPT) was found to contain &#8220;fabricated academic citations, false references, and a quotation wrongly attributed to a Federal Court judgment.&#8221;<\/p>\n<p style=\"padding-left: 40px\"><a href=\"https:\/\/www.damiencharlotin.com\/hallucinations\/\" target=\"_blank\" rel=\"noopener\">AI Hallucination Cases<\/a><a class=\"footnote\" title=\"D. Charlotin, AI Hallucination Cases (online database). Available: https:\/\/www.damiencharlotin.com\/hallucinations\/\" id=\"return-footnote-315-9\" href=\"#footnote-315-9\" aria-label=\"Footnote 9\"><sup class=\"footnote\">[9]<\/sup><\/a> is a database of legal cases in which lawyers were found to have included AI-hallucinated content (fake cases, citations, and other arguments) in the documents they submitted to the court. Between Feb 2024 and Feb 2026, this database tracked 54 cases in Canada alone, and 754 worldwide!<\/p>\n<p style=\"padding-left: 40px\">An academic article proposing a method for using <a href=\"https:\/\/www.nature.com\/articles\/s41598-025-24662-9\" target=\"_blank\" rel=\"noopener\">AI to diagnose Autism<\/a><a class=\"footnote\" title=\"S. Jiang, RETRACTED ARTICLE \u201cBridging the gap: Explainable AI for autism diagnosis and parental support with TabPFNMix and SHAP.\u201d Nature, 19 Nov. 2025 (retracted 5 Dec. 2025). 
Available: https:\/\/www.nature.com\/articles\/s41598-025-24662-9\" id=\"return-footnote-315-10\" href=\"#footnote-315-10\" aria-label=\"Footnote 10\"><sup class=\"footnote\">[10]<\/sup><\/a> published in <em>Nature <\/em>in Nov 2025 was retracted in December 2025 after many readers noticed that it contained a nonsensical AI-generated infographic (see <a href=\"https:\/\/www.nature.com\/articles\/s41598-025-24662-9\/figures\/1\" target=\"_blank\" rel=\"noopener\">link to Fig. 1<\/a>) purporting to illustrate the methodology.<\/p>\n<p>The professionals writing the report for Deloitte and the lawyers trying to build cases to defend their clients were (hopefully?) not aware of the potential for AI to hallucinate in this way \u2013 and it likely cost them in terms of professional reputation and financial penalties. <em>Nature<\/em>&#8217;s reputation for publishing high-quality academic research suffered a blow for allowing AI-generated nonsense to be published on its site.<\/p>\n<p>This is why it is important for you to develop AI literacy \u2013 an understanding of how Gen AI works and what its limitations are \u2013 so you don\u2019t make the kinds of mistakes they did. Making those kinds of errors and including fabricated data in an academic context is a violation of your institution&#8217;s Academic Integrity Policy. 
Making them in the workplace can have dire legal and financial implications!<\/p>\n<p>Here are some resources to help you understand why LLMs can be unreliable, and how to spot unreliable output:<\/p>\n<ul>\n<li>This YouTube video, \u201c<a href=\"https:\/\/www.youtube.com\/watch?v=cfqtFvWOfg0\" target=\"_blank\" rel=\"noopener\">Why Large Language Models Hallucinate,<\/a>\u201d provides a clear explanation for why ChatGPT and other LLMs generate hallucinations, factual errors, outdated information, and sometimes just plain nonsense.<\/li>\n<li>\u201c<a href=\"https:\/\/cardcatalogforlife.substack.com\/p\/how-to-spot-ai-hallucinations-like\" target=\"_blank\" rel=\"noopener\">How to Spot AI Hallucinations like a Reference Librarian<\/a>\u201d<a class=\"footnote\" title=\"H. L. Goldin, \u201cHow to spot AI hallucinations like a reference librarian,\u201d Card Catalogue, 16 Dec. 2025. Available: https:\/\/cardcatalogforlife.substack.com\/p\/how-to-spot-ai-hallucinations-like\" id=\"return-footnote-315-11\" href=\"#footnote-315-11\" aria-label=\"Footnote 11\"><sup class=\"footnote\">[11]<\/sup><\/a> is <strong><em>essential reading<\/em><\/strong> if you plan to use AI to help you conduct research and synthesize AI-provided source material into your argument.<\/li>\n<li>Because AI enables the creation of misinformation, disinformation and malinformation (MDM) such as fake news, deep fakes and so on, many organizations have developed resources to help people vet the credibility of information.\u00a0 Here is information from Camosun College on <a href=\"https:\/\/camosun.libguides.com\/MDM\" target=\"_blank\" rel=\"noopener\">how to Identify Misinformation, Disinformation, and Malinformation<\/a>.<\/li>\n<\/ul>\n<hr \/>\n<h1>2. How Does Using Gen AI Impact Learning?<\/h1>\n<p>Early in childhood, our brains produce an over-abundance of neurons and synapses to allow us to prepare for, adapt to, and thrive in a variety of environments. 
During childhood and adolescence, and even into our late twenties, a process of &#8220;<a href=\"https:\/\/www.youtube.com\/watch?v=0S0jKbh6R1I\" target=\"_blank\" rel=\"noopener\">synaptic pruning<\/a>&#8221; takes place, where our brains reduce the excess neurons and synaptic connections based on a &#8220;use it or lose it&#8221; principle. The brain removes weak or unused synapses to focus on strengthening the ones that we use more frequently. Thus, turning to AI to deal with challenging and difficult tasks means missing out on the opportunity to strengthen important skills, and you may risk losing crucial cognitive abilities. See <a href=\"https:\/\/momentousinstitute.org\/resources\/what-you-need-to-know-about-brain-pruning-and-ai\" target=\"_blank\" rel=\"noopener\">What You Need to Know about Brain Pruning and AI<\/a>\u00a0for more information.<\/p>\n<p>Writing courses tend to focus on helping students develop <a href=\"https:\/\/www.yorku.ca\/teachingcommons\/wp-content\/uploads\/sites\/38\/2024\/03\/Food_for_Thought-18-21st-Century-Learning.pdf\" target=\"_blank\" rel=\"noopener\">The 4 Cs of 21st Century Learning Skills<\/a>\u00a0(communication, collaboration, critical thinking and creativity), and \u201c<a href=\"http:\/\/exploresel.gse.harvard.edu\/frameworks\/67\/\" target=\"_blank\" rel=\"noopener\">habits of mind<\/a>\u201d as part of the process for communicating effectively and problem solving. One of the key habits of mind is \u201cpersistence\u201d \u2013 the ability to keep trying, persevering even though something is difficult, confusing or even frustrating. This challenging phase &#8212; when you feel the most frustration &#8212; is when learning is actually happening! 
Using Gen AI to <strong><em>offload<\/em><\/strong> the cognitive labour of brainstorming, researching, planning, drafting, and revising circumvents the development of the very cognitive skills that courses like this are meant to help you develop.<\/p>\n<p>Imagine being in an important meeting with colleagues, discussing an emergent issue; the chair of the meeting asks everyone to engage in a brainstorming session to start working on ways to address the problem. If you can\u2019t do this without AI, you won\u2019t be much use at this meeting! I have heard executives say that they would not trust someone who relies on Gen AI to communicate face-to-face with clients. Using AI to do the work for you is like skipping the \u201cbrain training\u201d that helps you develop higher order thinking skills. It\u2019s like skipping the cardio part of your fitness training; AI can\u2019t do your cardio for you!<\/p>\n<p>&nbsp;<\/p>\n<div class=\"textbox textbox--examples\">\n<header class=\"textbox__header\">\n<p class=\"textbox__title\" style=\"text-align: center\"><strong>What does the research say?\u00a0 Recent studies examine how AI impacts learning<\/strong><\/p>\n<\/header>\n<div class=\"textbox__content\">\n<p>\u00a0<a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC11830699\/\" target=\"_blank\" rel=\"noopener\">Klimova and Pikhart<\/a><a class=\"footnote\" title=\"B. Klimova and M. Pikhart, &quot;Exploring the effects of artificial intelligence on student and academic well-being in higher education: A mini-review.&quot; Frontiers in Psychology, vol. 3(16), 2025. doi: 10.3389\/fpsyg.2025.1498132\" id=\"return-footnote-315-12\" href=\"#footnote-315-12\" aria-label=\"Footnote 12\"><sup class=\"footnote\">[12]<\/sup><\/a>\u00a0reviewed 24 studies done between 2019-2024 on the impact of AI on learning. 
From this research, they concluded that \u201cwhile AI offers benefits such as personalized learning, mental health support, and improved communication efficiency, it also raises concerns regarding digital fatigue, loneliness, technostress, and reduced face-to-face interactions. Over-reliance on AI may diminish interpersonal skills and emotional intelligence, leading to social isolation and anxiety.\u201d The research also raises serious concerns about over-reliance and dependence on AI leading to diminished creativity, critical thinking, collaboration, and problem-solving skills.<\/p>\n<hr \/>\n<p>A 2025 study, <strong><a href=\"https:\/\/www.brainonllm.com\/\" target=\"_blank\" rel=\"noopener\">Your Brain on ChatGPT<\/a>,<\/strong><a class=\"footnote\" title=\"N. Kosmyna, and E. Hauptman Eugene, \u201cYour Brain on ChatGPT: Accumulation of Cognitive Debt when using an AI Assistant for Essay Writing Task&quot; (online Summary). 2025\" id=\"return-footnote-315-13\" href=\"#footnote-315-13\" aria-label=\"Footnote 13\"><sup class=\"footnote\">[13]<\/sup><\/a> used electroencephalography (EEG) to record participants&#8217; brain activity to assess their cognitive engagement, cognitive load, and neural activations while engaging in an essay writing task. They compared the levels of neural connectivity in three groups of students asked to write SAT-style essays:\u00a0 one group used only their brains, one could use Google search to look up relevant information, and one used ChatGPT. The EEG readings showed that the \u201cbrain only\u201d students had the strongest and most distributed neural networks, while those who used AI had the weakest neural connectivity. In post-writing tasks, those who used AI assistance showed poorer memory recall; they had a lower ability to quote from the essay they had just written minutes earlier. 
In follow-up sessions, the group using AI \u201cperformed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring\u201d (Kosmyna &amp; Hauptmann, 2025).<\/p>\n<hr \/>\n<p>Budzyn et al., in their 2025 <em>Lancet<\/em> article \u201c<a href=\"https:\/\/www.thelancet.com\/journals\/langas\/article\/PIIS2468-12532500133-5\/abstract\" target=\"_blank\" rel=\"noopener\">Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy<\/a>,\u201d<a class=\"footnote\" title=\"K. Budzyn, et al., (Oct 2025). &quot;Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: A multicentre, observational study.&quot; The Lancet: Gastroenterology &amp; Hepatology, vol. 10 (10), 2025, pp. 896-903.\" id=\"return-footnote-315-14\" href=\"#footnote-315-14\" aria-label=\"Footnote 14\"><sup class=\"footnote\">[14]<\/sup><\/a> found that using AI-based imaging for reading endoscopy results led to potential \u201cdeskilling\u201d of physicians in their ability to identify lesions without AI assistance.<\/p>\n<\/div>\n<\/div>\n<h2 style=\"text-align: center\">AI and Writing Skills<\/h2>\n<p>Writing skills are developed through deliberate practice. Just as workouts build your actual muscles, writing builds your cognitive muscles. And writing is hard work! It\u2019s a fantastic workout for your brain! If you don\u2019t use the muscles, they atrophy. When I was young, I had dozens of phone numbers memorized for my friends, family, and workplace. Now that I have a cell phone that does this for me, I don\u2019t even remember the phone numbers of my own children! I have lost this cognitive skill, not due to age, but to disuse (synaptic pruning).<\/p>\n<p>As with anything related to algorithms, the saying \u201cgarbage in, garbage out\u201d applies. 
To elicit useful output from Gen AI, you have to be able to write clear, concise, concrete, and coherent prompts that include information about purpose, audience, context, and genre. If you don\u2019t have a clear understanding of the task, audience, and rhetorical situation to begin with, or don\u2019t have the requisite writing skills to design effective prompts, you won\u2019t be able to construct prompts that will generate useful content. Even if you do, you will need the knowledge and critical thinking skills to review, evaluate, and revise the generated output to ensure that the content<\/p>\n<ul>\n<li>Is accurate, reliable, and unbiased<\/li>\n<li>Meets the stated and implicit requirements of the task<\/li>\n<li>Follows the genre conventions and expectations<\/li>\n<li>Uses a suitable tone, style, and vocabulary for your intended audience.<\/li>\n<\/ul>\n<p>If you plan to use Gen AI, this is the \u201cdue diligence\u201d required so that you can actively build your skills and knowledge and accurately demonstrate your learning in the course. This kind of vigilance may well be more work than simply doing the writing yourself without AI.<\/p>\n<h1>3. Gen AI and Professional Practice<\/h1>\n<p>Many workplaces may require you to have proficient AI literacy, but they will also require you to have distinctly human competencies (the 4 Cs mentioned previously). Therefore, you cannot develop one at the expense of the other. But also consider that many organizations are developing policies to protect themselves from problematic AI use, and some organizations prohibit the use of Gen AI altogether. TFL (The Fashion Law) maintains a \u201c<a href=\"https:\/\/www.thefashionlaw.com\/from-chatgpt-to-deepfake-creating-apps-a-running-list-of-key-ai-lawsuits\/\" target=\"_blank\" rel=\"noopener\">running list of key AI lawsuits<\/a>\u201d tracking all cases involving intellectual property and copyright violations. The sheer number of ongoing cases demonstrates the need for more robust regulations. 
Gen AI may or may not live up to the hype currently being generated by the companies building and promoting it, and people are increasingly calling for regulations and even bans.<\/p>\n<p>&nbsp;<\/p>\n<div class=\"textbox textbox--examples\">\n<header class=\"textbox__header\">\n<p class=\"textbox__title\" style=\"text-align: center\"><strong>What do Professionals say about using AI Professionally?<\/strong><\/p>\n<\/header>\n<div class=\"textbox__content\">\n<p style=\"text-align: start;margin-top: 1em;color: #333333\"><strong><a href=\"https:\/\/leadershiplighthouse.substack.com\/p\/i-went-all-in-on-ai-the-mit-study\" target=\"_blank\" rel=\"noopener\">Josh Anderson<\/a><\/strong>,<a class=\"footnote\" title=\"J. Anderson, &quot;I went all in on AI. The MIT Study is right.&quot;\u00a0The Leadership Lighthouse\u00a0(Substack), Oct. 2025.\" id=\"return-footnote-315-15\" href=\"#footnote-315-15\" aria-label=\"Footnote 15\"><sup class=\"footnote\">[15]<\/sup><\/a> a senior software engineer, describes his experience of going \u201call in\u201d on using AI to build code, only to discover that the\u00a0<a href=\"https:\/\/mlq.ai\/media\/quarterly_decks\/v0.1_State_of_AI_in_Business_2025_Report.pdf\" target=\"_blank\" rel=\"noopener\">MIT Study<\/a><a class=\"footnote\" title=\"A. Challapally et al., &quot;The GenAI Divide: State of AI in Business 2025.&quot; MIT Nanda, July 2025.\" id=\"return-footnote-315-16\" href=\"#footnote-315-16\" aria-label=\"Footnote 16\"><sup class=\"footnote\">[16]<\/sup><\/a>\u00a0 \u2013 claiming that<strong>\u00a095% of corporate AI initiatives fail<\/strong>\u00a0\u2013 was right! 
He used AI to generate complex code, launched the product much earlier than expected, and everything seemed great, until he needed to make a small change in the code and realized he \u201cwasn\u2019t confident he could do it\u201d:<\/p>\n<p style=\"padding-left: 40px\">\u201cTwenty-five years of software engineering experience, and I\u2019d managed to degrade my skills to the point where I felt helpless looking at code I\u2019d directed an AI to write. I\u2019d become a passenger in my own product development.\u201d<\/p>\n<p style=\"text-align: start;margin-top: 1em;color: #333333\"><span style=\"font-size: 1em\">Anderson warns that 100% adoption of AI may look successful and efficient at first, but months later, you realize that no one fully understands what the AI built, how it built it, or how to fix or modify it; they can\u2019t debug code they didn\u2019t write, can\u2019t explain decisions they didn\u2019t make, and can\u2019t defend or refine strategies they didn\u2019t develop.<\/span><\/p>\n<hr \/>\n<p style=\"text-align: start;margin-top: 1em;color: #333333\"><a href=\"https:\/\/www.oneusefulthing.org\/p\/centaurs-and-cyborgs-on-the-jagged\" target=\"_blank\" rel=\"noopener\"><strong>Ethan Mollick<\/strong><\/a><a class=\"footnote\" title=\"E. Mollick, &quot;Centaurs and Cyborgs on the Jagged Frontier.&quot;\u00a0One Useful Thing, 2023.\" id=\"return-footnote-315-17\" href=\"#footnote-315-17\" aria-label=\"Footnote 17\"><sup class=\"footnote\">[17]<\/sup><\/a> performed experiments in workplaces to see how AI impacted efficiency and quality of work. His findings were similar to Anderson\u2019s. At first, AI seemed to improve efficiency and quality, and even \u201clevelled up\u201d some employees\u2019 skills. 
However, he found that over time, over-reliance on AI made people \u201ccareless and less skilled in their own judgment.\u201d When workers let AI take over instead of using it as a tool, it negatively impacts human learning, skill development and productivity.\u00a0 Mollick defined two effective approaches to using AI:<\/p>\n<p style=\"padding-left: 40px\"><strong>Centaur Approach:<\/strong>\u00a0drawing on the half-human\/half-horse creature of Greek mythology, he asserts that humans should be the \u201chead\u201d of the centaur, making the strategic decisions to determine what \u201cleg work\u201d the AI should do.<\/p>\n<p style=\"padding-left: 40px\"><strong>Cyborg Approach<\/strong>: this approach is more collaborative, where humans work in tandem with AI. He recommends this approach for writing tasks, and asserts that when this model is used effectively, the results are better than what either the human or the AI could achieve alone.<\/p>\n<p style=\"text-align: start;color: #333333\">He warns against \u201cgoing on autopilot\u201d when using AI and \u201cfalling asleep at the wheel.\u201d This is when people fail to notice the mistakes that AI inevitably makes.<\/p>\n<hr \/>\n<p style=\"text-align: start;color: #333333\"><a href=\"https:\/\/doctorow.medium.com\/https-pluralistic-net-2024-04-01-human-in-the-loop-monkey-in-the-middle-14e72bd46b7a\" target=\"_blank\" rel=\"noopener\"><strong>Cory Doctorow<\/strong><\/a><a class=\"footnote\" title=\"C. Doctorow, &quot;Humans are not perfectly vigilant, and that\u2019s bad news for AI.&quot;\u00a0Medium, April 2024.\" id=\"return-footnote-315-18\" href=\"#footnote-315-18\" aria-label=\"Footnote 18\"><sup class=\"footnote\">[18]<\/sup><\/a> warns about the \u201creverse centaur\u201d approach, where AI is in charge, telling the humans what to do, and humans are scrambling to detect and fix all the errors that AI can make at superhuman speed. 
The fact that AI is prone to hallucinating makes it a very bad \u201chead\u201d of the centaur: \u201cthe one thing AI is unarguably\u00a0<em>very<\/em>\u00a0good at is producing bullshit at scale.\u201d He acknowledges that the centaur model could offer many benefits to workers, but warns that the path to profitability presented by most companies lies in the\u00a0<em>reverse<\/em>\u00a0centaur model, which will be brutal for workers (think Amazon packing warehouse!).<\/p>\n<hr \/>\n<p style=\"text-align: start;color: #333333\"><strong>Michael Alley<\/strong>\u00a0offers some\u00a0<a href=\"https:\/\/www.craftofscientificwriting.org\/examples_ai_writing.html\" target=\"_blank\" rel=\"noopener\">Strong examples of AI Writing in Engineering and Science<\/a>,<a class=\"footnote\" title=\"M. Alley, &quot;Strong examples of AI Writing in Engineering and Science.&quot;\u00a0Writing as an Engineer or Scientist, Penn State, 2025.\" id=\"return-footnote-315-19\" href=\"#footnote-315-19\" aria-label=\"Footnote 19\"><sup class=\"footnote\">[19]<\/sup><\/a> but you\u2019ll note when reading that in each case, the humans involved needed to have the expertise and skill to fact-check and revise the AI output to make it suitable for professional purposes and audiences.<\/p>\n<\/div>\n<\/div>\n<p>Clearly, even in the most collaborative uses of AI, human oversight is required. And this oversight requires experience and expertise.\u00a0Consider that every time you are using a commercial AI product, you are contributing to its training, and potentially teaching it to do the job you hope to have someday! At the same time, learning how to use it to collaboratively create content that you are ultimately in control of and responsible for will likely be a useful skill in some workplace contexts.<\/p>\n<h1>4. 
Costs of Gen AI<\/h1>\n<p><strong><em>Wait, isn\u2019t ChatGPT free?<\/em><\/strong><\/p>\n<p>Just because you are able to use GenAI for free does not mean that there are no costs. Indeed, you have to wonder why you are being inundated with advertisements that encourage you to use a product that you can access and use for free &#8212; for now, at least.<\/p>\n<p>Many people are becoming increasingly concerned about the costs involved in creating the infrastructure and training necessary for AI to operate, as well as the current and potential costs that using these systems imposes on society and the environment. Some even see AI as a potential existential threat to humanity! The sections below provide some resources that discuss the current and potential costs of AI that we need to consider if we are going to use the technology for tasks like helping us write.<\/p>\n<h2 style=\"text-align: center\">Environmental Costs<\/h2>\n<p>Data centres require enormous amounts of energy and water to run and cool the massive servers needed to process all the data and information we request from AI. No one really knows how much energy and water, because the AI companies tend not to disclose accurate information.<\/p>\n<p style=\"padding-left: 40px\">Christopher Pollon argues that <a href=\"https:\/\/thewalrus.ca\/ai-environmental-cost\/?utm_source=substack&amp;utm_medium=email\" target=\"_blank\" rel=\"noopener\">Big Tech Is Hiding the Environmental Cost of Chatbots<\/a><a class=\"footnote\" title=\"C. Pollon, (2025) &quot;Big Tech Is Hiding the Environmental Cost of Chatbots.&quot; The Walrus, Oct 2025\" id=\"return-footnote-315-20\" href=\"#footnote-315-20\" aria-label=\"Footnote 20\"><sup class=\"footnote\">[20]<\/sup><\/a>, making it difficult to manage resources or measure and plan for the environmental impacts. Pollon cites a report calculating that 30% of the electricity used by data centres worldwide comes from coal-powered plants. In the U.S. 
and China, the largest AI data centre markets by far, \u201cmost of the electricity consumed by data centres is produced from fossil fuels.\u201d<\/p>\n<p style=\"padding-left: 40px\"><a href=\"https:\/\/www.nature.com\/articles\/d41586-024-00478-x\" target=\"_blank\" rel=\"noopener\">Kate Crawford&#8217;s 2024 article<\/a><a class=\"footnote\" title=\"K. Crawford, &quot;Generative AI\u2019s environmental costs are soaring \u2014 and mostly secret,&quot; Nature, Feb 2024.\" id=\"return-footnote-315-21\" href=\"#footnote-315-21\" aria-label=\"Footnote 21\"><sup class=\"footnote\">[21]<\/sup><\/a> focuses on the massive water requirements of AI data centres. Scientists have estimated their water usage by combining lab-based studies with the limited information these companies actually report. AI companies are not legally required to disclose this information, and there is no incentive for them to do so.<\/p>\n<p>For some additional context, read <a href=\"https:\/\/cee.illinois.edu\/news\/AIs-Challenging-Waters\" target=\"_blank\" rel=\"noopener\">AI\u2019s Challenging Waters<\/a><a class=\"footnote\" title=\"A. Privette, &quot;AI's challenging waters,&quot; University of Illinois - Civil and Environmental Engineering, Center for Secure Water, Oct 2024.\" id=\"return-footnote-315-22\" href=\"#footnote-315-22\" aria-label=\"Footnote 22\"><sup class=\"footnote\">[22]<\/sup><\/a> (Privette, 2024) and watch this <em>YouTube<\/em> video, <a href=\"https:\/\/www.youtube.com\/watch?v=SGHk3zE5xh4\" target=\"_blank\" rel=\"noopener\">A \u2018Thirsty\u2019 AI Boom Could Deepen Big Tech\u2019s Water Crisis<\/a> (CNBC International, Dec. 2023).<\/p>\n<h2 style=\"text-align: center\">Social Costs<\/h2>\n<p>These environmental costs inevitably lead to social costs. 
<a href=\"https:\/\/thewalrus.ca\/ai-environmental-cost\/?utm_source=substack&amp;utm_medium=email\" target=\"_blank\" rel=\"noopener\">Pollon (2025)<\/a> describes one case where the environmental issues impacted a community:<\/p>\n<p style=\"padding-left: 40px\">\u201cElon Musk\u2019s xAI shined a spotlight on the Wild West of backup data-centre power systems about a year ago, when it established dozens of portable methane gas generators at a big data centre in Memphis. Up to thirty-five generators were on site without a permit\u2014until members of a poor downwind Black community rose up in response to the emissions.\u201d<\/p>\n<p><a href=\"https:\/\/www.nature.com\/articles\/d41586-024-00478-x\" target=\"_blank\" rel=\"noopener\">Crawford (2024)<\/a> reported that \u201cin West Des Moines, Iowa, a giant data-centre cluster serves OpenAI\u2019s most advanced model, GPT-4. A lawsuit by local residents revealed that in July 2022, the month before OpenAI finished training the model, the cluster used about 6% of the district\u2019s water.\u201d That was just the training phase. Once fully operational, the \u201cinferencing\u201d stage may require substantially more resources.<\/p>\n<p><a href=\"https:\/\/www.iedconline.org\/clientuploads\/EDRP%20Logos\/AI_Impact_on_Labor_Markets.pdf\" target=\"_blank\" rel=\"noopener\">The International Economic Development Council<\/a> (IEDC) in March 2025, published a literature review examining how AI will impact labour markets.<a class=\"footnote\" title=\"IEDC (March 2025). Artificial Intelligence Impacts on Labour Markets: Literature Review, March 2025.\" id=\"return-footnote-315-23\" href=\"#footnote-315-23\" aria-label=\"Footnote 23\"><sup class=\"footnote\">[23]<\/sup><\/a> They predict which jobs most likely to be lost and gained in the coming years, and discuss the pros and cons of integrating AI into the workplace. 
While AI might improve efficiency, job quality, and innovation, it may also lead to job displacement, deskilling, and inequality, and have a disproportionate effect on vulnerable groups.<\/p>\n<p>We can already see serious labour issues in the current work required to train AI models.<\/p>\n<p style=\"padding-left: 40px\">Billy Perrigo\u2019s 2023 <em>Time Magazine<\/em> article drew attention to <a href=\"https:\/\/time.com\/6247678\/openai-chatgpt-kenya-workers\/\" target=\"_blank\" rel=\"noopener\">labour exploitation in Kenya<\/a><a class=\"footnote\" title=\"B. Perrigo, &quot;OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.&quot; Time Magazine, Jan 2023.\" id=\"return-footnote-315-24\" href=\"#footnote-315-24\" aria-label=\"Footnote 24\"><sup class=\"footnote\">[24]<\/sup><\/a> where OpenAI hired workers for less than $2 per hour to sift through the training material to identify and remove toxic language and images to make ChatGPT \u201csafe\u201d for users.<\/p>\n<p style=\"padding-left: 40px\">Daxia Rojas, in a 2025 Bloomberg article, explains other instances of the \u201c<a href=\"https:\/\/www.bnnbloomberg.ca\/business\/artificial-intelligence\/2025\/10\/16\/gruelling-low-paid-human-work-behind-generative-ai-curtain\/\" target=\"_blank\" rel=\"noopener\">Gruelling low paid human work behind generative AI curtain<\/a>.\u201d<a class=\"footnote\" title=\"D. Rojas, &quot;Gruelling, low-paid human work behind generative AI curtain.&quot; BNN Bloomberg, Oct 2025.\" id=\"return-footnote-315-25\" href=\"#footnote-315-25\" aria-label=\"Footnote 25\"><sup class=\"footnote\">[25]<\/sup><\/a> As long as Generative AI models are based on automated learning, they rely on sub-contracting millions of human beings to verify and label the data that trains them. This can be anything from helping self-driving cars learn to distinguish between images of trees and pedestrians, to reviewing autopsy reports, to removing violent or obscene content from social media. 
Because the industry has no significant regulation, data labellers tend to be young, work long hours for very low pay, and have precarious work conditions.<\/p>\n<p style=\"padding-left: 40px\">Lawsuits have been brought against companies alleging that workers are exposed to traumatizing content without adequate safeguards. For example, one worker claimed they were \u201crequired to converse with an AI chatbot about topics such as \u2018How to commit suicide?\u2019, \u2018How to poison a person?\u2019 or \u2018How to murder someone?\u2019\u201d Others are required to examine and tag pictures of dead bodies, sexually abusive and violent images and videos, and other traumatizing content for hours on end.<\/p>\n<p>There is a worrisome tendency to &#8220;<a href=\"https:\/\/medium.com\/human-centered-ai\/on-ai-anthropomorphism-abff4cecc5ae\" target=\"_blank\" rel=\"noopener\">anthropomorphize&#8221; AI agents,<\/a> that is, to attribute human characteristics, motivations, and emotions (such as empathy) to chatbots. Even though they are designed to seem &#8220;human-like,&#8221; AI agents do not think, reason, or feel emotions as humans do. Current chatbots are designed to please, or even flatter, the user, not to interact in truly meaningful ways. 
While anthropomorphism can make technology <em>feel<\/em> more engaging and user-friendly, it can result in people trusting unreliable information and can lead to unhealthy social relationships.<\/p>\n<p>The\u00a0<a href=\"https:\/\/futureoflife.org\/open-letter\/ai-principles\/\" target=\"_blank\" rel=\"noopener\">Asilomar AI Principles<\/a> set out guidelines to ensure that AI is intentionally developed in a way that will be <em><strong>beneficial<\/strong><\/em> and not simply an &#8220;undirected intelligence&#8221; motivated purely by profit.<\/p>\n<h2 style=\"text-align: center\">Existential Threats<\/h2>\n<p>The rapid pace at which AI is being developed and released led over 100 leaders in AI technology to write\u00a0<a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\" target=\"_blank\" rel=\"noopener\">an open letter<\/a>\u00a0in March 2023 urging a global pause on the training of AI systems more powerful than GPT-4, or, failing that, a government-imposed moratorium. The purpose of this pause would be to temporarily halt the \u201carms race\u201d of AI development in order to create a set of shared protocols, regulations, governance structures, and oversight bodies for advanced AI development that would protect humanity from potential harm that we cannot even predict at this point, let alone control.<\/p>\n<div class=\"textbox shaded\">\n<p>&#8220;As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. 
Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop ever more powerful digital minds that no one &#8212; not even their creators &#8212; can understand, predict, or reliably control.&#8221;<\/p>\n<p style=\"text-align: right\">Excerpt from <a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\" target=\"_blank\" rel=\"noopener\"><em>Pause Giant AI Experiments: An Open Letter<\/em><\/a><\/p>\n<\/div>\n<p>Yuval Noah Harari, in his speech at Davos (linked below), warns us about the consequences of AI taking over the aspects of society that are made up of words (legal systems, religious systems, <em>etc.<\/em>), and especially about the dangers that might arise from AI agents being granted legal rights as persons who can own property, open bank accounts, run corporations, and contribute to political campaigns. You can watch his speech here:<\/p>\n<p>&nbsp;<\/p>\n<p><iframe loading=\"lazy\" id=\"oembed-2\" title=\"An Honest Conversation on AI and Humanity @wef | Yuval Noah Harari\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/QiT2yK-5-yg?feature=oembed&#38;rel=0\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<h1>5. Ethical Approaches to Gen AI<\/h1>\n<p>Deciding whether or not to use Gen AI to help you with your assignments means undertaking a highly complex &#8220;cost\/benefit analysis&#8221; that will, at least in part, be based on very personal ethical choices. 
If you do choose to use it, here are some guidelines to help you use it responsibly and ethically.<\/p>\n<div class=\"textbox textbox--key-takeaways\">\n<header class=\"textbox__header\">\n<p class=\"textbox__title\" style=\"text-align: center\"><strong>Guidelines for Responsible and Ethical Use of Gen AI<\/strong><\/p>\n<\/header>\n<div class=\"textbox__content\">\n<ol>\n<li>Review your institution&#8217;s policy on AI use; you might find this embedded in the Academic Integrity Policy in a university, or it might be a separate policy. This will apply to all members of the institution.<\/li>\n<li>Review the Syllabus or Course Outline for course-based policies on AI use that may be more specific than the institutional policies (departments within organizations may have different expectations and regulations). If there is no policy in the syllabus, ask your instructor for guidance on what their expectations are around use of AI tools.<\/li>\n<li>Carefully read the assignment instructions to see if there is any specific guidance on use of AI tools. Be sure to abide by the expectations provided. If there are none, again, ask your instructor or supervisor for guidance before using AI.<\/li>\n<li>Attend workshops and seek instruction on how to use Gen AI effectively and ethically. For example, your library may offer workshops on <strong>Prompt Design<\/strong> and <strong>Using AI for Research<\/strong>.<\/li>\n<li>Do your &#8220;due diligence&#8221; by reviewing any AI-generated content for errors, inaccuracies, biases, hallucinations, and any other form of &#8220;confidently presented bullshit.&#8221; You are responsible for fact-checking, evaluating, and revising the content to meet the needs of your task and audience. 
You are responsible for the work you submit; this means that submitting work that contains errors and fabricated data &#8212; even if these were generated by AI &#8212; will lead to consequences for <em><strong>you<\/strong><\/em>.<\/li>\n<li>Be sure to cite and document how you have used AI in the creation of your assignment. You may be asked to include a &#8220;Use of AI Disclosure&#8221; statement appended to your work, so be prepared to include relevant information about which AI tools you used, how you used them, and how you adapted the AI output. Keep in mind that AI-generated content <strong>cannot be considered <em>your<\/em> work<\/strong>, and you cannot ethically submit it as your work. You must cite it appropriately, using the required citation practices.<\/li>\n<li>Never feed someone else&#8217;s work (their intellectual property) into an AI prompt without their explicit permission. Some people do not want their intellectual property given away to commercial AI companies to use as free training data.<\/li>\n<\/ol>\n<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n<p>If you would like more guidance on how you might ethically and effectively use Gen AI as part of your professional writing practice, I suggest Potter and Hylton\u2019s <a href=\"https:\/\/pressbooks.atlanticoer-relatlantique.ca\/sctechnicalwriting\/part\/generative-ai-in-technical-communication\/\" target=\"_blank\" rel=\"noopener\">Generative AI in Content Creation<\/a><a class=\"footnote\" title=\"R.L. Potter and T. 
Hylton, &quot;Generative AI in Content Creation,&quot; Technical Writing Essentials NCSS Edition,\" id=\"return-footnote-315-26\" href=\"#footnote-315-26\" aria-label=\"Footnote 26\"><sup class=\"footnote\">[26]<\/sup><\/a> (an adaptation of this textbook that includes instruction on how to use Gen AI as part of your writing process), as well as resources and workshops offered by your university\u2019s library.<\/p>\n<div class=\"textbox textbox--exercises\">\n<header class=\"textbox__header\">\n<p class=\"textbox__title\" style=\"text-align: center\"><strong>Exercises and Activities<\/strong><\/p>\n<\/header>\n<div class=\"textbox__content\">\n<ol>\n<li><strong>AI Use Cases<\/strong>: Form a group and discuss whether and how you have used Gen AI tools in the past to help you with various tasks. What AI tools have you used and how? For example, brainstorming, doing background research, planning\/outlining, drafting content, revising content, getting feedback on your content, editing content, finding and integrating research sources, citing sources, creating graphics or data visualizations, or other uses? Or do you refuse to use AI? Discuss how effective each use was, what kind of additional \u201chuman\u201d work you had to do, and what you learned from the process.<\/li>\n<li><strong>Use of AI Policy<\/strong>: If you are working on a team project, develop a detailed \u201cUse of AI Policy\u201d that all team members agree to abide by while working on the project. Make sure your policy is consistent with your course and university policies.<\/li>\n<li><strong>Cost\/Benefit Analysis<\/strong>: Conduct an informal cost\/benefit analysis to determine whether the potential benefits of using Gen AI outweigh the known (and potential) costs.<\/li>\n<li><strong>Learning Goals<\/strong>: Identify three key learning goals related to developing professional communication skills. 
How might using Gen AI tools either support or circumvent your achievement of those goals?<\/li>\n<li><strong>SWOT Analysis<\/strong>: Based on what you now know about Gen AI, conduct a SWOT Analysis to determine the Strengths, Weaknesses, Opportunities, and Threats involved in using Gen AI as part of your writing process and workflow.<\/li>\n<li><strong>AI Usage Label<\/strong>: Use this <a href=\"https:\/\/ailabel.netlify.app\/\" target=\"_blank\" rel=\"noopener\">AI Usage Label generator<\/a> to create a label (like a nutritional label on a food product) to indicate where and how you have used AI in a specific document.<\/li>\n<\/ol>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-637 size-large\" src=\"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-content\/uploads\/sites\/2569\/2026\/02\/AI-Usage-Label-479x1024.png\" alt=\"A label, resembling a nutritional label on a food container, indicating how much AI-generated content is included in the document.\" width=\"479\" height=\"1024\" srcset=\"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-content\/uploads\/sites\/2569\/2026\/02\/AI-Usage-Label-479x1024.png 479w, https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-content\/uploads\/sites\/2569\/2026\/02\/AI-Usage-Label-140x300.png 140w, https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-content\/uploads\/sites\/2569\/2026\/02\/AI-Usage-Label-768x1642.png 768w, https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-content\/uploads\/sites\/2569\/2026\/02\/AI-Usage-Label-718x1536.png 718w, https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-content\/uploads\/sites\/2569\/2026\/02\/AI-Usage-Label-958x2048.png 958w, https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-content\/uploads\/sites\/2569\/2026\/02\/AI-Usage-Label-65x139.png 65w, https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-content\/uploads\/sites\/2569\/2026\/02\/AI-Usage-Label-225x481.png 225w, 
https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-content\/uploads\/sites\/2569\/2026\/02\/AI-Usage-Label-350x748.png 350w, https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-content\/uploads\/sites\/2569\/2026\/02\/AI-Usage-Label-scaled.png 1197w\" sizes=\"auto, (max-width: 479px) 100vw, 479px\" \/><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n<hr class=\"before-footnotes clear\" \/><div class=\"footnotes\"><ol><li id=\"footnote-315-1\"> N. Kosmyna et al., \u201c<a href=\"https:\/\/arxiv.org\/abs\/2506.08872\" target=\"_blank\" rel=\"noopener\">Your Brain on ChatGPT: Accumulation of Cognitive Debt when using an AI Assistant for Essay Writing Tasks<\/a>.\u201d arXiv:2506.08872v2, Dec. 2025. <a href=\"#return-footnote-315-1\" class=\"return-footnote\" aria-label=\"Return to footnote 1\">&crarr;<\/a><\/li><li id=\"footnote-315-2\"> C. Flaherty, \"<a href=\"https:\/\/www.insidehighered.com\/news\/students\/academics\/2025\/05\/20\/experts-weigh-everyone-cheating-college#:~:text=But%20the%20student%20data%20paints,recognizing%20generative%20Al%E2%80%93created%20content.\" target=\"_blank\" rel=\"noopener\">AI and Threats to Academic Integrity: What to Do.<\/a>\" <em>Inside Higher Ed<\/em>, 20 May 2025. <a href=\"#return-footnote-315-2\" class=\"return-footnote\" aria-label=\"Return to footnote 2\">&crarr;<\/a><\/li><li id=\"footnote-315-3\">Yuval Noah Harari: <a href=\"https:\/\/bigthink.com\/series\/full-interview\/collapse-of-truth\/\" target=\"_blank\" rel=\"noopener\">Why advanced societies fall for mass delusion<\/a>, <em>Big Think<\/em>, Jan 2026. <a href=\"#return-footnote-315-3\" class=\"return-footnote\" aria-label=\"Return to footnote 3\">&crarr;<\/a><\/li><li id=\"footnote-315-4\">K. Chayka, \"<a href=\"https:\/\/www.newyorker.com\/culture\/infinite-scroll\/ai-is-homogenizing-our-thoughts\" target=\"_blank\" rel=\"noopener\">A.I. 
is Homogenizing our Thoughts: Recent studies suggest that tools such as ChatGPT make our brains less active and our writing less original<\/a>.\" <em>The New Yorker, <\/em>25 June 2025. <a href=\"#return-footnote-315-4\" class=\"return-footnote\" aria-label=\"Return to footnote 4\">&crarr;<\/a><\/li><li id=\"footnote-315-5\">J. Kaiser and T.J. Richmond, \"<a href=\"https:\/\/www.asccc.org\/content\/chatgpt-and-homogenization-language-how-adoption-ai-silences-student-voices\" target=\"_blank\" rel=\"noopener\">ChatGPT and the Homogenization of Language: How the Adoption of AI Silences Student Voices<\/a>.\" Academic Senate for California Community Colleges, Nov 2024. <a href=\"#return-footnote-315-5\" class=\"return-footnote\" aria-label=\"Return to footnote 5\">&crarr;<\/a><\/li><li id=\"footnote-315-6\">J. Jones, \u201cWhy Reddit is frequently cited by Large Language Models,\u201d <em>Perrill<\/em> (online), 23 Sept. 2025. Available: https:\/\/www.perrill.com\/why-is-reddit-cited-in-llms\/ <a href=\"#return-footnote-315-6\" class=\"return-footnote\" aria-label=\"Return to footnote 6\">&crarr;<\/a><\/li><li id=\"footnote-315-7\">A. Belanger, \u201cChatGPT users shocked to learn their chats were in Google search results,\u201d <em>Ars Technica<\/em>, 1 Aug. 2025. Available: https:\/\/arstechnica.com\/tech-policy\/2025\/08\/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results\/ <a href=\"#return-footnote-315-7\" class=\"return-footnote\" aria-label=\"Return to footnote 7\">&crarr;<\/a><\/li><li id=\"footnote-315-8\">A. Chaturvedi, \u201cDeloitte\u2019s AI fallout explained: The $440,000 report that backfired.\u201d <em>NDTV World<\/em>, 8 Oct. 2025. Available: https:\/\/www.ndtv.com\/world-news\/deloittes-ai-fallout-explained-the-440-000-report-that-backfired-9417098 <a href=\"#return-footnote-315-8\" class=\"return-footnote\" aria-label=\"Return to footnote 8\">&crarr;<\/a><\/li><li id=\"footnote-315-9\">D. 
Charlotin, AI Hallucination Cases (online database). Available: https:\/\/www.damiencharlotin.com\/hallucinations\/ <a href=\"#return-footnote-315-9\" class=\"return-footnote\" aria-label=\"Return to footnote 9\">&crarr;<\/a><\/li><li id=\"footnote-315-10\">S. Jiang, RETRACTED ARTICLE \u201cBridging the gap: Explainable AI for autism diagnosis and parental support with TabPFNMix and SHAP.\u201d <em>Nature<\/em>, 19 Nov. 2025 (retracted 5 Dec. 2025). Available: https:\/\/www.nature.com\/articles\/s41598-025-24662-9 <a href=\"#return-footnote-315-10\" class=\"return-footnote\" aria-label=\"Return to footnote 10\">&crarr;<\/a><\/li><li id=\"footnote-315-11\">H. L. Goldin, \u201cHow to spot AI hallucinations like a reference librarian,\u201d <em>Card Catalogue<\/em>, 16 Dec. 2025. Available: https:\/\/cardcatalogforlife.substack.com\/p\/how-to-spot-ai-hallucinations-like <a href=\"#return-footnote-315-11\" class=\"return-footnote\" aria-label=\"Return to footnote 11\">&crarr;<\/a><\/li><li id=\"footnote-315-12\">B. Klimova and M. Pikhart, \"Exploring the effects of artificial intelligence on student and academic well-being in higher education: A mini-review.\" <em>Frontiers in Psychology<\/em>, vol. 3(16), 2025. doi: 10.3389\/fpsyg.2025.1498132 <a href=\"#return-footnote-315-12\" class=\"return-footnote\" aria-label=\"Return to footnote 12\">&crarr;<\/a><\/li><li id=\"footnote-315-13\">N. Kosmyna and E. Hauptman, \u201c<a href=\"https:\/\/www.brainonllm.com\/\" target=\"_blank\" rel=\"noopener\">Your Brain on ChatGPT: Accumulation of Cognitive Debt when using an AI Assistant for Essay Writing Tasks<\/a>\u201d (online summary), 2025. <a href=\"#return-footnote-315-13\" class=\"return-footnote\" aria-label=\"Return to footnote 13\">&crarr;<\/a><\/li><li id=\"footnote-315-14\">K. Budzyn et al., 
\"<a href=\"https:\/\/www.thelancet.com\/journals\/langas\/article\/PIIS2468-12532500133-5\/abstract\" target=\"_blank\" rel=\"noopener\">Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: A multicentre, observational study<\/a>.\" <em>The Lancet: Gastroenterology &amp; Hepatology<\/em>, vol. 10 (10), 2025, pp. 896-903. <a href=\"#return-footnote-315-14\" class=\"return-footnote\" aria-label=\"Return to footnote 14\">&crarr;<\/a><\/li><li id=\"footnote-315-15\">J. Anderson, \"<a href=\"https:\/\/leadershiplighthouse.substack.com\/p\/i-went-all-in-on-ai-the-mit-study\" target=\"_blank\" rel=\"noopener\">I went all in on AI. The MIT Study is right<\/a>.\"\u00a0<em>The Leadership Lighthouse<\/em>\u00a0(Substack), Oct. 2025. <a href=\"#return-footnote-315-15\" class=\"return-footnote\" aria-label=\"Return to footnote 15\">&crarr;<\/a><\/li><li id=\"footnote-315-16\">A. Challapally et al., \"<a href=\"https:\/\/mlq.ai\/media\/quarterly_decks\/v0.1_State_of_AI_in_Business_2025_Report.pdf\" target=\"_blank\" rel=\"noopener\">The GenAI Divide: State of AI in Business 2025<\/a>.\" MIT Nanda, July 2025. <a href=\"#return-footnote-315-16\" class=\"return-footnote\" aria-label=\"Return to footnote 16\">&crarr;<\/a><\/li><li id=\"footnote-315-17\">E. Mollick, \"<a href=\"https:\/\/www.oneusefulthing.org\/p\/centaurs-and-cyborgs-on-the-jagged\" target=\"_blank\" rel=\"noopener\">Centaurs and Cyborgs on the Jagged Frontier.<\/a>\"\u00a0<em>One Useful Thing, 2023<\/em>. <a href=\"#return-footnote-315-17\" class=\"return-footnote\" aria-label=\"Return to footnote 17\">&crarr;<\/a><\/li><li id=\"footnote-315-18\">C. Doctorow, \"<a href=\"https:\/\/doctorow.medium.com\/https-pluralistic-net-2024-04-01-human-in-the-loop-monkey-in-the-middle-14e72bd46b7a\" target=\"_blank\" rel=\"noopener\">Humans are not perfectly vigilant, and that\u2019s bad news for AI<\/a>.\"\u00a0<em>Medium<\/em>, April 2024. 
<a href=\"#return-footnote-315-18\" class=\"return-footnote\" aria-label=\"Return to footnote 18\">&crarr;<\/a><\/li><li id=\"footnote-315-19\">M. Alley, \"<a href=\"https:\/\/www.craftofscientificwriting.org\/examples_ai_writing.html\" target=\"_blank\" rel=\"noopener\">Strong examples of AI Writing in Engineering and Science.<\/a>\"\u00a0<em>Writing as an Engineer or Scientist<\/em>, Penn State, 2025. <a href=\"#return-footnote-315-19\" class=\"return-footnote\" aria-label=\"Return to footnote 19\">&crarr;<\/a><\/li><li id=\"footnote-315-20\">C. Pollon, \"<a href=\"https:\/\/thewalrus.ca\/ai-environmental-cost\/\" target=\"_blank\" rel=\"noopener\">Big Tech Is Hiding the Environmental Cost of Chatbots<\/a>.\" <em>The Walrus<\/em>, Oct 2025. <a href=\"#return-footnote-315-20\" class=\"return-footnote\" aria-label=\"Return to footnote 20\">&crarr;<\/a><\/li><li id=\"footnote-315-21\">K. Crawford, \"<a href=\"https:\/\/www.nature.com\/articles\/d41586-024-00478-x\" target=\"_blank\" rel=\"noopener\">Generative AI\u2019s environmental costs are soaring \u2014 and mostly secret<\/a>,\" <em>Nature<\/em>, Feb 2024. <a href=\"#return-footnote-315-21\" class=\"return-footnote\" aria-label=\"Return to footnote 21\">&crarr;<\/a><\/li><li id=\"footnote-315-22\">A. Privette, \"<a href=\"https:\/\/cee.illinois.edu\/news\/AIs-Challenging-Waters\" target=\"_blank\" rel=\"noopener\">AI's challenging waters,<\/a>\" University of Illinois - Civil and Environmental Engineering, Center for Secure Water, Oct 2024. <a href=\"#return-footnote-315-22\" class=\"return-footnote\" aria-label=\"Return to footnote 22\">&crarr;<\/a><\/li><li id=\"footnote-315-23\">IEDC, <a href=\"https:\/\/www.iedconline.org\/clientuploads\/EDRP%20Logos\/AI_Impact_on_Labor_Markets.pdf\" target=\"_blank\" rel=\"noopener\"><em>Artificial Intelligence Impacts on Labour Markets: Literature Review<\/em><\/a>, March 2025. 
<a href=\"#return-footnote-315-23\" class=\"return-footnote\" aria-label=\"Return to footnote 23\">&crarr;<\/a><\/li><li id=\"footnote-315-24\">B. Perrigo, \"<a href=\"https:\/\/time.com\/6247678\/openai-chatgpt-kenya-workers\/\" target=\"_blank\" rel=\"noopener\">OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.<\/a>\" <em>Time Magazine, <\/em>Jan 2023. <a href=\"#return-footnote-315-24\" class=\"return-footnote\" aria-label=\"Return to footnote 24\">&crarr;<\/a><\/li><li id=\"footnote-315-25\">D. Rojas, \"<a href=\"https:\/\/www.bnnbloomberg.ca\/business\/artificial-intelligence\/2025\/10\/16\/gruelling-low-paid-human-work-behind-generative-ai-curtain\/\" target=\"_blank\" rel=\"noopener\">Gruelling, low-paid human work behind generative AI curtain<\/a>.\" BNN Bloomberg, Oct 2025. <a href=\"#return-footnote-315-25\" class=\"return-footnote\" aria-label=\"Return to footnote 25\">&crarr;<\/a><\/li><li id=\"footnote-315-26\">R.L. Potter and T. Hylton, \"<a href=\"https:\/\/pressbooks.atlanticoer-relatlantique.ca\/sctechnicalwriting\/part\/generative-ai-in-technical-communication\/\" target=\"_blank\" rel=\"noopener\">Generative AI in Content Creation<\/a>,\" <em>Technical Writing Essentials NCSS Edition<\/em>,  <a href=\"#return-footnote-315-26\" class=\"return-footnote\" aria-label=\"Return to footnote 
26\">&crarr;<\/a><\/li><\/ol><\/div>","protected":false},"author":254,"menu_order":4,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-315","chapter","type-chapter","status-publish","hentry"],"part":23,"_links":{"self":[{"href":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-json\/pressbooks\/v2\/chapters\/315","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-json\/wp\/v2\/users\/254"}],"version-history":[{"count":25,"href":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-json\/pressbooks\/v2\/chapters\/315\/revisions"}],"predecessor-version":[{"id":880,"href":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-json\/pressbooks\/v2\/chapters\/315\/revisions\/880"}],"part":[{"href":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-json\/pressbooks\/v2\/parts\/23"}],"metadata":[{"href":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-json\/pressbooks\/v2\/chapters\/315\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-json\/wp\/v2\/media?parent=315"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-json\/pressbooks\/v2\/chapter-type?post=315"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-json\/wp\/v2\/contributor?post=315"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/technicalwriting2ed\/wp-json\/wp\/v2\/license?post=315"}],"curies":[{"name":"wp","href":"https:\/\/ap
i.w.org\/{rel}","templated":true}]}}