Recommendations and Guidelines

The Limitations of AI Detectors in Academic Settings

Introduction

Artificial Intelligence (AI) has made significant strides in various sectors, including education. One application that has gained attention is AI detectors, tools designed to identify whether a student’s work has been generated by AI. While the intent behind these tools is commendable, their reliability and accuracy have been called into question. This document aims to shed light on the limitations of AI detectors and why they may not be the best tools for ensuring academic integrity.


1. False Positives and Negatives

  • False Positives: Multiple studies show that AI detectors can flag genuine student work as AI-generated, leading to unwarranted accusations and potential damage to student-instructor trust. Research has also found that GPT detectors show bias against non-native English writers. Van Oijen (2023) found that detection tools identified AI-produced text with an average accuracy of only about 28%, with the top-performing tool reaching 50% accuracy; the same tools were notably more accurate, at 83%, in identifying text written by humans.
  • False Negatives: Conversely, sophisticated AI output may go undetected, giving a false sense of security. Students can modify prompts to make the writing appear ‘more human’, edit the text to introduce small errors, change its tone, or use paraphrasing tools to alter the original output.

2. Evolving Nature of AI

  • As AI models become more advanced, they produce content that is increasingly indistinguishable from human-generated content. This makes it challenging for detectors to keep up.

3. Over-reliance on Technology

  • Solely depending on AI detectors can lead instructors to overlook the importance of knowing their students, understanding their capabilities, and recognizing their unique writing styles.

4. Ethical Concerns

  • Privacy Issues: Scanning student work through AI detectors raises concerns about data privacy and how student data is used. Submitting UNBC student work to AI detectors contravenes information privacy regulations and is strongly discouraged.
  • Trust Erosion: Over-reliance on AI tools can erode the trust between students and instructors, creating an environment of suspicion. Instructors should be aware of the harm that accusations of academic misconduct can cause. See: Accused: How Students Respond to Allegations of Using ChatGPT on Assignments.

5. Context Matters

  • AI detectors often lack the capability to understand context. A student might use complex terminology or advanced sentence structures due to prior knowledge or extensive research, which could be mistakenly flagged.

6. Potential for Misunderstanding

  • Not all flagged content is a result of AI generation. Students might use tools like grammar checkers or paraphrasing tools, which can alter the text in ways that might seem AI-generated but are legitimate.

Conclusion

While AI detectors offer a novel approach to upholding academic integrity, their limitations make them less reliable than traditional methods of assessment and evaluation. It’s crucial for educators to be aware of these limitations and approach the use of such tools with caution. Building a relationship of trust with students, combined with traditional methods of detecting academic misconduct, remains the most effective approach.


More information on AI and ChatGPT:

See AI Machine Learning & Writing Assistants for guidance on discouraging unauthorized student AI use in your course.

Self-enroll in the CTLT’s Moodle course “Teaching, Learning & AI Technologies” to watch recorded workshops and to see example policies on student AI use to include in your syllabus.


License

This work (An Instructor's Guide to Teaching & Learning With Technology @UNBC by UNBC CTLT) is free of known copyright restrictions.