With the rise of easily accessible generative AI technologies, the number of AI-content detection tools has also grown significantly. Some instructors, concerned that students are submitting AI-generated content as their own work, have turned to these tools. However, this shift toward technological policing has the potential to cause harm in educational settings. Instructors should refrain from using AI detectors on student work due to the inherent inaccuracies and biases in these tools.
Other concerns about AI detectors include the potential for false accusations, negative effects on student well-being, and AI equity issues. Instead of relying on AI detection, educators should focus on developing more effective, equitable, and pedagogically sound approaches to fostering original thinking and academic integrity.
How do AI detectors work?
AI detectors are trained on large datasets containing human-written and AI-generated text. The detectors analyze various linguistic features of the content uploaded to the system, such as vocabulary; use of clichés, idioms, and colloquialisms; coherence and flow between sentences; and any patterns in word choice and phrasing.
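To make this concrete, here is a minimal sketch (in Python) of the kind of surface features such a tool might extract. The specific features, names, and the example sentence are assumptions for illustration, not any particular vendor's method.

```python
import re
from statistics import mean, pstdev

def surface_features(text: str) -> dict:
    """Compute a few simple stylometric signals from raw text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentence_lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        # Vocabulary variety: unique words divided by total words.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        # Average sentence length in words.
        "mean_sentence_length": mean(sentence_lengths) if sentence_lengths else 0.0,
        # Variation in sentence length, sometimes called "burstiness."
        "sentence_length_spread": pstdev(sentence_lengths) if sentence_lengths else 0.0,
    }

print(surface_features(
    "The results were clear. Every metric improved. We were, frankly, astonished."
))
```

Real detectors combine many more signals than this, but the basic idea is the same: reduce the writing to numbers and compare those numbers against patterns seen in the training data.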
AI detectors also use statistical methods to identify patterns that are more common in AI-generated text. Some detector tools report a measure called the "perplexity" of the text: how predictable the sequence of words is. Depending on the prompt that generated the content, AI-generated text often has lower perplexity than human-written text. However, content created from a complex prompt, or content that blends human and AI writing, will often be missed by this check. The detector weighs all of these signals and produces a number representing the probability that the content was generated by a computer. As Geoffrey Fowler said in the Washington Post, “With AI, a detector doesn’t have any ‘evidence’ — just a hunch based on some statistical patterns.”
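As a rough illustration of the perplexity idea, the sketch below estimates it with an open language model (GPT-2 via the Hugging Face transformers library). The model choice is an assumption for the example; commercial detectors do not disclose which models or thresholds they use.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated average negative log-likelihood of the text under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return its cross-entropy loss.
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

print(perplexity("The quick brown fox jumps over the lazy dog."))
# Lower scores mean the wording was easier for the model to predict, which
# detectors read as a weak signal that the text may be machine-generated.
```

Note that the output is only a score, not proof: careful human writing can also be highly predictable, and lightly edited AI text can score high.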
Unlike AI detectors, a plagiarism checker like Turnitin scans the submitted text and compares it to a massive database of previously collected work. There is no likelihood or probability involved in these tools; they look for matches and flag them for instructors to decide whether the matches constitute plagiarism.
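The contrast can be seen in a toy sketch of match-based checking: flag any seven-word sequence in a submission that also appears in a reference corpus. This is not Turnitin's actual algorithm; the window size, the corpus, and the example texts are assumptions chosen only to show matching rather than probabilistic scoring.

```python
def shingles(text: str, n: int = 7) -> set:
    """All consecutive n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag_matches(submission: str, corpus: list, n: int = 7) -> set:
    known = set()
    for document in corpus:
        known |= shingles(document, n)
    # Overlapping sequences are returned for a human reviewer to judge.
    return shingles(submission, n) & known

corpus = ["Four score and seven years ago our fathers brought forth on this "
          "continent a new nation conceived in liberty"]
submission = "He wrote that four score and seven years ago our fathers brought forth a claim"
print(flag_matches(submission, corpus))
```

Every flagged sequence points back to a specific source passage that a person can inspect, which is exactly the kind of evidence an AI detector cannot provide.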
Harms of AI detectors
The use of AI detectors creates an adversarial environment that can negatively impact your classroom. It places you in the role of catching “cheaters” rather than helping students learn. Stories of students being falsely accused of violating academic integrity have been in the news, some with national exposure. AI detectors have been found to be biased against non-native English writers (Liang et al., 2023), and Black teens are more than twice as likely as white or Latino teens to say that teachers flagged their schoolwork as being created by generative AI when it was not (Madden et al., 2024).
Alternative approaches: Focusing on assignment design
Instead of AI detection, focus on setting clear expectations around academic integrity and on designing assignments that encourage original thinking. Some instructors have found success in requiring students to include direct quotations or links to their cited sources in their writing. Another strategy that helps deter students from relying on generative AI is to focus on the process and be transparent about the learning objectives, explaining why students are completing the work. Have conversations with your students about what level of AI use is acceptable in your classroom and the reasons for the limits you are setting. Instructors should refrain from using AI detectors on student work and instead communicate with students about the best ways to learn during their time at the University.
Instead of relying on AI detectors, educators should prioritize fostering a learning environment that values academic integrity and encourages students to engage in critical thinking and creativity. By focusing on well-designed assignments, setting clear expectations, and maintaining open communication with students about AI use, instructors can promote a culture of excellence and create the dynamic and transformative learning experiences valued by our campus community.
Further reading
Dumin, L. (2023). AI detectors: Why I won’t use them. https://medium.com/@ldumin157/ai-detectors-why-i-wont-use-them-6d9bd7358d2b
Eaton, S. (2023). The Use of AI-Detection Tools in the Assessment of Student Work. https://drsaraheaton.com/2023/05/06/the-use-of-ai-detection-tools-in-the-assessment-of-student-work
Fowler, G. (2023). What to do when you’re accused of AI cheating. Washington Post. https://www.washingtonpost.com/technology/2023/08/14/prove-false-positive-ai-detection-turnitin-gptzero/
Klee, M. (2023). Professor flunks all his students after ChatGPT falsely claims it wrote their papers. Rolling Stone. https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-123473660
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7). https://www.sciencedirect.com/science/article/pii/S2666389923001307
Madden, M., Calvin, A., Hasse, A., & Lenhart, A. (2024). The dawn of the AI era: Teens, parents, and the adoption of generative AI at home and school. San Francisco, CA: Common Sense. https://www.commonsensemedia.org/sites/default/files/research/report/2024-the-dawn-of-the-ai-era_final-release-for-web.pdf
Ofgang, E. (2024). 8 Ways to Create AI-Proof Writing Prompts. Tech & Learning. https://www.techlearning.com/how-to/8-ways-to-create-ai-proof-writing-prompts
As we continue to explore AI, generative AI in particular, we’re open to learning more about how you are using AI. Faculty and staff can email their AI experiences, questions, and suggestions to ai-feedback@uiowa.edu.