Disable Turnitin AI Detection at UB
The Issue
At UB, the widespread use of Turnitin’s AI detection tool has led to professors flagging up to half a class at a time. This reflects a failure of both technology and oversight, not evidence of widespread misconduct. Innocent students are being blindsided by baseless accusations, left scrambling to defend work they poured their time and effort into. How does one prove a negative? There is no safety net, no warning, and no reliable way for students to protect themselves. The results have been devastating: shattered mental health, wasted hours, delayed degrees, and futures put at risk.
This is not an isolated incident. It is a systemic failure. Some educators rely heavily on AI detection scores, even though Turnitin itself warns that its tool is "not a definitive measure" and should not be used as the sole basis for decisions. Worse, professors are not always impartial judges of academic integrity. The university has also failed to provide meaningful guidance on the use of these tools, instructing faculty only that a score of 35 to 45 percent AI should trigger the academic integrity process, without addressing the well-documented pitfalls, inaccuracies, or ethical concerns surrounding AI detection. When flawed technology is combined with human error, bias, or blind trust in AI, students face potentially life-altering academic consequences.
These systems also raise serious privacy concerns. Students' writing is scanned, stored, and analyzed without clear consent, often with little transparency about where their work is kept, how long it is stored, or how it might be used. Academic institutions are outsourcing judgment of student work to private companies, exposing students to data collection and profiling practices they never agreed to.
The evidence is clear: none of the AI detection software that currently exists is reliable. OpenAI shut down its own AI detector in 2023 because of its inaccuracy. Research has repeatedly shown AI detectors misclassify human writing, flagging original work while missing actual misconduct. False positive rates have been documented as high as 61%. No matter how diligent a student is, no one is truly safe from the harms of this technology.
These tools also disproportionately harm students who are already marginalized. Studies show that non-native English speakers, neurodivergent students, and others whose writing falls outside "algorithmic norms" are more likely to be falsely flagged. Our lived experience confirms this.
Some institutions, including Vanderbilt, have already disabled Turnitin’s AI detection feature due to these concerns. Upholding academic integrity is important, but it must be done through fair, transparent, and human-led processes, not through unreliable algorithms. Faculty can already address concerns through careful review of student work, conversations with students, and traditional methods of academic inquiry that respect due process. No student should be accused, sanctioned, or forced to delay their degree based on the output of an unregulated, unreliable algorithm.
Please sign the petition. Share your voice. Students deserve better.
Petition created on April 14, 2025