Stop AI From Policing Student Voices

The Issue

Have you ever drafted a paper without using AI and had your work flagged as AI-generated? If so, this petition will resonate with you. Some in the academic world believe that AI detection tools are 100% exact. I have found the opposite: these tools are not only inaccurate, they are biased and primitive. How did I come to that conclusion? I tested AI detection tools (e.g., ZeroGPT and Phrasly.ai) by uploading original work and received results accusing me of submitting majority or entirely AI-generated writing. How can an AI system tell me, a human being, how to write like a human being? Does that make any logical sense? Humans are beginning to trust AI more than they trust themselves, which is deeply problematic in the grand scheme of things. If you put time and effort into a paper and your professor reports that some or most of your work came back AI-generated, it not only erodes your trust in your institution, it leaves you unsure and discouraged the next time you submit an assignment. You ask yourself, “Should I dumb it down?” “Do I sound too well-written?” These are questions a student should never have to ask.

Sometimes I miss the days before AI, when students never had to worry about being ridiculed for writing too well. This should be a wake-up call for everyone: AI can be deeply biased, and it cannot detect thought, authenticity, creativity, or intention, let alone measure your intelligence. Leading universities, including Stanford and Harvard, have acknowledged that these tools are unreliable in academia. As Harvard Crimson staff writer Elias J. Schisgall reported, “The FAS discouraged professors from using AI detection tools, which Stubbs said were too unreliable for use.” In an article on K Altman Law’s website titled “The Problem with AI Detectors: Why Professors Should Reconsider Their Use,” Timothy Markley explains, “For students, an accusation of academic dishonesty can be devastating. Some professors take AI detection results at face value, failing students or even reporting them to disciplinary boards without further investigation.” These are not rare, one-off incidents; this issue continues to harm students across the globe, and I can only expect it to worsen as AI detection tools spread. I understand that some students use AI and submit nothing original, but that does not mean hardworking students should be punished alongside them.

These AI detection tools are built on statistical patterns: perplexity, sentence variation, and probability. In other words, they are guessing, assuming that your tone and grammar must follow a certain pattern no matter what you are writing about. They cannot detect your intent, thoughts, context, creativity, writing process, knowledge, emotional depth, dialect, writing style, or academic phrasing, and most of all, they cannot detect your humanity. So how can an AI system tell me, a human being, how to write like a human being? It cannot. As a neurodivergent individual who has always excelled in English and writing, I refuse to let AI tell me that my writing style is not human enough. That is discriminatory and asinine. At what point do we admit that these AI detection tools cause more problems than they solve? Institutions should stop punishing students who write well. Instead, they should prioritize human judgment over automation and provide AI literacy courses for staff and students. Students deserve an education system that values human thought over algorithms. Let's protect academic integrity by restoring trust in the human mind.
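To see why such statistics cannot measure authorship, here is a deliberately crude toy sketch, not the code of any real detector: it scores text purely by how much sentence lengths vary ("burstiness"), one of the signals detectors are commonly said to use. A careful human who writes evenly scores "AI-like" under this rule, because the statistic measures the shape of the text, never who wrote it.

```python
import statistics

def naive_burstiness_score(text: str) -> float:
    """Toy illustration only (not a real detector): score text by variation
    in sentence length. Real detectors combine signals like this with model
    perplexity, but the principle is the same: statistics, not authorship."""
    # Split on sentence-ending punctuation (crude on purpose).
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Low variation in sentence length is (naively) treated as "AI-like".
    return statistics.pstdev(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Stop. After hours of drafting and revising every single word, I submitted it anyway."

# The evenly written text gets the lower (more "AI-like") score,
# even though both strings were written by a person.
print(naive_burstiness_score(uniform) < naive_burstiness_score(varied))
```

The point of the sketch is that nothing in it, and nothing in the real statistical signals it stands in for, ever consults the writer's intent or process; it can only compare a text's surface shape to an assumed norm.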

Sources cited:

Schisgall, E. J. (2023, September 1). Harvard releases guidance for AI use in classrooms. The Harvard Crimson. https://www.thecrimson.com/article/2023/9/1/fas-ai-guidance/

Markley, T. (2025, February 27). The problem with AI detectors: Why professors should reconsider their use. K Altman Law. https://www.kaltmanlaw.com/post/problem-with-ai-detectors-professors-should-rethink


Petition created on October 10, 2025