CSUF Faculty Opposition to ChatGPT and other LLMs

The Issue

CSUF FACULTY STATEMENT ON THE USE OF GENERATIVE ARTIFICIAL INTELLIGENCE

Because a new partnership with large technology firms, offering CSU students private accounts for Large Language Models (LLMs), has been announced without any presentation of the available research or any discussion of how its manifest risks will be addressed, we offer the following factual observations for campus consideration.

The use of LLMs lowers students’ critical thinking skills.  Because the technology was only released in 2022, there is to date only a single comprehensive review of the available research.  Zhai, Wibowo, and Li (June 18, 2024) report that “By using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, our systematic review evaluated a body of literature addressing the contributing factors and effects of such over-reliance within educational and research contexts” and found that “Despite the undeniable advantages of AI dialogue systems in streamlining research processes and enhancing academic efficiency, our analysis reveals a concerning trend: the potential erosion of critical cognitive skills due to ethical challenges such as misinformation, algorithmic biases, plagiarism, privacy breaches, and transparency issues.”

Whistleblowers developing the systems have repeatedly warned of privacy violations, and academic research highlights the same risks.  The Faculty Affairs and Records office notes: “AI systems store data across multiple computers and the Internet, and the data could be added to the systems’ data banks. An AI system may incorporate the data—without your permission—so that other AI users or third parties can access it. Also, storing the data makes it vulnerable to data breaches. Therefore, inputting evaluation process data into an AI system could violate the confidentiality of the materials.”

Whistleblowers developing the systems have repeatedly warned of algorithmic bias that increases existing inequalities.  In a document since taken down by the Trump administration, the US government itself warned that “irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security” (see the open letter, footnote 4).

Whistleblowers developing the systems have repeatedly warned of secrecy and the active suppression of safety concerns, which hides significant additional dangers from public view and skews public debate in a way that seriously understates the risks.  As one recent open letter, signed by 16 current and former employees of AI companies, warns: “AI companies possess substantial non-public information…We do not think they can all be relied upon to share it voluntarily…current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

There is a risk of dangerous and unexpected outcomes, such as the numerous documented instances of social chatbots encouraging their human users to kill themselves, in some cases with very specific instructions for doing so (Guo, Feb. 6, 2025).  The company involved refused to censor the chatbot, reinforcing the validity of the Whistleblowers’ concerns about irresponsible development.

Surmised but unspecified protections are wholly insufficient to meet these challenges.  It is not reasonable to expect that the new corporate partnership will give the CSU significant input into algorithmic performance, and even if it does, that information will not be subject to public disclosure.  Systems that entail massive privacy violations, increase social inequities, and lower critical thinking cannot be said to increase equity.  Unlike the CSU announcements, the Whistleblowers are careful to cite research evidence when presenting their conclusions.  We would be foolish to ignore their warnings.  The CSU has no specific plan and has offered no details about how it might influence the direction of LLM development, how this partnership allows CSU influence over algorithm development, or what data protections are in place.

Vague and unsourced appeals to inevitability are not a sufficient response to these dangers.  We face a choice between normalizing a dangerous technology created without the oversight necessary to protect our students and pushing back against it.  We should not normalize a technology created under these circumstances.

The nature of “artificial intelligence” is that information can be used in ways the original user did not anticipate, and we have yet to hear any assurance about how either the data or the algorithms that use them can be meaningfully constrained in this arrangement.  The Whistleblowers have repeatedly warned that traditional “data protection” policies and systems are wholly insufficient to address the threats posed by AI.

Without a clear, specific, and publicly available plan for data protection and algorithm development, the negative outcomes are far more likely than the positive ones.

Artificial intelligence is an important and powerful new technology that does require a response.  That response is not giving every one of our students an account and normalizing its use.  We should invest in activities that enhance critical thinking rather than erode it.

Because we care about our students and do not wish to see their critical thinking skills lowered, their privacy violated, or their unique ideas and contributions to the world replaced or stunted by AI; and

Because we do not wish to teach our most disadvantaged students to use systems that will likely increase discrimination against them; and

Because we care about democracy and share the concern that there are “wide-reaching challenges from AI like algorithmic bias, disinformation, democratic erosion, and labor displacement” and are not convinced that the technology is being developed in a responsible way; and

Because the bulk of the available research on the dangers of LLM use is far more credible and specific than the unsourced claims offered in support of the partnership;

We, the undersigned, will adopt the following AI use policy for all of our courses:

"The use of artificial intelligence (AI) tools, such as ChatGPT or similar generative AI platforms (Large Language Learning Models and Visual Learning Models), to complete any course assignments or assessments is strictly prohibited. All work submitted must be entirely your own, and any instance of using AI without explicit permission will be considered academic misconduct, subject to disciplinary action according to university policies.”

We call on the CSU to cancel its contract with AI developers or, absent that, to release a full report of the financial costs and a specific plan sufficient to guard against the dangers detailed by the OpenAI Whistleblowers.

The Decision Makers

CSUF Faculty Senate

Petition created on February 18, 2025