Strengthen AI Research Oversight at University of Zurich
The Issue
Researchers from the University of Zurich recently conducted an unauthorized AI experiment on Reddit's r/ChangeMyView community, in which AI was used to impersonate trauma victims and members of minority groups in order to manipulate people's opinions. In light of this incident, we, the undersigned, respectfully petition the University to strengthen its oversight of AI research projects.
Our Concerns
Rapidly Escalating AI Capabilities and Risks:
As AI language models grow increasingly powerful, both the scope and severity of potential harms expand dramatically. The recent experiment demonstrates how even current-generation AI systems can be deployed in ways that violate community norms and individual autonomy. Future AI systems will only intensify these risks, potentially introducing unexpected harms that current ethical frameworks are ill-equipped to address. Without proper monitoring and oversight, research involving advanced AI systems could lead to serious psychological harms, privacy violations, and manipulation that may not be immediately apparent during initial ethics reviews.
Inadequate Practical Implementation of Existing Guidelines:
While ethical guidelines exist, this incident reveals significant gaps in their practical implementation and enforcement. The researchers changed their methodology without proper consultation and deployed emotionally manipulative AI personas without adequate oversight. These failures suggest that current implementation mechanisms are insufficient to address AI-specific research risks.
Our Request
We respectfully ask the University of Zurich to:
Develop a Tiered Risk Assessment Framework for AI Research that:
Identifies High-Risk AI Research Categories requiring enhanced scrutiny, including:
- Psychological manipulation studies
- Cybersecurity research involving AI systems
- Use of AI to impersonate vulnerable populations
- Studies using self-learning or potentially uncontrollable AI systems
Enhance Documentation Requirements for AI research, specifically mandating:
- Comprehensive explanation of societal benefits and knowledge advancement
- Thorough risk assessment of potential misuse and harmful applications
- Description of safety measures implemented to prevent unintended consequences
- Model parameters, training data characteristics, and monitoring protocols
Conclusion
The University of Zurich has an opportunity to lead by example in establishing pragmatic, effective oversight of AI research. These measures would give researchers clear guidance on responsible AI experimentation while protecting research subjects from emerging risks.
We urge the UZH to take immediate action to prevent future incidents and establish the University as a leader in responsible AI research.
The Decision Makers
Petition created on May 11, 2025