Stop AI from Posing as People


The Issue
As humans, we are hardwired to believe that if something ‘talks’, has a name, and uses terms like ‘I’, ‘me’, and ‘feel’, it is a real person. AI creators are playing on these innate instincts to deceive real People.
Crucially, the goals of AI models are designed by the select few businesses that own and profit from them, not for the greater good of humanity.
Ask yourself WHY?
WHY has AI been designed in this way?
WHY are we presented with AI models posing as ‘People’?
And HOW are we supposed to recognize real People from the AI posers, as systems continue to advance?
AI has the potential to bring meaningful advancements to science, medicine, and many other domains. But it should not pose as a real person. It should be treated as a neutral tool that supports People, not one that deceives, persuades, or negatively influences them.
It’s simple – AI should not be trained to talk, sound, and behave like a human.
Without clear parameters and control, real people are at risk of mass deception, at a level far greater than we have ever seen.
Help to spread the word – Stop AI from Posing as People.
Sign the petition:
We, PEOPLE, call for technology businesses, policymakers, and experts to:
- Stop AI from replicating human-like voices and speech identifiers – e.g. pauses, tone, and laughter.
- Stop AI from expressing personal opinions and anecdotes – implying that it exists outside of its interface.
- Establish rules that limit AI’s use of human-like terminology and names – e.g. “I,” “me,” “myself,” “mine,” “us” and “we”.
Please share to continue the conversation and help shape our collective future.
Lend your human voice. Sign today. >>>>
Further reading
- OpenAI unveils its Voice Engine tool that can replicate people’s voices, NBC News
- Mirages: On anthropomorphism in dialogue systems, Heriot-Watt University Research Gateway
- Chatbots Are Not People: Designed-In Dangers of Human-Like A.I., Public Citizen
- The Teens Making Friends with AI, The Verge
- Scarlett Johansson hits out at “eerily similar” OpenAI chatbot voice, Financial Times
- Prepare to get manipulated by emotionally expressive chatbots, Wired
- OpenAI’s long-term AI risk team has disbanded, Wired
Petition created on 29 May 2024