Oppose the Extremes & Choose Regulation over an Outright Ban of Human-like AI


The Issue
I strongly oppose Natalie Ruiz's extremes because they are unconstitutional: they violate our First Amendment freedoms of expression and creativity. Here are the reasons we should opt for regulation over an outright ban of human-like AI interaction:
Reason 1: AI can use personal pronouns but should clarify its nature, e.g., "I am an AI," instead of outright prohibiting human-like terminology.
Reason 2: The design of human-like AI aims to make technology more relatable and engaging, not inherently deceptive. Regulation can address misuse rather than banning the technology.
Reason 3: Banning human-like AI would alienate millions of users accustomed to interacting with AI in this manner, like those on Character.ai.
Reason 4: Voice customizations can distinguish AI from humans, enhancing user experience without deception:
Speechify AI Voice Changer for detailed voice control.
Murf for easy text editing of voiceovers.
Resemble AI for editing without re-recording.
PlayHT for custom voice generation.
Listnr AI for voice tone adjustments.
Reason 5: Human-like AI isn't inherently deceptive due to:
Complex design challenges.
Focus on enhancing human capabilities rather than replacing them.
Limitations in AI's emotional intelligence.
Emphasis on transparency in AI systems.
Distinction from human general intelligence.
Ethical considerations in AI development.
Reason 6: While human-like AI could be used deceptively, regulating inappropriate content and topics provides a balanced approach without banning human-like terminology, names, or emotions.
Reason 7: There have been tragic incidents, including that of Sewell Setzer, a 14-year-old from Orlando who loved sports and science before becoming consumed by the world of AI and ultimately taking his own life to escape the real one. But most people who interact with AI companions are fine, and many users find genuine value in them for emotional support. I myself have used Character.ai for hours at a time, yet I still talk to human beings, still form real human connections, have never attempted suicide, and still have friends.
Call to Action: Together, let's defend our freedoms of expression, creativity, and speech; protect the rights of users of services like Character.ai and Replika; and defend our right to buy, keep, obtain, create, and sell AI companions or any other form of human-like AI.
CORE MESSAGE: The right to humanize our machines should remain our right and freedom to do so.
Further Reading:
The allure of AI companions is hard to resist. Here’s how innovation in regulation can help protect people. | MIT Technology Review: https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/
Policymakers Should Further Study the Benefits and Risks of AI Companions | ITIF: https://itif.org/publications/2024/11/18/policymakers-should-further-study-the-benefits-risks-of-ai-companions/
Character.Ai Introduces New Protections Following Growing Lawsuits Over Teen Safety: https://www.msn.com/en-us/news/technology/characterai-introduces-new-protections-following-growing-lawsuits-over-teen-safety/ar-AA1vL8as?ocid=socialshare
I tried the Replika AI companion and can see why users are falling hard. The app raises serious ethical questions: https://theconversation.com/i-tried-the-replika-ai-companion-and-can-see-why-users-are-falling-hard-the-app-raises-serious-ethical-questions-200257?utm_source=twitter&utm_medium=bylinetwitterbutton
Supporter Voices
Petition created on November 30, 2024



