TAKE ACTION AGAINST X/TWITTER/GROK AI FOR ALLOWING SEXUAL ABUSE MATERIAL TO BE GENERATED


The Issue
Every SECOND, Grok, the AI of the social media platform X (formerly Twitter), is being used to generate images of real women and children with their clothes removed or altered, or placed into sexually explicit scenarios, WITHOUT CONSENT. There is a rampant trend of people replying to women's photos to undress them with the click of a button. It goes as far as users creating throwaway accounts where they upload images of women they know, screenshotted from their private accounts or otherwise, ask Grok to remove their clothes, and then delete the post once the image is generated.

One scroll through Grok's reply section will show you exactly what this AI is being used for. It is clearly a massive distribution of nonconsensual porn being publicly posted on one of the internet's LARGEST SOCIAL MEDIA PLATFORMS. This means someone could, right now, be uploading images of your mother, grandmother, sister, cousin, or best friend and digitally altering them in the nude for ANYONE to publicly see.

This massive issue is being ignored and allowed to continue unchecked. It has gone as far as Grok creating CSAM UNPROMPTED in some users' replies. We should not be subjected to unwillingly seeing CSAM and nonconsensual adult material while scrolling on X. We need voices to be heard. There need to be regulations in place.
Per the TAKE IT DOWN Act, the NONCONSENSUAL publication of intimate images, including "digital forgeries" (i.e., deepfakes), is ILLEGAL.
Creating child sexual abuse material is ILLEGAL.
X/Twitter is complicit in allowing its AI to continue down this path. WE NEED REGULATION. NOW!
70,433 signatures
Petition created on January 1, 2026