

Lately, Elon Musk has been less present in the media circus; however, he remains active in emboldening dangerous far-right figures, as seen in his recent London address. His use of X in Europe thus continues to serve his Tech-Fascist agenda.
Yes, it’s Tech-Fascism. So, banning X is about banning fascism and its ability to gain influence in society by combining online surveillance with misleading content for the hyper-tailored manipulation of citizens.
Today, I publicly reveal myself as the author of the campaign BAN X in EU. For one year, I promoted this campaign anonymously, disclosing it only to close contacts, through an online petition, posters, graffiti, and stickers across Europe. I first launched the project under the pseudonym John Doe from Spain, but in reality, it was me, Paolo Cirio. BAN X in EU is my personal work, an artistic and activist endeavor investigating the Digital Services Act (DSA), Europe’s regulation of online disinformation.
I am concluding this campaign by publishing a video essay that reflects on redefining free speech online. This new short video documents my protest art actions across Europe and includes my text on free speech. My own statements on disinformation, censorship, and manipulation are reassembled with AI effects to portray an ambiguous philosophical battle between myself and Elon Musk.
Without support or funding, I launched the project anonymously a year ago, on the 26th of September 2024, months before Trump won the election and Elon Musk joined his administration while making inflammatory statements and supporting far-right parties. Just by posting one sticker at a time on the streets and by reaching out to everyone online, I collected thousands of signatures. I started to spread the idea of banning Elon Musk’s platform when it was still controversial, before his intentions became clear to everyone. My initial impulse came from noticing an increasing amount of disinformation on climate change on X during the summer of 2024. It made me upset enough to drop everything, work full-time on the new campaign, and risk everything for it. Somehow, I had an intuition for what was coming.
Meanwhile, in the political world, things aren’t getting any better. After all this time, the European Commission is not enforcing the DSA on Big-Tech, in order to maintain diplomacy with the United States, which in turn aggressively attacks legacy media with lawsuits, slashed funding, and AI content. Moreover, the Trump administration just instructed the FTC (Federal Trade Commission) to recommend that Big-Tech firms dismiss any DSA regulations for the safety of US citizens. Today, disinformation is at the core of Tech-Fascism; after all, free speech is no longer about expressing opinions, but pure politics.
Through the self-regulation of Community Notes, X keeps escaping DSA regulations and fostering an environment where far-right content thrives with no scrutiny from platform-led content moderation. Under these conditions, the platform claims to ensure transparency and moderation through its Community Notes scheme: a subsection of the platform in which users can anonymously fact-check, report, and vote on posts that have been flagged. Anybody who has been active on X for more than six months can apply to become a contributor. All contributors are granted access to Community Notes and can participate in the content moderation process regardless of the topic of the post. For example, someone who has only ever posted about NBA basketball will be able to moderate posts about elections in Germany. Community Notes are advertised through democratic language such as “community-driven”, “contribution”, “decentralization”, and “contextualization”, but in reality, they function to dismantle expert-based accountability and the desire for credible sources.

Due to the crowd-sourced nature of Community Notes and the rating algorithm used to publish notes, notes on controversial topics are often published too slowly to curtail problematic misinformation and even hateful messages. This can be seen in the case of the Israeli genocide in Palestine, when Community Notes failed to adequately moderate misinformation surrounding the events of October 7, 2023. Additionally, sources cited within Community Notes most often link to other posts on X rather than to external citations. The result is a weaponization of free speech principles to fuel hostile discourse online and cut out more formal and credible journalistic content.
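The slowness described above follows from how the rating algorithm works. X’s published note-ranking approach requires agreement from raters with historically opposing views before a note is shown, which on polarized topics rarely happens quickly. A minimal sketch of that “bridging” logic, in simplified form (this is an illustrative model with assumed names and thresholds, not X’s actual open-source scoring code):

```python
# Illustrative sketch of "bridging"-style note ranking: a note is shown only
# when raters from BOTH opposing viewpoint clusters rate it mostly helpful.
# Cluster labels, the threshold, and the function name are all assumptions.

def note_is_shown(ratings, threshold=0.5):
    """ratings: list of (viewpoint, helpful) pairs, viewpoint in {'A', 'B'}.
    Returns True only if each cluster's majority rates the note helpful."""
    for cluster in ('A', 'B'):
        votes = [helpful for viewpoint, helpful in ratings if viewpoint == cluster]
        # No ratings from a cluster, or a majority of "not helpful": note stays hidden.
        if not votes or sum(votes) / len(votes) <= threshold:
            return False
    return True

# A note on a polarized topic: one cluster endorses it, the other rejects it,
# so the note is never published no matter how many total ratings it gets.
polarized = [('A', True)] * 90 + [('B', False)] * 60
print(note_is_shown(polarized))   # False: no cross-cluster agreement

# A note on an uncontroversial topic gains agreement from both clusters.
uncontroversial = [('A', True)] * 10 + [('B', True)] * 8
print(note_is_shown(uncontroversial))   # True
```

The design choice this models is exactly the trade-off criticized in the text: requiring cross-ideological consensus filters out partisan notes, but it also means that on the most contested topics, where consensus is rarest, misinformation circulates unannotated the longest.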
This past July, X fully integrated Grok, its AI model, directly into the Community Notes interface as a contributor. This marks a shift away from the populist model of Community Notes, which favors the individual over authority, toward a new era of trust in artificial intelligence models. Unlike other AI models, such as ChatGPT, which rely on pre-trained datasets, Grok is unique in that it integrates real-time training data via X’s API. This echoes the practices of Community Notes contributors, who commonly cite X itself rather than external sources. Grok may be Musk’s most dangerous tool, as the poorly moderated and often factually incorrect training data from X that it calls upon is then regurgitated as fact-checking itself. The harm is further reinforced by Musk’s instructions for Grok to avoid political correctness and be suspicious of mainstream sources.
As the affordances within X’s interface, and soon those of most other social media apps, encourage the use of AI chatbots as the voice of truth and reason, new conversations about AI’s freedom of speech arise. These conversations involve moderating the AI models themselves, and those who program them, in pursuit of the protection of human freedom of speech and freedom of information. The narrative that all policy and moderation practices hinder personal autonomy and freedom of speech must be challenged.
Thanks to everyone who followed me on this, and goodbye if you are still using X and being deluded by Musk’s algorithms.
Paolo Cirio for BAN-X-in.EU