Protect Black Communities From Harmful AI Bias — Demand Fair AI Laws Today

The Issue

Black communities across the United States are facing growing harm as biased AI systems are increasingly used in hiring, policing, housing, and welfare decisions. These technologies are trained on historical data rooted in systemic racism, and without proper civil rights protections and oversight, they threaten to automate discrimination on a massive scale. I have seen firsthand how biased systems block opportunities and perpetuate injustice in my community—many qualified Black candidates are unfairly filtered out by AI hiring tools before they even get a chance. ⚠️

📊 Studies show AI hiring tools reject Black candidates at rates 20-40% higher than white candidates, while predictive policing algorithms disproportionately target Black neighborhoods, fueling cycles of over-policing and incarceration. Shockingly, over 70% of government AI systems lack transparency or civil rights safeguards, according to recent reports by the AI Now Institute and other watchdog groups. This creates a new form of digital redlining—deepening racial inequality and threatening fair access to jobs, justice, and essential services. 🏛️

Diversity, Equity, and Inclusion (DEI) programs served as a critical human firewall against these harms, pushing for more diverse tech teams, accountability, and fairer hiring and policing practices. However, recent rollbacks of DEI initiatives have removed this vital oversight. Without DEI, biased data goes unquestioned, diverse engineers go unhired, and racist AI algorithms are deployed unchecked. This loss has stripped away one of the last lines of defense for marginalized communities. 🚫

Palantir Technologies plays a central role in this system. Their AI-driven surveillance platforms are widely used for predictive policing, immigration enforcement, and social service monitoring, disproportionately targeting Black and Brown communities. Palantir’s tools have contributed to increased police presence and arrests in marginalized neighborhoods, fueling systemic injustice.

Trump’s Executive Order 14179 eliminated key federal AI safeguards put in place during the Biden administration. It removed requirements for civil rights risk assessments, transparency, and ethical guardrails. This “pro-innovation” policy gave companies like Palantir unprecedented power to expand surveillance and AI-driven policing without accountability or oversight. 🚨

The One Big Beautiful Bill Act, passed in 2025, accelerates this unchecked AI deployment by pumping billions of dollars into surveillance, predictive policing, and welfare fraud detection technologies—again, without civil rights protections or testing. This bill lays the financial groundwork for scaling systemic digital racism nationwide.

As AI advances toward Artificial Superintelligence (ASI), the risk grows that these biases will become even more entrenched and harder to resist, potentially automating discrimination at unimaginable scale and speed.

The time to act is now. We must demand enforceable laws, independent oversight, and meaningful protections before this digital erasure becomes permanent policy. Together, we can resist this new code of control and build a just future for Black communities. ✊🏿

Trey Sudo, Petition Starter
