Petition update: Regulate the Use of AI in Talent Software
Dear Workday: You've acknowledged your ignorance as well as the harm caused. Form 10-Q for Workday
Maria Rocha, PA, United States
10 Mar 2025

This is a response to Workday, Inc.'s SEC Form 10-Q filing and its acknowledgment of potential flaws, damages, and liability associated with the use of artificial intelligence (AI) and evolving technologies in its offerings. The statements made in your disclosures indicate a recognition of substantial risks, including reputational harm, litigation exposure, regulatory challenges, and ethical considerations. These admissions raise serious concerns about the adequacy of safeguards, compliance measures, and accountability frameworks within Workday's AI-driven product suite.

Failure to Mitigate Known Risks and Harm
Workday has explicitly acknowledged that its AI-powered products, including large language models and generative AI, pose risks to human, civil, privacy, and employment rights. Despite this recognition, your company continues to integrate these technologies without demonstrating comprehensive safeguards to prevent discriminatory practices, privacy violations, or other forms of harm. This raises serious questions about whether Workday exercised due diligence in assessing and mitigating these risks prior to deployment.

Legal and Regulatory Compliance Deficiencies
The European Union’s AI Act and other emerging regulatory frameworks impose stringent requirements on AI providers. Your disclosure suggests that Workday may not be fully prepared to comply with these evolving laws, thereby exposing stakeholders—including customers, employees, and end-users—to unforeseen legal liabilities. Failure to proactively align your AI governance policies with existing and forthcoming regulations constitutes a serious lapse in corporate responsibility.

Workday itself has stated that:

“If customers are not satisfied with the quality and timing of work performed by us or a third party… we could incur additional costs to address the situation.”

This admission contradicts Workday’s attempt to offload responsibility. If customer dissatisfaction creates financial risk for Workday, then it logically follows that Workday retains some measure of responsibility for ensuring that its AI products function fairly and effectively.

Additionally, Workday acknowledges that its own employees may be involved in professional service deployments. This means that even in cases where third-party implementers are used, Workday often plays a direct role in deployment and integration. Workday cannot have it both ways—profiting from AI-driven automation while evading liability for its consequences.

Workday’s argument focuses on financial and contractual impacts, but it ignores the human cost of flawed AI-driven hiring decisions. When Workday’s AI disproportionately excludes candidates who have been unemployed for extended periods, the real harm is borne by job seekers who are systematically denied opportunities due to biased algorithms.

Given the complexity of Workday’s AI-powered hiring systems, it is unrealistic—and legally unsound—to place full responsibility on clients and third-party implementers. If Workday’s technology fails to provide fair and unbiased outcomes, Workday itself must be held accountable for the consequences.

While Workday attempts to limit its liability by shifting responsibility to customers and third-party service providers, its own admissions show that many of these risks stem from internal system design and security vulnerabilities.

Despite Workday’s efforts to disclaim responsibility, its acknowledged history of system failures and security incidents makes clear that it bears significant risk of its own.

AI should work for people, not against them. When it fails, it must be fixed—not defended as an infallible system.

Source: Workday, Inc., Form 10-Q, pages 42-48
