Petition update

Regulate the Use of AI in Talent Software: Workday and the Stepford Workforce

Maria Rocha, PA, United States
26 Feb 2025

AI-Driven Applicant Scoring System Flaws, Privacy Violations, Bias, Discrimination, and Legality (Image: Another Ironic AI Fail)

Workday emphasizes that its AI and machine learning technologies are so seamlessly integrated into its platform that end users hardly notice their presence (explore.workday.com).

 

Workday states that its AI models are powered by over 625 billion transactions processed annually. This begs the question: how does Workday ensure the legal and ethical use of the data from those 625 billion transactions and the 250,000 skills (refined to a top 55,000) embedded within its AI and ML models? What processes are in place to review and validate the sources of this data, and to what extent are Workday's partners informed about its origins and compliance with data protection regulations?

 

Introduction

Workday’s AI-driven applicant scoring system is intended to optimize hiring efficiency by ranking candidates based on various factors, including work experience, skills, endorsements, education, and inferred potential. However, this methodology raises significant concerns regarding flaws in hiring practices, privacy violations, bias, discrimination, and possible illegalities.

This analysis provides an in-depth breakdown of each issue and how Workday’s system may deviate from fair employment practices.

 

1. Flaws in Methodology

Workday's hiring algorithm introduces hidden variables and prioritization mechanisms that lead to unfair candidate evaluation.

A. Internal vs. External Candidate Scoring Disparity

Problem: Internal candidates receive an automatic advantage over external candidates, making it difficult for new applicants to compete.

Internal candidates start with higher baseline scores due to:
Verified performance records
Manager/peer endorsements
Past training history within Workday Learning
Inclusion in succession plans
External candidates must outperform internal ones in experience and assessments to be considered.
Unfair Advantage: An equally or more qualified external candidate can be ranked lower than an internal applicant simply because they lack historical data within Workday.
Workday's hiring algorithm exhibits methodological flaws, notably in its treatment of internal versus external candidates. Internal applicants gain a distinct advantage from their verified performance records, endorsements, and training history within Workday[1]. This produces a higher baseline score and places external candidates at a disadvantage: even a more qualified external candidate may rank lower simply for lacking historical data in the system.

Summary: The algorithm's bias toward internal candidates undermines fair competition and risks overlooking qualified external talent.
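The mechanism described above can be illustrated with a toy scoring function. This is a hypothetical sketch, not Workday's actual algorithm; the candidate fields and the size of the internal boost are assumptions chosen to show how a flat baseline advantage flips a ranking.

```python
# Hypothetical illustration: a fixed baseline boost for internal status
# lets a weaker internal candidate outrank a stronger external one.

def score(candidate: dict, internal_boost: float = 15.0) -> float:
    """Toy score: merit points plus a flat boost for internal status."""
    merit = candidate["experience"] + candidate["assessment"]
    return merit + (internal_boost if candidate["internal"] else 0.0)

external = {"name": "Ext", "experience": 40, "assessment": 45, "internal": False}
internal = {"name": "Int", "experience": 35, "assessment": 40, "internal": True}

# External merit: 85. Internal merit: 75, but 90 after the boost.
ranked = sorted([external, internal], key=score, reverse=True)
print([c["name"] for c in ranked])  # the internal candidate ranks first
```

The external candidate leads on every merit factor, yet the boost alone reorders the list, which is precisely the "unfair advantage" the bullets above describe.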

 

B. AI Weighting and Black-Box Scoring

Problem: Workday does not disclose how each factor is weighted in scoring candidates.

Hidden Criteria: The AI scores candidates based on undisclosed employer configurations (e.g., prioritizing Fortune 500 experience or specific educational institutions).
Opaque Overrides: Hiring managers can manually adjust scores, but it’s unclear how often this happens or if it corrects biases.
Impact: Candidates do not know how to improve their applications or appeal unfair rankings.

Workday's AI-driven scoring system lacks transparency in factor weighting, leading to confusion and potential bias in candidate evaluations[2]. The hidden criteria, such as prioritizing specific employers or educational backgrounds, remain undisclosed, causing unfair advantages[3]. Additionally, opaque score adjustments by hiring managers further obscure the process, leaving candidates uncertain about improving their applications or contesting rankings. This lack of clarity impedes candidates' ability to understand or influence their standings, raising concerns about fairness and discrimination.

 

Summary: Workday's non-transparent AI scoring system causes confusion and potential bias in candidate evaluations, necessitating clearer disclosure to ensure fairness.
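To see why undisclosed weighting matters, consider a minimal sketch in which the same two candidates swap ranks depending solely on employer-configured weights. The factors and weight values here are assumptions for illustration, not Workday's disclosed configuration.

```python
# Hypothetical sketch of opaque factor weighting: identical candidates,
# different hidden weights, opposite rankings.

def weighted_score(candidate: dict, weights: dict) -> float:
    return sum(weights[k] * candidate[k] for k in weights)

a = {"skills": 9, "fortune500_experience": 2, "elite_degree": 1}
b = {"skills": 6, "fortune500_experience": 9, "elite_degree": 8}

merit_weights = {"skills": 1.0, "fortune500_experience": 0.1, "elite_degree": 0.1}
prestige_weights = {"skills": 0.3, "fortune500_experience": 1.0, "elite_degree": 1.0}

print(weighted_score(a, merit_weights) > weighted_score(b, merit_weights))        # True
print(weighted_score(a, prestige_weights) > weighted_score(b, prestige_weights))  # False
```

Because the weights never appear to the applicant, candidate A has no way to learn why a prestige-weighted configuration ranked them lower, which is the appeal problem described above.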

 

C. Resume Keyword Matching Overemphasis

Problem: Workday’s AI heavily relies on keyword matching, leading to misleading evaluations.

Encourages gaming the system (e.g., stuffing resumes with job description keywords).
Overlooks nuanced experience (e.g., applicants with equivalent but differently worded skills may rank lower).
Fails to assess real competency (e.g., candidates with hands-on experience but fewer matching keywords may be disadvantaged).
Workday’s AI system's reliance on resume keyword matching can mislead evaluations by encouraging candidates to game the system through keyword stuffing, as highlighted by Yao[4]. This approach overlooks nuanced experiences, where applicants with equivalent but differently worded skills may rank lower[5]. Additionally, candidates with hands-on experience but fewer matching keywords could be unfairly disadvantaged, failing to assess real competencies. In summary, the overemphasis on keyword matching in Workday's AI could result in biased hiring decisions and undervaluation of genuine expertise.
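The keyword-matching failure mode is easy to demonstrate with a toy matcher. This is an illustrative sketch only; the job keywords and resume text are invented, and real systems are more sophisticated, but the core weakness of exact token overlap is the same.

```python
# Toy keyword matcher: exact token overlap with the job description
# rewards keyword stuffing and misses equivalent wording.

def keyword_score(resume: str, job_keywords: set) -> int:
    tokens = set(resume.lower().split())
    return len(tokens & job_keywords)

job_keywords = {"kubernetes", "terraform", "ci/cd", "aws"}

stuffed = "kubernetes terraform ci/cd aws kubernetes terraform"
experienced = "ran container orchestration and infrastructure-as-code on amazon cloud"

print(keyword_score(stuffed, job_keywords))      # 4: every keyword hit
print(keyword_score(experienced, job_keywords))  # 0: same skills, different words
```

A resume that merely repeats the job posting scores a perfect match, while a candidate describing the same hands-on work in their own words scores zero.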

D. Employer Reputation Bias

Problem: Workday scores applicants higher if they have worked at prestigious or industry-leading companies.

Candidates from well-known firms receive preferential treatment, even if their individual contributions were minimal.
Conversely, qualified applicants from smaller firms or startups may be unfairly penalized.
E. Career Progression Assumptions

Problem: Workday ranks candidates lower if they have been in the same role for too long.

Unfair to long-term specialists who prefer deep expertise over frequent promotions.
Favors fast promotion cycles, which may disadvantage older workers or employees in traditionally stable roles (e.g., academia, government).
Workday's AI-driven applicant scoring system unfairly penalizes candidates with long tenure in the same role, disadvantaging long-term specialists who value deep expertise over rapid promotions. This bias favors quick promotion cycles, potentially discriminating against older workers or employees in stable fields like academia and government[6],[7]. The methodological flaw in Workday’s system leads to systemic age and role-based discrimination, contradicting equitable hiring practices. Addressing this issue requires recalibrating AI algorithms to recognize and value deep expertise alongside diverse career trajectories, ensuring fair assessments for all candidates.
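A small sketch shows how a tenure penalty acts as an age proxy. The penalty schedule and merit numbers below are assumptions for illustration, not Workday's actual rules; the point is that years-in-role correlates with years in the workforce, so any deduction tied to it skews against older, stable-career applicants.

```python
# Illustrative sketch: deducting points for long tenure in one role
# penalizes deep specialists relative to fast movers.

def tenure_penalty(years_in_role: float) -> float:
    # Hypothetical rule: lose 2 points per year beyond year 3, capped at 10.
    return min(max(years_in_role - 3, 0) * 2, 10)

specialist = {"merit": 80, "years_in_role": 12}  # deep expert, stable field
fast_mover = {"merit": 72, "years_in_role": 2}   # frequent promotions

def final(candidate: dict) -> float:
    return candidate["merit"] - tenure_penalty(candidate["years_in_role"])

print(final(specialist))  # 70: penalized below the less qualified candidate
print(final(fast_mover))  # 72
```

The specialist starts eight merit points ahead yet finishes behind, purely because of time spent in one role.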

 

2. Privacy Violation Potential

Workday’s AI system integrates data from multiple sources, raising concerns about data privacy, unauthorized use of employment history, and compliance with data protection laws.

 

A. Unauthorized Use of Past Employment Records

Problem: If an applicant previously worked for a company that used Workday, their past records may be accessed without consent.

What Can Be Retrieved? Workday allows companies to pull prior employment data if a candidate reapplies.
Violation Risk: If performance evaluations or disciplinary records are used without candidate consent, this could violate privacy laws (GDPR, CCPA).
B. AI-Driven Video Interview Risks

Problem: Workday integrates video interview tools that analyze facial expressions, tone, and speech patterns.

Unclear if candidates consent to biometric data processing.
Risk of discriminatory bias based on gender, race, accents, or disabilities.
Possible violation of biometric data laws in states like Illinois (BIPA).
C. Integration with Third-Party Data Sources

Problem: Workday’s AI system integrates with LinkedIn, background check providers, and external assessment platforms.

Unclear what data is shared and how it affects scoring.
Candidates may be unaware that Workday uses external profiling to modify rankings.
Potential FCRA (Fair Credit Reporting Act) violation if external screening results negatively impact hiring without applicant consent.
Workday’s AI system poses significant privacy concerns, particularly regarding unauthorized use of past employment records, AI-driven video interviews, and integration with third-party data sources. Accessing prior employment data without consent risks violating GDPR and CCPA[8]. Video interview tools raise potential discrimination and biometric data law issues, such as BIPA[9]. The unclear sharing of data with external platforms might breach FCRA if it affects hiring without candidate consent. Addressing these issues is vital for compliance and ethical hiring practices. In summary, Workday must enhance transparency and consent mechanisms to mitigate privacy and legal risks.

 

3. Bias Potential

Workday’s AI-driven hiring process contains systemic biases that disadvantage minority groups, older applicants, and non-traditional candidates.

A. Discriminatory Internal Hiring Preference

Problem: Internal employees receive a built-in advantage, reinforcing a lack of workforce diversity.

Candidates from underrepresented groups struggle to break into organizations where internal hiring is prioritized.
External candidates must outperform internal ones by a significant margin to compensate.
B. Bias Against Candidates from Non-Elite Institutions

Problem: Some companies configure Workday to favor degrees from top-tier universities.

Candidates from Ivy League or prestigious institutions receive higher scores than those from community colleges or state universities.
This exacerbates socioeconomic bias, disadvantaging highly skilled candidates who lacked elite education opportunities.
C. Penalty for Career Gaps

Problem: Candidates with employment gaps are flagged by Workday’s AI.

Penalizes caregivers, parents, and veterans returning to the workforce.
Excludes candidates with valid non-traditional work histories (e.g., freelancers, entrepreneurs).
Even when gaps are not explicitly penalized, these applicants rank lower unless manually overridden.
D. Bias Against Job Hoppers

Problem: Workday ranks applicants lower if they have multiple short-term jobs.

Harms gig workers and startup employees who frequently change jobs.
Fails to consider industry norms (e.g., tech roles often involve shorter stints).
Workday’s AI-driven hiring system exhibits biases that disadvantage minority groups, older applicants, and non-traditional candidates. Internal hiring preferences create barriers for underrepresented groups trying to enter an organization[10]. Bias against non-elite institutions leads to higher scores for candidates from prestigious universities, exacerbating socioeconomic disparities[11].

Candidates with career gaps, such as caregivers or veterans, and job hoppers, such as gig workers, are penalized, which neglects valid non-traditional work histories. These systemic biases reinforce existing inequalities in the hiring process.

Summary: Workday’s AI hiring system poses bias risks against minorities, non-traditional candidates, and those from less prestigious backgrounds.

 

4. Discrimination Potential

Workday’s AI inadvertently enforces systemic discrimination in multiple ways.

A. Possible Violation of the Americans with Disabilities Act (ADA)

AI-based assessments may unintentionally disadvantage neurodiverse candidates or those with disabilities.
Candidates with speech impairments or neurological conditions may score lower in AI-driven video interviews.
B. Gender and Racial Bias in Peer Reviews & Endorsements

Workday assigns higher scores to candidates with positive peer/manager endorsements.
Women and minorities historically receive lower subjective performance ratings, leading to systemic disadvantages in ranking.
C. Age Discrimination

Older workers are penalized if they remain in a single role for too long.
Candidates with long career histories but fewer promotions receive lower scores.
 

Workday’s AI system risks violating the ADA by potentially disadvantaging neurodiverse candidates or those with disabilities, such as individuals with speech impairments, in AI-driven interviews[12]. It also perpetuates gender and racial bias, as women and minorities often receive lower peer reviews, affecting their scores[13]. Furthermore, the system may discriminate against older workers by penalizing long tenure without promotions, adversely impacting their rankings.

Overall, Workday's AI inadvertently enforces systemic discrimination through biases related to disability, gender, race, and age, necessitating urgent reforms to ensure fairness and compliance with legal standards.

5. Potentially Illegal Practices

Workday’s hiring process raises legal concerns that could lead to EEOC investigations, class-action lawsuits, and regulatory penalties.

 

Legal Issue and How Workday's AI May Violate It:

EEOC Guidelines: Hidden criteria like employer prestige and inferred performance deviate from job-based evaluations.

Title VII (Disparate Impact): AI rankings may unintentionally exclude women, minorities, and disabled candidates.

ADA (Disability Bias): AI-based video analysis may fail to accommodate candidates with disabilities.

Age Discrimination (ADEA): Penalizing long tenure without promotions may disadvantage older workers.

GDPR/CCPA (Privacy Violations): AI collects and processes candidate data without explicit consent.

 

Workday’s AI risks perpetuating hiring discrimination and legal non-compliance, leading to major liabilities for companies using it.

 

Workday's Liability for Its AI-Driven Hiring System's Impact on Applicants

 

Workday, as the developer of an AI-driven applicant scoring system, could face legal liability for the impact of its technology on job candidates. This liability arises under employment discrimination laws, privacy regulations, and consumer protection laws. While employers configure Workday’s hiring AI, Workday itself could be held responsible if its system systemically discriminates, violates privacy rights, or fails to comply with legal standards.

Below is a detailed breakdown of the legal theories under which Workday could be held liable.

 

1. Liability Under Employment Discrimination Laws

Workday's AI hiring system could lead to unlawful discrimination if it causes disparate impact against protected groups.

A. Title VII of the Civil Rights Act of 1964 (Disparate Impact Liability)

How Workday’s AI May Violate Title VII:
Workday’s AI ranks candidates using hidden, non-job-related factors, such as:
Employer prestige
Employment gaps
Internal vs. external applicant status
If these factors disproportionately exclude women, minorities, or other protected groups, it creates a disparate impact under Title VII.
Even if employers configure Workday’s settings, Workday itself may be liable if its algorithm inherently leads to biased outcomes.
Workday’s Legal Exposure:
The Equal Employment Opportunity Commission (EEOC) has made it clear that AI hiring tools must comply with Title VII.
If an applicant is unfairly ranked lower due to systemic bias in Workday’s AI, both the employer and Workday could be sued for discrimination.
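Disparate impact under Title VII is commonly screened with the EEOC's "four-fifths rule": if one group's selection rate falls below 80% of the highest group's rate, the selection procedure warrants scrutiny. The sketch below applies that standard test to hypothetical numbers; the applicant counts are invented for illustration.

```python
# EEOC four-fifths rule screen for disparate impact (illustrative numbers).

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def four_fifths_violation(rate_a: float, rate_b: float) -> bool:
    """True if the lower group's rate is under 80% of the higher group's."""
    lower, higher = sorted([rate_a, rate_b])
    return lower / higher < 0.8

group_a = selection_rate(48, 100)  # 48% of group A advances
group_b = selection_rate(30, 100)  # 30% of group B advances

print(four_fifths_violation(group_a, group_b))  # True: 0.30/0.48 = 0.625 < 0.8
```

If an AI ranking step produced these pass-through rates, the 0.625 ratio would flag the tool under the EEOC screen regardless of whether any single factor was intentionally discriminatory.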
B. Age Discrimination in Employment Act (ADEA)

How Workday’s AI May Violate the ADEA:
If Workday’s AI lowers rankings for older workers based on:
Long tenure in one role (which AI may misinterpret as stagnation)
Lack of promotions over time
Employment gaps due to layoffs or caregiving responsibilities
This would disproportionately impact workers over 40, violating the ADEA.
Workday’s Legal Exposure:
If Workday’s system systematically reduces older applicants' scores, Workday could be sued for enabling age discrimination.
The EEOC has already warned companies about AI systems that disadvantage older workers.
C. Americans with Disabilities Act (ADA)

How Workday’s AI May Violate the ADA:
If Workday relies on AI-driven video interviews, speech analysis, or behavioral scoring, it may:
Unfairly penalize candidates with disabilities that affect their speech, facial expressions, or ability to complete certain assessments.
Fail to provide reasonable accommodations for disabled candidates.
Workday’s Legal Exposure:
The ADA prohibits discrimination based on disability, and if Workday’s AI rejects candidates due to disabilities, it could face liability.
Lawsuits are already being filed against AI hiring systems that penalize neurodivergent or disabled applicants.
 

2. Liability for Privacy Violations

Workday collects, processes, and analyzes applicant data, which could lead to violations of privacy laws.

A. General Data Protection Regulation (GDPR) (EU)

How Workday May Violate GDPR:
Workday does not clearly disclose how it collects and processes applicant data.
If Workday’s AI automatically retrieves prior employment history (e.g., from LinkedIn or past Workday employers) without explicit consent, this violates data processing rules.
Lack of transparency in AI decision-making breaches GDPR’s “right to explanation”.
Workday’s Legal Exposure:
European regulators could fine Workday up to 4% of its annual revenue for processing candidate data without consent.
Applicants can sue under GDPR for being unfairly ranked or rejected without explanation.
B. California Consumer Privacy Act (CCPA)

How Workday May Violate CCPA:
If Workday tracks and stores candidate data without explicit consent, it could violate CCPA.
Workday may fail to provide candidates with access to their AI-generated scores, which is required under CCPA’s “right to access” provisions.
Workday’s Legal Exposure:
Candidates can file private lawsuits if Workday fails to disclose how their data is used in hiring decisions.
CCPA penalties could apply if Workday sells or shares applicant data without proper opt-out options.
C. Biometric Privacy Laws (e.g., Illinois BIPA)

How Workday May Violate Biometric Laws:
If Workday uses AI to analyze video interviews, voice tone, or facial expressions, this could be considered biometric data collection.
Some states (like Illinois) require written consent before collecting biometric data.
Workday’s Legal Exposure:
If Workday fails to obtain explicit consent, it could face lawsuits under Illinois' Biometric Information Privacy Act (BIPA).
BIPA lawsuits have led to massive settlements against companies like Clearview AI and Facebook.
 

3. Liability for Unfair & Deceptive Practices (Consumer Protection Laws)

Workday’s AI may mislead candidates by creating an unfair hiring process with undisclosed scoring criteria.

A. Federal Trade Commission (FTC) Unfair & Deceptive Practices

How Workday May Violate FTC Rules:
Workday’s AI automatically ranks and rejects candidates based on criteria that candidates cannot see.
False claims: If Workday markets its AI as “fair and unbiased” but applicants face hidden bias, it could be considered deceptive.
Workday’s Legal Exposure:
The FTC can fine Workday for deceptive AI marketing.
The FTC is already investigating AI hiring tools for violating fairness standards.
B. Fair Credit Reporting Act (FCRA)

How Workday May Violate FCRA:
If Workday retrieves external data (e.g., LinkedIn profiles, background checks) to rank candidates, it may be functioning as a consumer reporting agency.
FCRA requires candidates to be notified and given a chance to dispute negative reports.
Workday’s Legal Exposure:
If candidates are rejected due to incorrect or hidden data, Workday could be sued under FCRA.
Violations could lead to class-action lawsuits.
 

4. Joint Liability with Employers Using Workday

· Even though employers configure Workday’s AI, Workday still owns the core technology.

· Precedents in AI bias lawsuits show that software providers can be held liable alongside employers.

· Courts may rule that Workday had a duty to design a non-discriminatory system, regardless of how employers configure it.

 

Conclusion: How Workday Can Be Held Liable

Workday faces multiple legal risks due to its AI-driven hiring system. These include:

 

Legal Area, Potential Violation, and Workday's Risk:

Title VII (Civil Rights Act): Disparate impact discrimination. Risk: EEOC lawsuits, class actions.

ADEA (Age Discrimination): AI penalizing older applicants. Risk: Federal fines, lawsuits.

ADA (Disability Act): AI-based video analysis harming disabled candidates. Risk: Non-compliance, legal actions.

GDPR (EU Privacy Law): Hidden AI processing of candidate data. Risk: Regulatory fines, lawsuits.

CCPA (California Privacy Law): Lack of transparency in AI scoring. Risk: Consumer lawsuits, state fines.

BIPA (Biometric Privacy Law): Video AI analysis without consent. Risk: Multi-million-dollar class actions.

FTC (Deceptive Practices): False claims of unbiased AI. Risk: Federal investigations, penalties.

FCRA (Fair Credit Reporting Act): Using external data to score candidates. Risk: Private lawsuits, regulatory fines.

Workday can be sued if:

✅ Candidates are unfairly rejected due to AI bias.
✅ AI-based hiring decisions disproportionately exclude protected groups.
✅ Workday fails to disclose how its AI ranks candidates.

 

[1] Wyffels, F., & Waegeman, T. (2012). Adaptive Modular Architectures for Rich Motor Skills: Technical Report on Hierarchical Reservoir Computing Architectures (ICT-248311, D5.2).

[2] Deshpande, Y., Patil, P., Hole, A., Kale, A., & Mali, M. (2024). Automated Resume Scoring and Course Recommendation. International Journal For Multidisciplinary Research.

[3] Armstrong, L., Liu, A., MacNeil, S., & Metaxa, D. (2024). The Silicon Ceiling: Auditing GPT's Race and Gender Biases in Hiring, 2:1-2:18.

[4] Yao, J., Xu, Y., & Gao, J. (2023). A Study of Reciprocal Job Recommendation for College Graduates Integrating Semantic Keyword Matching and Social Networking. Applied Sciences.

[5] Logaiyan, P., Ramakrishnan, R., Deepa, V., & Narmatha, K. (2025). AI-Powered Keyword Extraction System Using NLP Techniques for Contextual Insights and Document Accessibility. International Scientific Journal of Engineering and Management.

[6] Mbokazi, M. S., Mkhasibe, R., & Ajani, O. A. (2022). Evaluating the Promotion Requirements for the Appointment of Office-Based Educators in the Department of Basic Education in South Africa. International Journal of Higher Education.

[7] Brauner, M. K., Massey, H. G., Moore, S., & Medlin, D. (2009). Improving Development and Utilization of U.S. Air Force Intelligence Officers.

[8] Zhao, Y., Li, Z., & Lv, S. (2024). Enhancing AI System Privacy: An Automatic Tool for Achieving GDPR Compliance in NoSQL Databases. Computers, Materials & Continua.

[9] Jim, M. M. I. (2024). The Role of AI in Strengthening Data Privacy for Cloud Banking. Innovatech Engineering Journal.

[10] Ekmekçi, E. (2024). Exploring Bias and Inclusion: Behavioral Economics and Experimental Insights into Diversity and Discrimination. Next Generation Journal for The Young Researchers.

[11] Weisshaar, K., Chavez, K., & Hutt, T. (2024). Hiring Discrimination Under Pressures to Diversify: Gender, Race, and Diversity Commodification across Job Transitions in Software Engineering. American Sociological Review, 89, 584-613.

[12] Zakout, G. A. (2024). Unjustified Partiality or Impartial Bias? Reckoning with Age and Disability Discrimination in Cancer Clinical Trials. The Journal of Law, Medicine & Ethics, 52(3), 717-730.

[13] Schwartz, J. B., & Covinsky, K. (2024). "Unjustified Partiality or Impartial Bias? Reckoning with Age and Disability Discrimination in Cancer Clinical Trials". The Journal of Law, Medicine & Ethics, 52(3), 731-733.
