Petition update: Regulate the Use of AI in Talent Software
Maria Rocha, PA, United States
21 Jan 2025

Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies
Lena Kempe, Apr 10, 2024

https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-april/navigating-ai-employment-bias-maze/

AI Employment Discrimination
Companies have increasingly used AI tools to screen and analyze résumés and cover letters; scour online platforms and social media networks for potential candidates; and analyze job applicants’ speech and facial expressions in interviews.

In addition, companies are using AI to onboard employees, write performance reviews, and monitor employee activities and performance.

AI bias can occur in any of the above use cases, throughout every stage of the employment relationship—from hiring to firing and everything in between—and can result in discrimination lawsuits. In one notable example, the Equal Employment Opportunity Commission (“EEOC”) settled its first AI hiring discrimination lawsuit in August 2023.

In Equal Employment Opportunity Commission v. iTutorGroup, Inc., the EEOC sued three companies providing tutoring services under the “iTutorGroup” brand name (“iTutorGroup”) on the basis that iTutorGroup violated the Age Discrimination in Employment Act of 1967 (“ADEA”) because the AI hiring program it used “automatically reject[ed] female applicants age 55 or older and male applicants age 60 or older,” resulting in screening out over 200 applicants because of their age. Subsequently, iTutorGroup entered into a consent decree with the EEOC, under which iTutorGroup agreed to pay $365,000 to the group of automatically rejected applicants, adopt antidiscrimination policies, and conduct training to ensure compliance with equal employment opportunity laws.

The ongoing Mobley v. Workday, Inc. litigation, one of the first major class-action lawsuits in the United States alleging discrimination through algorithmic bias in applicant screening tools, presents another warning. The plaintiff, an African-American man over the age of forty with a disability, claims that Workday provides companies with algorithm-based applicant screening software that unlawfully discriminated against job applicants based on the protected class characteristics of race, age, and disability, and thus violated Title VII of the Civil Rights Act of 1964, the Civil Rights Act of 1866, the ADEA, and the ADA Amendments Act of 2008 (“ADAAA”). On January 19, 2024, the court granted Workday’s motion to dismiss the case, with leave for the plaintiff to amend the complaint. On February 21, 2024, the plaintiff filed an amended complaint outlining further details to support his claims.

EEOC 2023 Guidance on Title VII and AI
In May 2023, the EEOC issued new technical guidance on how to measure adverse impact when AI tools are used for employment selection, titled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” Under this guidance, if the selection rate of individuals of a particular race, color, religion, sex, or national origin, or a “particular combination of such characteristics” (e.g., a combination of race and sex), is less than 80 percent of the rate of the non-protected group, then the selection process could be found to have a disparate impact in violation of Title VII, unless the employer can show that such use is “job related and consistent with business necessity” under Title VII.
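The 80 percent threshold described in the guidance (often called the “four-fifths rule”) can be sketched as a simple comparison of selection rates. The group labels and numbers below are hypothetical, chosen only to illustrate the arithmetic; this is not a legal test, and the guidance itself notes that a ratio above 0.80 does not guarantee compliance:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_ratio(protected_rate: float, comparison_rate: float) -> float:
    """Ratio of the protected group's selection rate to the comparison group's."""
    return protected_rate / comparison_rate

# Hypothetical example: an AI screener advances 30 of 100 applicants
# in a protected group and 60 of 120 in the comparison group.
protected = selection_rate(30, 100)    # 0.30
comparison = selection_rate(60, 120)   # 0.50
ratio = four_fifths_ratio(protected, comparison)  # 0.60

# Under the guidance, a ratio below 0.80 could indicate disparate impact.
flagged = ratio < 0.80
print(f"ratio={ratio:.2f}, flagged={flagged}")
```

In this sketch a ratio of 0.60 falls below the 80 percent benchmark, so the tool's output would warrant further scrutiny under the guidance.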

If the AI tool is found to have an adverse impact under Title VII, the employer can take measures to reduce the impact or select a different tool. Failure to adopt a less discriminatory algorithm that was considered during the design process may subject the employer to liability.

Under both EEOC guidance documents discussed here, an employer will be held liable for the actions or inactions of an outside vendor who designs or administers an algorithmic decision-making tool on its behalf and cannot rely on the vendor’s assessment of the tool’s disparate impact.

 
