Petition update
Regulate the Use of AI in Talent Software
Maria Rocha
PA, United States
Jan 22, 2025

Theme III. What forms of algorithmic recruitment discrimination exist
In the recruitment process, algorithmic bias can manifest in terms of gender, race, skin color, and personality.

Gender
Gender stereotypes have infiltrated the word-embedding frameworks used in natural language processing (NLP) and machine learning (ML). Munson’s research indicates that “occupational picture search outcomes slightly exaggerate gender stereotypes, portraying minority-gender occupations as less professional” (Avery et al., 2023; Kay et al., 2015).
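
This kind of association can be probed directly. The sketch below is a minimal illustration assuming the gensim library and its downloadable pretrained GloVe vectors; the model name and word choices are ours for demonstration, not drawn from the cited studies. It compares how strongly occupation words associate with gendered pronouns:

```python
# Minimal probe of gender associations in pretrained word embeddings.
# Assumes gensim and its downloadable GloVe vectors; model name and
# word choices are illustrative, not from the cited studies.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe model

# Classic analogy test: "man is to programmer as woman is to ...?"
print(model.most_similar(positive=["woman", "programmer"],
                         negative=["man"], topn=3))

# Compare how strongly occupation words lean toward gendered pronouns.
for job in ["nurse", "engineer", "receptionist", "carpenter"]:
    gap = model.similarity(job, "she") - model.similarity(job, "he")
    print(f"{job:>14}: she-vs-he similarity gap = {gap:+.3f}")
```

A positive gap means the occupation sits closer to “she” than “he” in the embedding space, which is exactly the stereotype leakage the research describes.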

The impact of gender stereotypes on AI hiring poses genuine risks (Beneduce, 2020). In 2014, Amazon developed an ML-based hiring tool, but it exhibited gender bias: the system did not classify candidates in a gender-neutral way (Miasato and Silva, 2019). The bias stemmed from training the AI system on the CVs of a predominantly male workforce (Beneduce, 2020). The recruitment algorithm therefore treated this biased pattern as indicative of success, resulting in discrimination against female applicants (Langenkamp et al., 2019). The algorithm even downgraded applicants whose CVs contained keywords such as “female” (Faragher, 2019). These findings compelled Amazon to withdraw the tool and develop a new, unbiased algorithm. The discrimination was inadvertent, which reveals the core flaw of algorithmic bias: it perpetuates existing gender inequalities and social biases (O’Neil, 2016).
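
The underlying mechanism is simple enough to reproduce in a few lines. The hedged sketch below trains a toy logistic-regression screener on synthetic data in which historical hiring decisions penalized a gender-proxy feature; the data, feature names, and effect sizes are entirely invented for illustration and are not Amazon’s system:

```python
# Sketch of how historical bias enters a resume screener: train a simple
# classifier on synthetic data where past hiring penalized one group, then
# inspect the learned weight on a gender-proxy feature. All data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                   # genuinely job-relevant signal
womens_keyword = rng.integers(0, 2, size=n)  # proxy: CV mentions "women's ..."

# Historical labels: hiring rewarded skill but also penalized the proxy group.
hired = (skill + 1.0 * (1 - womens_keyword)
         + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, womens_keyword])
clf = LogisticRegression().fit(X, hired)

# The model reproduces the historical penalty as a negative weight.
print("skill weight:          %+.2f" % clf.coef_[0][0])
print("womens_keyword weight: %+.2f" % clf.coef_[0][1])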

Race
Microsoft’s chatbot Tay learned to produce sexist and racist remarks on Twitter. By interacting with users on the platform, Tay absorbed natural human language, using human tweets as its training data. Unfortunately, the initially innocuous chatbot quickly adopted hate speech targeting women and Black individuals, and Microsoft shut Tay down within hours of its release. Research has indicated that when machines passively absorb human biases, they can reflect subconscious prejudice (Fernández and Fernández, 2019; Ong, 2019). For instance, searches for names associated with Black individuals were more likely to be accompanied by advertisements featuring arrest records, even when no such records existed, whereas searches for names associated with white individuals did not prompt such advertisements (Correll et al., 2007). A study on racial discrimination revealed that candidates with white-sounding names received 50% more interview offers than those with African-American names.
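
To make the scale of that finding concrete, the illustrative snippet below computes the callback-rate ratio between two name groups, the kind of disparate-impact check auditors apply. The counts are invented to match the reported 50% gap, and the “four-fifths rule” threshold is a common regulatory screen, not something from the cited study:

```python
# Illustrative audit in the spirit of the callback study: compare
# interview-callback rates between two name groups. Counts are invented;
# a ratio below 0.8 fails the common "four-fifths rule" screen.
def callback_rate(callbacks: int, applications: int) -> float:
    return callbacks / applications

white_rate = callback_rate(callbacks=150, applications=1500)  # 10.0%
black_rate = callback_rate(callbacks=100, applications=1500)  # ~6.7%

ratio = black_rate / white_rate
print(f"Callback rates: {white_rate:.1%} vs {black_rate:.1%}")
print(f"Disparate-impact ratio: {ratio:.2f} "
      f"({'fails' if ratio < 0.8 else 'passes'} the 4/5 rule)")
```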

Skin color
In 2015, Google’s photo application algorithm erroneously labeled a photo of two Black people as gorillas (Jackson, 2021). The algorithm had been insufficiently trained to recognize images of people with dark skin tones (Yarger et al., 2023). The company publicly apologized and committed to preventing such errors immediately. However, three years later, Google discontinued its facial identification service, citing the need to address significant technical and policy issues before resuming it. Similarly, in 2017, the algorithm in a contactless soap dispenser failed to correctly detect darker skin tones, so the dispenser responded to white hands but did not detect black and brown ones. These cases serve as examples of algorithmic bias (Jackson, 2021).
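
A standard way to surface such failures is a subgroup audit: overall accuracy can look acceptable while accuracy on an underrepresented group collapses. The toy example below, in which all labels and group assignments are invented, illustrates the pattern:

```python
# Minimal subgroup audit: overall accuracy hides a large per-group gap.
# All labels and group assignments are invented for illustration.
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1, 0, 0])
group  = np.array(["light"] * 5 + ["dark"] * 5)  # sensitive attribute

print("overall accuracy:", (y_true == y_pred).mean())   # 0.6
for g in ("light", "dark"):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"{g:>5} accuracy: {acc}")                     # 1.0 vs 0.2
```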

Personality
Such algorithms assess word choice, shifts in tone, and facial expressions (via facial recognition) to infer a candidate’s “personality” and alignment with the company culture (Raso et al., 2018). Notable examples include correlating longer tenure in a specific job with “high creativity” and linking a stronger inclination toward curiosity to a higher likelihood of seeking other opportunities (O’Neil, 2016). Additionally, sentiment analysis models are employed to gauge the level of positive or negative emotion conveyed in a candidate’s sentences.
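
As a rough illustration of that last step, the sketch below scores candidate answers with an off-the-shelf sentiment model via the Hugging Face transformers pipeline. This is an assumption made for demonstration; commercial hiring vendors’ actual models are proprietary, and the sample answers are invented:

```python
# Hedged sketch of the sentiment-analysis step: score interview answers
# for positive/negative emotion with an off-the-shelf model. The library
# choice and sample answers are assumptions, not a vendor's actual system.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default model

answers = [
    "I loved collaborating with my last team and learned a lot.",
    "My previous manager and I disagreed constantly.",
]
for answer, result in zip(answers, sentiment(answers)):
    print(f"{result['label']:>8} ({result['score']:.2f}): {answer}")
```

Whether such scores actually predict job performance, rather than penalizing candidates whose speech patterns differ from the training data, is precisely what this petition questions.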
