
In a recent discussion about the impact of AI on Talent Acquisition and the damage it has caused, many people agreed, but one person argued that AI is a great predictor of success. There are many layers to this conversation, but I'll attempt to break it down:
1. AI is still considered experimental and comes with warnings about its errors.
2. It hasn’t even mastered something as fundamental as spelling.
3. AI cannot accurately interpret human emotion through video or voice biometrics—this has been proven.
4. AI is not infallible, nor is it omnipotent.
Talent software companies claim AI can predict the likelihood of success, but in reality it takes people who are already successful and reduces their success to a profile. Just last week, a TA software company published a blog discussing an apparent "talent shortage" and advocating for skills-based hiring. Yet they rely on exploitative methods to assess skills, leaving nearly two million people trapped in forced poverty.
Major talent acquisition firms engage in data brokerage, buying and selling personal information to create detailed candidate profiles. These data brokers aggregate information from public records (voter registrations, property records, court documents, census data) and commercial sources (social media activity, online purchases, loyalty programs) to score candidates based on profiling, keywords, and AI-generated predictions—all without transparency to the individuals being evaluated or the companies they've contracted with.
What's more concerning is that this happens behind the scenes, making it "virtually invisible" to clients and candidates alike. Yet TA companies shift all liability to their clients; the same blog included language to the effect that it isn't their fault if companies use the software maliciously. In essence, the technology is being misused, with the terms often buried in fine print, and it is causing widespread harm.
This is not what AI was meant for. AI is not a god, nor should it be treated as one.