
Florida driver suing Toyota Motor Corporation, Progressive Insurance, and Connected Analytic Services over alleged data sharing
'The problem with the premise is the consumer is unaware it is happening,' attorney John Yanchunis says.
#PublicAwareness & #Employer Action
As courts begin to address the #unauthorized use of personal data in automated decisions, employers using AI hiring systems must pause and ask: Who built these models? Where does the data come from? And who’s liable when they discriminate or get it wrong?
Just like the Toyota–Progressive lawsuit, where driver data was shared without consent, AI hiring tools increasingly rely on black-box databases and third-party profile scoring—without candidate knowledge, consent, or accuracy checks.
Legal liabilities may await employers adopting AI-driven hiring systems that rely on opaque data sourcing, unverified third-party #profiling, and automated #candidatescoring. These practices, widely deployed by systems such as Workday #SkillsCloud and LinkedIn’s AI-powered hiring tools, parallel the legal issues at the heart of the Toyota–Progressive data sharing lawsuit, where personal information was allegedly used without informed consent or disclosure.
Lightcast IO, through integrations and partnerships with #Workday and all major #talentsoftware companies, uses #APIs and #SDKs to provide a continuous stream of illegally obtained, unverified personal and consumer #data that includes health data, #data scraped from the internet, political leanings, census data, and more to #Profile #candidates and #score #jobseekers, gatekeeping who has access to the #workforce.
Some systems process 625 billion data points (#Workday), use 55,000 AI-inferred "skills", and scrape third-party information the moment a #jobapplication is received.
Employers are not shielded from liability simply by using vendors. If your AI system screens out qualified people, relies on biased or unverified data, or denies someone a job based on a proxy score they can't see or dispute, you could be on the hook.
Reliance on unverified employment "fit" scores, #behavioralinference models (repeatedly shown to be inaccurate), and #resumeranking #algorithms shaped by proxy variables can generate disparate outcomes without lawful justification.
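One common screen for the disparate outcomes described above is the EEOC's "four-fifths rule": if a screening tool selects one group at less than 80% of the rate of the highest-selected group, adverse impact may be indicated. The sketch below is purely illustrative; the function names and the selection numbers are hypothetical, not taken from any vendor's system.

```python
# Illustrative sketch of the EEOC four-fifths (80%) rule for adverse impact.
# All names and numbers here are hypothetical examples.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants a screening tool passed through."""
    return selected / applicants

def four_fifths_check(rate_group: float, rate_reference: float):
    """Return the impact ratio and a flag that is True when one group's
    selection rate falls below 80% of the reference group's rate."""
    ratio = rate_group / rate_reference
    return ratio, ratio < 0.8

# Hypothetical audit of an AI resume-ranking tool's pass-through rates:
rate_a = selection_rate(60, 100)   # reference group: 60% selected
rate_b = selection_rate(30, 100)   # comparison group: 30% selected

ratio, flagged = four_fifths_check(rate_b, rate_a)
print(f"impact ratio = {ratio:.2f}, adverse impact flag = {flagged}")
# impact ratio = 0.50, adverse impact flag = True
```

An employer running this kind of check on a vendor's scores, rather than trusting the vendor's own claims, is exactly the accountability the lawsuit parallels call for.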
Courts and regulators are beginning to treat the unauthorized use of #personaldata in #algorithmic decision-making as a serious legal and #ethical #breach. #Employers must not assume that contracting with #vendors insulates them from liability—they are responsible for the tools they use and the outcomes they generate.
Let’s lead with accountability. Let’s hire with transparency.
It’s time to demand ethical AI hiring—or leave it behind.
#FairAIHiring #EthicalAI #Data #HRCompliance #Workday #AIandLaw #LinkedInHiring
ACLU Electronic Frontier Foundation (EFF)