TLDR: The Mobley v. Workday lawsuit, now certified as a collective action, alleges that Workday’s AI-powered applicant screening tools discriminate against job seekers based on age, race, and disability. This pivotal case highlights the urgent need for HR leaders to understand AI’s mechanisms, ensure fair use, and implement robust governance to prevent algorithmic bias in hiring processes.
A groundbreaking lawsuit, Mobley v. Workday, is poised to redefine the landscape of artificial intelligence in human resources: in May 2025, a federal judge in California certified it as a collective action. The case alleges that Workday, a prominent provider of AI-powered applicant tracking systems, supplies tools that discriminate against job seekers based on protected characteristics, including age, race, and disability.
The plaintiff, Derek Mobley, a Black man over 40 who also experiences anxiety and depression, claims he has been rejected from more than 100 jobs he applied to since 2017 through companies using Workday’s AI screening tools. His lawsuit specifically alleges a ‘disparate impact’ on applicants over the age of 40, arguing that Workday’s algorithms, which can include personality and cognitive tests, automatically reject or advance candidates in ways that disadvantage certain groups.
Workday has argued that it does not make the final hiring decisions, positioning itself as a software provider rather than an employer. However, the court’s ruling countered this, asserting that Workday’s software is actively involved in recommending or rejecting candidates, potentially making the company liable as an ‘agent’ of employers under anti-discrimination laws. The court emphasized that for a disparate impact claim, applicants do not need to prove qualification or likelihood of hire; the harm lies in being denied a fair chance to compete.
This case is sending clear signals across the HR and technology sectors. Experts view it as a potential landmark in regulating the use of AI in hiring decisions, sparking critical conversations about fairness, accountability, and governance in recruitment practices. Regardless of the lawsuit’s ultimate outcome, it serves as a stark call to action for HR leaders. They must become proficient in understanding AI’s applications, inherent risks, and necessary governance frameworks.
The implications extend beyond Workday, affecting all HR tech vendors and employers. The Mobley v. Workday case signals a ‘new era of accountability’ for HR technology, making proactive governance, transparency, and compliance non-negotiable. Vendors now face direct exposure to liability for alleged AI discrimination, necessitating increased bias testing, transparency, and explainability in their tools. Employers, in turn, may face greater scrutiny and potential litigation, underscoring their ultimate responsibility to ensure non-discriminatory hiring practices, even when utilizing third-party technology. Screening algorithms, therefore, must be rigorously audited for bias, with both pre- and post-deployment testing becoming standard practice, complemented by essential human oversight to ensure equitable opportunities for all candidates.
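One common way such ‘disparate impact’ audits are operationalized in practice is the EEOC’s four-fifths (80%) rule of thumb: if a group’s selection rate is less than 80% of the highest group’s rate, the outcome is often treated as evidence of potential adverse impact warranting further review. A minimal sketch of such a pre-deployment check (the helper names and audit data below are hypothetical, not drawn from the case):

```python
def selection_rates(outcomes):
    """Compute selection rate (hired / applicants) per group.

    `outcomes` maps group name -> (applicants, hired).
    """
    return {g: hired / applicants for g, (applicants, hired) in outcomes.items()}


def adverse_impact_ratios(outcomes):
    """Compare each group's selection rate to the highest-rate group.

    Under the EEOC's four-fifths rule of thumb, a ratio below 0.8
    is commonly treated as a flag for potential disparate impact.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}


# Hypothetical screening-audit data: (applicants, hired) per age band.
audit = {
    "under_40": (200, 60),     # 30% selection rate
    "40_and_over": (150, 27),  # 18% selection rate
}
ratios = adverse_impact_ratios(audit)
flagged = [g for g, r in ratios.items() if r < 0.8]
# 0.18 / 0.30 = 0.6, below 0.8, so "40_and_over" would be flagged
```

A check like this is only a starting point: the four-fifths rule is a screening heuristic, not a legal standard, and serious audits pair it with statistical significance testing and ongoing post-deployment monitoring, as the paragraph above notes.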


