AI Governance on Trial: The Workday Lawsuit and What It Means for Employers
- Oludare Ogunlana
- Jun 28
A federal judge in California has ruled that a discrimination lawsuit against Workday, Inc. may proceed as a collective action, highlighting the legal and ethical risks associated with using artificial intelligence in employment decisions. The case, Mobley v. Workday, was brought by Derek Mobley, a 50-year-old Black IT professional with a disability, who claims he was unfairly rejected over 100 times by employers using Workday’s AI-powered applicant tracking system.
Workday’s software reportedly screened more than one billion job applications during the relevant period. While the company argues that its system simply matches applicants to job-specific keywords and does not determine hiring outcomes, the court noted that even facially neutral screening criteria, such as gaps in employment history or the number of years since graduation, can produce indirect discrimination against older applicants and those with disabilities.
The case has drawn national attention as a potential turning point in how courts and regulators assess the accountability of AI vendors. Legal experts have noted that even when companies do not directly make final hiring decisions, they may still be held responsible if their AI tools contribute to systemic exclusion or disparate impact.
This development raises urgent questions for employers and technology vendors alike: How should AI be governed in high-stakes decision-making environments? What safeguards are necessary to prevent automated bias from undermining equal opportunity?
At Ogun Security Research and Strategic Consulting (OSRS), we believe that AI governance must be treated as a strategic priority, not an afterthought. AI tools are increasingly embedded in core business operations—from hiring and credit scoring to surveillance and resource allocation. As such, companies must adopt proactive governance measures to mitigate legal, reputational, and ethical risks.
To help organizations avoid the type of legal challenge now facing Workday, OSRS offers tailored AI governance solutions, including:
- Development of comprehensive AI governance frameworks aligned with global standards.
- Bias testing and algorithmic audits to detect and mitigate discriminatory behavior in AI systems (a simplified example of one such check appears after this list).
- Risk assessments and data lineage reviews to identify the origin and impact of biased data.
- Regulatory readiness consulting, including compliance with emerging laws such as New York City’s Local Law 144 and the European Union’s AI Act.
- Executive and staff training on responsible AI design, deployment, and oversight.
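To make the bias-testing service above concrete, the sketch below shows one widely used audit check: the four-fifths (80%) rule from the EEOC's Uniform Guidelines, which flags potential adverse impact when a group's selection rate falls below 80% of the highest group's rate. The applicant records, group labels, and function names here are illustrative assumptions for this example only, not a description of OSRS's methodology or Workday's system.

```python
from collections import defaultdict

# Illustrative applicant records: (group, was_advanced_by_screener).
# These groups and outcomes are made-up sample data, not real results.
applicants = [
    ("under_40", True), ("under_40", True), ("under_40", False), ("under_40", True),
    ("40_plus", False), ("40_plus", True), ("40_plus", False), ("40_plus", False),
]

def selection_rates(records):
    """Compute each group's selection rate: advanced / total applicants."""
    totals = defaultdict(int)
    advanced = defaultdict(int)
    for group, passed in records:
        totals[group] += 1
        if passed:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Return each group's impact ratio vs. the top rate, and a flag
    when that ratio falls below the four-fifths threshold."""
    top = max(rates.values())
    return {g: (rate / top, rate / top < threshold) for g, rate in rates.items()}

rates = selection_rates(applicants)
for group, (ratio, flagged) in four_fifths_check(rates).items():
    status = "POTENTIAL ADVERSE IMPACT" if flagged else "ok"
    print(f"{group}: rate={rates[group]:.2f}, impact ratio={ratio:.2f} -> {status}")
```

A production audit would of course work from real screening logs, control for job-relevant qualifications, and pair the ratio with statistical significance testing across much larger samples; the point of the sketch is that a facially neutral screener can still produce a measurable disparity between protected groups.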
The consequences of poor governance are no longer hypothetical; they are a reality. The Workday lawsuit illustrates how AI, if left unchecked, can replicate and scale bias, leading to exclusionary outcomes and significant liability. Companies must move beyond compliance checklists and invest in active oversight, continuous monitoring, and ethical review of their AI systems.
Executives in every sector should take this moment seriously. AI is not neutral by design—it reflects the values, assumptions, and data on which it is built. Responsible organizations must ensure that those values uphold fairness, transparency, and accountability.
At OSRS, we stand ready to support businesses in building trustworthy, legally sound AI solutions. To learn more about how we can help you protect your organization and strengthen your AI governance posture, visit us at www.ogunsecurity.com or contact us at contact@ogunsecurity.com.