AI Hiring Bias Lawsuits Are Escalating

Expert analysis from Fisher Phillips
September 21, 2025

What the Sirius XM case means for employers—and 10 steps to stay out of court

Context

AI is speeding up hiring, but it’s also speeding up lawsuits. On August 4, Sirius XM was hit with a federal discrimination suit alleging its AI-powered applicant tracking system downgraded a candidate based on proxies for race. The case, Harper v. Sirius XM Radio, is the latest in a wave of legal challenges targeting algorithmic bias in employment decisions.

Why It Matters

For business leaders, the stakes are clear: a single biased algorithm can expose an organization to class-action litigation, reputational damage, and multimillion-dollar liability. Employers don’t get a free pass just because the bias comes from a vendor’s software. Courts and regulators are increasingly holding companies accountable for how AI is deployed in hiring.

Core Idea

AI is not a shield from discrimination law. If your hiring tools replicate bias—whether intentional or not—you could be the next test case.

What You Need to Know

  • The Sirius XM Lawsuit: Filed by a pro se applicant, the complaint alleges the iCIMS hiring platform used historical data and demographic proxies—like zip code and school history—that disproportionately penalized Black candidates. The plaintiff is seeking damages, class certification, and an injunction to halt the system.
  • Not an Isolated Case: Similar claims are moving through the courts and agencies. Workday faces an age-bias class action, Aon and HireVue are under scrutiny from the ACLU, and even Epic Games is battling union charges over AI replacing human voice actors. The pattern is unmistakable: AI is under legal siege.
  • Two Legal Theories at Play: Plaintiffs are alleging both disparate treatment (intentional design bias) and disparate impact (unintentional but discriminatory outcomes). Both are actionable under federal law.

10 Action Steps for Employers Using AI in Hiring

  1. Establish Governance: Put guardrails in place before deploying AI, aligned with NIST’s AI Risk Management Framework.
  2. Vet Vendors Rigorously: Demand bias testing, data transparency, and contractual indemnification.
  3. Be Transparent With Candidates: Clearly disclose how and when AI is used.
  4. Offer Accommodation Paths: Provide alternatives or human review where possible.
  5. Tie AI Criteria to Job Needs: Ensure all prompts and evaluations relate directly to job functions.
  6. Keep Humans in the Loop: Train HR teams to audit and override AI outputs.
  7. Document Decisions: Keep clear records of objective criteria and overrides.
  8. Audit Accessibility: Check for disability compliance regularly.
  9. Monitor Disparate Impact: Run periodic checks for bias across age, race, gender, and disability; a minimal example of one such check appears after this list.
  10. Stay Ahead of the Law: Track EEOC guidance, new rulings, and pending legislation.
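
The article doesn't prescribe a method for step 9, but a common screening heuristic is the EEOC's four-fifths rule: compare each group's selection rate to the most-selected group's rate and treat a ratio below 0.8 as a red flag. The Python sketch below is purely illustrative; the applicant records, group labels, and 0.8 threshold are hypothetical placeholders, not a vendor API or legal advice.

from collections import Counter

# Hypothetical applicant records: (group label, was the candidate selected?).
# In practice these would come from your ATS audit logs.
applicants = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    # Selection rate per group: selected count / total applicants in that group.
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_check(records, threshold=0.8):
    # Flag any group whose selection rate falls below `threshold` times the
    # highest group's rate (the EEOC four-fifths rule of thumb).
    rates = selection_rates(records)
    top_rate = max(rates.values())
    return {
        group: {"rate": rate, "ratio": rate / top_rate,
                "flagged": rate / top_rate < threshold}
        for group, rate in rates.items()
    }

for group, result in four_fifths_check(applicants).items():
    print(f"{group}: rate={result['rate']:.2f}, "
          f"ratio={result['ratio']:.2f}, flagged={result['flagged']}")

Keep in mind that the four-fifths rule is a screening heuristic, not a safe harbor: a statistically significant disparity can still support a disparate impact claim even when the ratio clears 0.8, so flagged results should go to counsel, not just a dashboard.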

Closing Thought

AI is not a legal blind spot. It’s a magnifying glass. Employers who fail to build fairness and accountability into their systems are inviting scrutiny, lawsuits, and reputational fallout. The playbook is clear—govern, audit, and document now, before a plaintiff does it for you.

About Fisher Phillips

Fisher Phillips, founded in 1943, is a leading law firm dedicated to representing employers in labor and employment matters. With nearly 600 attorneys across 38 U.S. offices and 3 in Mexico, the firm combines deep expertise with innovative solutions to help businesses navigate workplace challenges.
