Case Law On Criminal Liability For Algorithmic Bias Leading To Financial Or Legal Harm

Case 1: Mobley v. Workday, Inc. (U.S.)

Facts:
A job applicant, Mr. Mobley, alleged that he applied to some 80-100 jobs with employers that used the screening software of vendor Workday, Inc. He claimed the AI-powered platform automatically rejected qualified candidates, including himself, on the basis of age, race, and disability. In many cases the software screened, scored, and filtered out applicants without any human review.
Legal Issues:

Whether the vendor (Workday), rather than the traditional employer, can be held liable under U.S. anti-discrimination laws (Title VII, the ADEA, the ADA) for bias in its algorithmic screening tool.

Whether algorithmic decision‐making tools that perform traditional hiring functions – screening, filtering, ranking – trigger liability as an “agent” or “employment agency” under anti‑discrimination law.

How disparate-impact theory (bias in outcomes rather than in intent) applies to algorithmic systems as opposed to overt human decision-makers.
Holding/Outcome:
A U.S. federal district court refused to dismiss the plaintiff's disparate-impact claims, holding that the alleged facts plausibly showed that Workday's software performs a traditional hiring function and may therefore act as an agent of the employer, subject to liability. The case is proceeding toward possible class/collective certification.
Implications:

It signals that algorithm vendors cannot hide behind “just supplying the software” to avoid liability when bias exists.

Companies using AI tools must evaluate the models themselves for protected-class bias, not just their human decision-makers.

Vendors must design, deploy, and audit algorithmic tools to avoid disparate impact, or face class actions (a simple audit sketch follows this list).

Although not criminal, the financial liability (damages, injunctive relief, remediation) is significant.
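
A common first-pass screening audit is the "four-fifths rule" comparison of selection rates across groups. The sketch below is a minimal illustration in Python, assuming the platform logs outcomes with a group column and a pass/fail column; the DataFrame, column names, and data are hypothetical, and a ratio below 0.8 is a conventional red flag rather than legal proof of disparate impact.

```python
# Minimal sketch of an adverse-impact ("four-fifths rule") check on
# automated screening outcomes. The DataFrame, column names, and data
# are hypothetical; adapt to however your platform logs decisions.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, passed_col: str) -> pd.Series:
    """Share of applicants in each group who passed the automated screen."""
    return df.groupby(group_col)[passed_col].mean()

def adverse_impact_ratios(rates: pd.Series) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 are a common (non-dispositive) red flag."""
    return rates / rates.max()

# Toy example: age bands and pass/fail outcomes from the screen.
applicants = pd.DataFrame({
    "age_band": ["under_40", "under_40", "under_40", "40_plus", "40_plus", "40_plus"],
    "passed_screen": [1, 1, 0, 1, 0, 0],
})
rates = selection_rates(applicants, "age_band", "passed_screen")
ratios = adverse_impact_ratios(rates)
print(ratios[ratios < 0.8])  # groups falling below the four-fifths threshold
```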

Case 2: Louis et al. v. SafeRent Solutions (U.S. Housing/tenant‑screening)

Facts:
Prospective renters using housing vouchers (primarily low-income and disproportionately racial/ethnic minority applicants) were denied tenancy based on scores produced by the algorithmic tenant-screening tool of vendor SafeRent Solutions. The algorithm weighed credit history, non-rental debt, and voucher status to produce a “tenant score”. The suit alleged that Black and Hispanic applicants who used vouchers were disproportionately assigned low scores, leading indirectly to rental denials.
Legal Issues:

Whether algorithmic scoring systems in housing constitute a discriminatory practice under the Fair Housing Act (or state equivalents) via disparate impact.

Whether the vendor/provider has legal liability for biased outcomes produced by the algorithm.

The standard of proof when bias is embedded in the data or training rather than in explicit use of protected features.
Holding/Outcome:
SafeRent settled the class action, paying a multi-million-dollar settlement and agreeing to stop using automated scores for voucher users for a defined period and to change its screening model. While the settlement included no admission of wrongdoing, the payment and required changes mark an enforcement result.
Implications:

Demonstrates algorithmic bias can cause financial/legal harm (denial of housing) and trigger enforcement.

Vendors and users of automated scoring must test for protected-class bias, especially when decisions affect basic needs such as housing (see the sketch after this list).

Enforcement may involve injunctive changes (model redesign), not just damages.
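
As a minimal illustration of that kind of testing, the sketch below applies a chi-square test to approval and denial counts for voucher and non-voucher applicants. All counts and names are invented; a statistically significant disparity is a signal to investigate the model, not by itself a legal conclusion of disparate impact.

```python
# Minimal sketch: do denial rates differ significantly between voucher
# holders and non-voucher applicants? All counts are invented.
from scipy.stats import chi2_contingency

#               denied  approved
voucher     = [    62,       38]   # hypothetical voucher applicants
non_voucher = [    29,       71]   # hypothetical non-voucher applicants

chi2, p_value, dof, expected = chi2_contingency([voucher, non_voucher])

print(f"denial rate (voucher):     {voucher[0] / sum(voucher):.0%}")
print(f"denial rate (non-voucher): {non_voucher[0] / sum(non_voucher):.0%}")
print(f"chi-square p-value: {p_value:.4f}")
# A small p-value suggests the disparity is unlikely to be chance alone;
# it does not by itself establish a disparate-impact claim.
```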

Case 3: Regulatory Action by Consumer Financial Protection Bureau (U.S. Credit Models)

Facts:
Credit-lending institutions increasingly rely on machine-learning and other complex algorithmic “black box” models to approve or deny consumer credit. The CFPB issued a circular reminding creditors that the Equal Credit Opportunity Act (ECOA) requires specific explanations of adverse decisions, and that using opaque algorithms does not absolve them of liability for discriminatory impact.
Legal Issues:

Whether relying on complex, non‐transparent algorithmic credit models violates consumer protection and anti‑discrimination law when adverse action (credit denial) occurs.

Whether “black box” algorithms produce disparate impact (by race, age, gender, or income) and how the law handles that.
Holding/Outcome:
While this is not a single case with a judicial decision, the CFPB’s formal guidance creates regulatory exposure for algorithmic models: creditors must provide specific reasons when denying credit, even when the decision is algorithmic, and must ensure their models do not produce disparate impact.
Implications:

Algorithmic bias in financial decision‑making can lead to legal/regulatory harm (denial of credit, financial exclusion).

Creditors and vendors must validate models for fairness, produce explanations for adverse actions, and audit model outcomes (a simplified reason-code sketch follows this list).

The regulatory threat (enforcement, fines, supervisory action) is real even absent full court precedent.
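
To make the explanation requirement concrete, the sketch below derives “principal reasons” for a denial from a hypothetical additive scoring model by ranking each feature's contribution relative to a baseline applicant. The feature names, weights, threshold, and baseline are assumptions for illustration only; real ECOA / Regulation B adverse-action notices require legal and modelling review well beyond this.

```python
# Minimal sketch of adverse-action "principal reasons" from a simple
# additive scoring model. Feature names, weights, threshold, and the
# baseline applicant are all hypothetical.
FEATURE_WEIGHTS = {                    # points contributed per unit of each feature
    "years_of_credit_history": 4.0,
    "on_time_payment_rate": 55.0,
    "credit_utilization": -40.0,
    "recent_inquiries": -6.0,
}
APPROVAL_THRESHOLD = 70.0

def score(applicant: dict) -> float:
    return sum(FEATURE_WEIGHTS[f] * applicant[f] for f in FEATURE_WEIGHTS)

def adverse_action_reasons(applicant: dict, baseline: dict, top_n: int = 2) -> list:
    """Rank features by how far they pull this applicant's score below a
    baseline (e.g., an average approved applicant) and report the worst."""
    gaps = {f: FEATURE_WEIGHTS[f] * (applicant[f] - baseline[f]) for f in FEATURE_WEIGHTS}
    worst = sorted(gaps, key=gaps.get)[:top_n]
    return [f"{f} (contribution vs. baseline: {gaps[f]:+.1f} points)" for f in worst]

applicant = {"years_of_credit_history": 2, "on_time_payment_rate": 0.85,
             "credit_utilization": 0.9, "recent_inquiries": 5}
baseline  = {"years_of_credit_history": 9, "on_time_payment_rate": 0.98,
             "credit_utilization": 0.3, "recent_inquiries": 1}

if score(applicant) < APPROVAL_THRESHOLD:
    for reason in adverse_action_reasons(applicant, baseline):
        print(reason)
```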

Case 4: Use of Risk‑Assessment Algorithm in Criminal Sentencing – State v. Loomis (Wisconsin)

Facts:
The defendant’s sentencing included reference to a proprietary risk-assessment algorithm (COMPAS) that assigned him a “high risk” recidivism score. The defendant challenged the use of the algorithm, arguing that it lacked transparency and individualisation and could incorporate demographic bias (gender, race).
Legal Issues:

Whether algorithmic risk scores used in sentencing violate due process (right to challenge, fairness) and equal protection when bias may exist.

What duty courts have to disclose an algorithm’s limitations, allow challenges, and ensure human oversight.
Holding/Outcome:
The Wisconsin Supreme Court held that the use of COMPAS did not violate due process per se, but it emphasised safeguards: algorithmic scores cannot be determinative, they must be supplemented by individualised evidence, and defendants must be warned of the tool’s limitations.
Implications:

While not directly about financial harm, this case shows how algorithmic bias in criminal justice can cause legal harm (longer sentences).

Establishes that algorithms influencing legal outcomes must be transparent, auditable, and subject to oversight (a minimal audit sketch follows this list).

Vendors and courts must consider fairness/bias in algorithmic tools impacting liberty or legal rights.
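
One widely discussed audit in the risk-score context compares false positive rates across groups, i.e. how often people who did not reoffend were nonetheless labelled high risk. The sketch below computes that rate from invented records; the group labels, data, and choice of metric are assumptions, and other fairness definitions can point in different directions.

```python
# Minimal sketch: false positive rate of a "high risk" label by group.
# All records are invented; a real audit needs outcome data, careful
# cohort definitions, and an explicit choice of fairness metric.
from collections import defaultdict

# (group, predicted_high_risk, reoffended)
records = [
    ("group_a", True,  False), ("group_a", True,  True),
    ("group_a", False, False), ("group_a", True,  False),
    ("group_b", True,  True),  ("group_b", False, False),
    ("group_b", False, False), ("group_b", True,  False),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, high_risk, reoffended in records:
    if not reoffended:                    # people who did not reoffend
        counts[group]["negatives"] += 1
        if high_risk:                     # ...but were flagged high risk
            counts[group]["fp"] += 1

for group, c in sorted(counts.items()):
    print(f"{group}: false positive rate = {c['fp'] / c['negatives']:.0%}")
```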

Case 5: Child Welfare Algorithm Scrutiny – Allegheny Family Screening Tool (U.S.)

Facts:
A child welfare agency in Pennsylvania used an algorithmic tool, the Allegheny Family Screening Tool (AFST), to assess the risk of maltreatment and decide which families to investigate further. Concerns were raised that the algorithm disproportionately flagged families with disabilities or low socio-economic status and lacked transparency and disclosure, raising civil rights issues.
Legal Issues:

Whether algorithmic decision systems used by public agencies that produce a disparate impact on protected classes expose those agencies to liability for discrimination.

Whether affected individuals (families) suffer legal or financial harm (investigations, removals) from biased algorithmic decisions, and what remedies exist.
Holding/Outcome:
There has not yet been a full judicial adjudication of liability, but the U.S. Department of Justice initiated civil rights scrutiny, complaints were filed, and the press reported potential discrimination. The matter demonstrates growing regulatory oversight of algorithmic systems that cause legal harm.
Implications:

Algorithmic bias in public decision making (child welfare) can lead to legal or financial harm for vulnerable groups.

Demonstrates evolving enforcement: even absent criminal liability, agencies can be held accountable through civil rights law and regulatory supervision.

Suggests that algorithmic tools used by government agencies face heightened scrutiny.

Key Themes & Future Risks

Liability is increasingly real: Vendors and users of algorithmic decision‑systems are facing legal/regulatory exposure when bias leads to adverse outcomes (hiring, housing, credit, public‐benefit decisions).

Financial/legal harm is tangible: Denial of housing, credit, employment, or legal rights due to biased algorithms constitutes meaningful harm and triggers action.

Criminal liability remains rare, but not impossible: While most actions are civil/regulatory, as algorithms take on higher‑stakes decisions (e.g., criminal justice, public welfare) the risk of criminal or regulatory sanctions (fraud, civil rights violations) may rise.

Transparency, auditability, and human oversight matter: Courts and regulators emphasise that algorithms must be tested for fairness, produce explainable outputs, allow human oversight, and give affected individuals an opportunity to challenge them.

Disparate impact doctrine is central: Bias is often not intentional discrimination but arises from training data, proxies for protected attributes, and legacy bias, so disparate-impact frameworks apply.

Vendor liability expanding: Technology suppliers (e.g., hiring software vendors) may be treated as agents or employment decision‐makers if their software effectively takes over traditional human functions.

Emerging regulatory frameworks: Guidance (e.g., credit, housing) and statutory rules are being developed to address algorithmic decision‐systems. Vendors/users must proactively comply.

Concluding Thoughts

Although there is not yet a large volume of criminal prosecutions for algorithmic bias, the civil liability and regulatory enforcement landscape shows fast-emerging precedent for holding actors accountable when algorithmic systems cause legal or financial harm through bias. Organisations deploying algorithmic decision systems should treat bias mitigation, auditing and validation, transparency, and human oversight as critical compliance areas.
