Case Law on the Misuse of AI Profiling Leading to Racial Discrimination Prosecutions

Case 1: R (Bridges) v Chief Constable of South Wales Police (UK)

Facts:
The case concerned South Wales Police’s deployment of an automated facial‑recognition system (“AFR Locate”): the technology matched live camera images of passers‑by against watch‑lists of persons of interest. The claimant argued that deploying the system disproportionately targeted, or risked disadvantaging, people from racial minorities.
Legal Issues:

Whether the use of the automated facial‑recognition system breached the Public Sector Equality Duty under the UK’s Equality Act 2010, which requires public bodies to have due regard to the need to eliminate indirect race discrimination.

Whether the police had satisfied the duty to assess and mitigate bias in the AI system (i.e., did they test the system for racial bias, check its training data sets, and monitor error rates across ethnicities).
Outcome & Reasoning:

The Court of Appeal held that the South Wales Police had not satisfied their obligation: they had not retained sufficient data to assess error rates by race (for example, images of non‑matches were deleted, making a bias assessment impossible), and they could not verify whether the system’s training data was demographically imbalanced.

The Court therefore found that the police had failed to take reasonable steps to check for racial bias in the automated decision system; the profiling system risked discrimination even though actual discriminatory outcomes were not proved.
Key Take‑aways:

Even though the AI system is facial recognition rather than “profiling” in the broader sense, the reasoning addresses the misuse of AI in decisions affecting individuals and the obligation of public bodies to check for bias.

The case shows that organisations must document, audit and monitor AI systems for differential error rates across protected groups (a minimal audit sketch follows this list).

Although not a criminal prosecution for racial discrimination (in the sense of a criminal court convicting someone), it is a landmark legal decision about AI profiling and race discrimination liability.
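To make the auditing obligation concrete, the following minimal sketch shows one way an organisation might compare a face‑matching system’s false‑match rates across demographic groups. It is illustrative only and not drawn from the judgment; the group names and records are hypothetical.

```python
from collections import defaultdict

# Hypothetical audit records: (group, predicted_match, actual_match).
# The Bridges court faulted the police precisely for not retaining
# data of this kind, which made any bias assessment impossible.
records = [
    ("group_a", True, False),
    ("group_a", False, False),
    ("group_a", True, True),
    ("group_b", True, False),
    ("group_b", True, False),
    ("group_b", False, False),
]

def false_match_rates(records):
    """False-match rate per group: false positives / true non-matches."""
    false_pos = defaultdict(int)
    non_matches = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:                 # person was not on the watch-list
            non_matches[group] += 1
            if predicted:              # ...but the system flagged them anyway
                false_pos[group] += 1
    return {g: false_pos[g] / n for g, n in non_matches.items() if n}

for group, rate in sorted(false_match_rates(records).items()):
    print(f"{group}: false-match rate = {rate:.0%}")
```

A persistent gap between groups in output like this is exactly the kind of differential error rate an audit should surface and investigate.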

Case 2: Pa Edrissa Manjang v Uber Eats UK Ltd (UK)

Facts:
A Black courier (“Mr Manjang”) working for Uber Eats in the UK was required by the company’s app to pass a facial‑recognition “selfie verification” step (using AI/algorithmic biometric software) in order to access shifts. The algorithm repeatedly failed to recognise him, resulting in his access being blocked and his eventual dismissal. He alleged that the facial‑recognition software was racially biased (i.e., markedly less accurate for people of African descent or with darker skin tones).
Legal Issues:

Whether the company’s requirement to use the AI facial‑recognition system, and the fact that the driver was repeatedly asked to resubmit photos and then dismissed, amounted to indirect race discrimination.

Whether the employer had taken reasonable steps to ensure the algorithm did not disadvantage a racial minority group (i.e., had they audited/validated the software for bias).
Outcome & Reasoning:

The case did not conclude with a full tribunal decision on the merits; it settled. Nevertheless, legal commentary and reports treat it as a successful push for redress: Uber agreed a settlement payout with the courier to resolve the claim of indirect racial discrimination arising from algorithmic facial‑recognition bias.

The settlement and the public attention signalled that companies using AI/biometrics in employment must be alert to racial‑bias risks.
Key Take‑aways:

AI profiling (via facial‑recognition verification) in employment settings can lead to discriminatory outcomes affecting racial minorities; companies using such systems have legal risk.

Although not a criminal case, it demonstrates actionable discriminatory profiling via AI and suggests that avoiding misuse requires fairness audits and transparency.

It underscores that algorithmic decisions (in app‑driven gig‑work) can impact employment access and must be scrutinised.

Case 3: Louis v SafeRent Solutions – AI‑Based Rental Screening Algorithm (U.S.)

Facts:
In Massachusetts, a Black woman (and others) alleged that they were denied tenancy because of an algorithmic screening tool used by SafeRent Solutions. The algorithm scored applicants for rental properties; the claim was that the screening system disadvantaged people of colour and those using housing vouchers, creating a disparate impact on protected racial groups.
Legal Issues:

Whether the use of algorithmic profiling in housing (credit/renter‑risk screening) that disproportionately rejects minority applicants constitutes racial discrimination (under the Fair Housing Act or equivalent).

Whether algorithmic profiling counts as a “decision‑making system” under discrimination law (i.e., the firm’s reliance on the automated tool rather than human discretion).
Outcome & Reasoning:

The case concluded with a settlement in which SafeRent agreed to pay over US$2.2 million and alter its screening system, even though no judicial opinion finding liability was published.

The settlement indicates regulator and plaintiff pressure for algorithmic screening systems to be justifiable and free of disparate racial impact.
Key Take‑aways:

AI profiling in non‑employment contexts (housing/rental) can likewise lead to outcomes disadvantaging racial minorities.

Even without a judicial finding of liability, the case acts as a persuasive reference point for algorithmic‑bias risk in housing.

It highlights that algorithmic decision‑making systems must be audited for racial fairness or face discrimination claims.

Case 4: AI Predictive Policing – Amnesty International Report & UK Law Enforcement (UK)

Facts:
While not a classical court case with a final judgment, the UK has seen litigation and advocacy challenging law enforcement’s use of algorithmic/predictive‑policing tools, which rely on historical crime data and the profiling of individuals and areas; critics argue these algorithms perpetuate racial bias, with Black and minority‑ethnic communities more often targeted. Amnesty International’s report argued that predictive‑policing systems in the UK amounted to “modern racial profiling”.
Legal Issues:

Whether police use of algorithmic profiling, predicting crime risk and allocating stop‑and‑search or surveillance on the basis of an algorithmic score, amounts to racial discrimination under the Equality Act or human‑rights law (the rights to non‑discrimination and equal protection).

Whether law enforcement’s use of data-driven profiling and algorithmic risk assessment must satisfy fairness, transparency, accountability obligations, and avoid disparate impact on racial groups.
Outcome & Reasoning:

In the UK context, while no criminal conviction of a police force for algorithmic bias has yet been publicly recorded, advocacy and litigation have pressured reform; some local forces have suspended or reviewed their predictive‑policing tools.

The case scenario demonstrates legal risk: algorithmic profiling by public bodies (police) must be subject to rigorous review for racial discrimination.
Key Take‑aways:

The misuse of AI profiling in policing can perpetuate racial discrimination; public authorities using such systems face legal and reputational risk.

Key legal principle: even if algorithmic profiling is “data‑driven”, if it disproportionately targets a protected group, it may breach anti‑discrimination law.

The case underlines the need for transparency about algorithmic profiling tools in public enforcement contexts.

Summary of Key Themes and Legal Principles

From these cases, the following broad legal‑analytic insights emerge:

Profiling + algorithmic decision‑making = heightened risk of racial discrimination: When AI or automated systems make decisions or act as gate‑keepers (employment access, service access, law enforcement stops), they can replicate or amplify historical bias and thus lead to discriminatory outcomes.

Protected characteristic & disparate impact matter: Even if the decision‑maker (algorithm) does not explicitly use race, a system that disproportionately disadvantages a racial group may constitute indirect discrimination. Claimants must show statistical disparity, and respondents must show justification and mitigation (a worked disparity check appears below). Cases such as Manjang and SafeRent illustrate this.
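As a rough illustration of the statistical‑disparity step, one widely cited heuristic from US employment law is the “four‑fifths rule”: a selection rate for one group below 80% of the most‑favoured group’s rate flags potential adverse impact. It is not a UK legal test, and the numbers below are entirely hypothetical; this is a minimal sketch of the arithmetic, not a legal standard.

```python
# Hypothetical screening outcomes; all figures are illustrative.
outcomes = {
    "group_a": {"approved": 80, "applicants": 100},
    "group_b": {"approved": 55, "applicants": 100},
}

# Selection rate per group: approvals / applicants.
rates = {g: v["approved"] / v["applicants"] for g, v in outcomes.items()}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best  # "impact ratio" against the most-favoured group
    flag = "potential adverse impact" if ratio < 0.8 else "within four-fifths threshold"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

A flagged ratio does not itself prove discrimination; it shifts attention to whether the respondent can justify the disparity and show mitigation, mirroring the burden structure described above.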

Human actors remain responsible: The fact that an AI or algorithm made the decision does not relieve the employer, service provider or public authority of liability. They must ensure the system is fair, monitor its outputs, test for bias, and provide redress. The Bridges case emphasises that public authorities must audit for bias.

Transparency, auditing and data‑governance are key: Because AI systems often operate opaquely, the law increasingly demands that entities using them maintain validation and audit processes to ensure they do not disadvantage racial minorities. Failure to test the system or check its training data can lead to liability (Bridges).

Context matters: employment, housing, policing: Discriminatory profiling via AI occurs across contexts. Employment (Manjang), housing (SafeRent), law enforcement (predictive policing) each raise unique issues but share the core concept of algorithmic profiling leading to racial disadvantage.

Criminal prosecution vs civil/regulatory liability: Most cases so far have been civil or regulatory (employment tribunals, settlements). Full criminal convictions of entities for racial discrimination via AI profiling are rarer. But the legal exposure exists, especially for public‑authority profiling (law enforcement) and possibly for systemic corporate algorithmic discrimination.
