Case Studies On Wrongful Imprisonment Due To Flawed Predictive Policing Software

Case 1: State of Wisconsin v. Eric Loomis (COMPAS Risk Assessment)

Facts:

Eric Loomis was sentenced in Wisconsin in 2013. The court used COMPAS, an AI risk-assessment tool, to estimate his likelihood of recidivism.

The COMPAS score indicated a high risk, influencing the judge to impose a longer sentence.

Loomis later argued that the algorithm was biased, opaque, and violated his constitutional rights.

Legal Issues:

Reliance on proprietary AI software in sentencing raises due process concerns.

Alleged racial bias in AI outputs, disproportionately classifying Black defendants as high-risk.

Lack of transparency: the defendant could not examine how COMPAS determined the score.

Outcome:

In State v. Loomis (2016), the Wisconsin Supreme Court upheld the sentence but noted that COMPAS could not be the sole factor in sentencing.

The Court warned against over-reliance on black-box AI in criminal justice.

Significance:

Landmark case on AI in sentencing; highlighted risks of wrongful imprisonment due to biased algorithms.

Set a precedent for transparency and careful use of AI in legal decisions.

Case 2: Northpointe/COMPAS Bias Allegations (ProPublica Study)

Facts:

A 2016 investigative study by ProPublica analyzed COMPAS, used in several U.S. jurisdictions, and found it disproportionately flagged Black defendants as high-risk for recidivism.

In some cases, defendants received longer sentences or were denied bail based on flawed AI predictions.

Legal Issues:

Potential violation of equal protection principles under the Fourteenth Amendment.

Raises questions about algorithmic fairness, accountability, and the right to challenge AI-based evidence.

Outcome:

No single defendant’s sentence was overturned directly by the study, but the findings prompted courts, legislators, and advocacy groups to reconsider AI risk tools.

Led to increased scrutiny, audits, and calls for transparency in predictive policing software.
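
To make the kind of disparity ProPublica measured concrete, the sketch below compares false positive rates across two groups, that is, how often people who did not reoffend were nonetheless flagged high-risk. It is a minimal illustration on synthetic records: the field layout, the group labels, and the 20% disparity threshold are assumptions chosen for demonstration, not details of COMPAS or of the ProPublica methodology.

```python
from collections import defaultdict

# Synthetic records: (group, predicted_high_risk, reoffended).
# These values are illustrative only, not real COMPAS data.
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, True),  ("B", False, False), ("B", True,  False),
]

def false_positive_rate(rows):
    """Share of non-reoffenders who were still flagged high-risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return None
    flagged = sum(1 for r in non_reoffenders if r[1])
    return flagged / len(non_reoffenders)

# Group the records and compare false positive rates across groups.
by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

rates = {g: false_positive_rate(rows) for g, rows in by_group.items()}
for group, rate in sorted(rates.items()):
    label = "n/a" if rate is None else f"{rate:.2f}"
    print(f"group {group}: false positive rate = {label}")

# A large gap between groups is the kind of disparity that would
# prompt a deeper audit of the underlying risk tool.
values = [r for r in rates.values() if r is not None]
if values and max(values) - min(values) > 0.20:  # assumed audit threshold
    print("Warning: false positive rates differ substantially across groups.")
```

Run on real decisions rather than synthetic rows, this kind of check is one simple starting point for the audits and transparency measures described above.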

Significance:

Demonstrates how systemic flaws in AI can indirectly lead to wrongful imprisonment.

Influenced reforms and caution in AI adoption in the criminal justice system.

Case 3: Illinois Bail Reform – Harm from Algorithmic Risk Scores

Facts:

Illinois implemented AI-based pretrial risk assessment tools to predict the likelihood of flight or re-offense.

Several defendants with low actual risk were classified as high-risk and denied bail, effectively incarcerating them unnecessarily before trial.

Legal Issues:

Potential violation of due process: detention based on predictive models rather than individualized assessment.

Risk of algorithmic bias affecting racial minorities disproportionately.

Outcome:

Court reviews and media investigations led to software revisions and to policies limiting sole reliance on AI for pretrial detention.

Some defendants challenged their detention by citing flawed AI risk assessments and were subsequently released.

Significance:

Illustrates wrongful imprisonment arising from pretrial AI tools.

Highlights the need for human oversight and independent verification of AI outputs.

Case 4: Chicago Predictive Policing (Strategic Subject List – SSL)

Facts:

The Chicago Police Department developed a predictive policing system known as the Strategic Subject List (SSL) to identify individuals at high risk of being involved in violent crime.

Some individuals were repeatedly flagged, leading to increased police scrutiny, stops, and arrests, even when they had no criminal intent or prior conviction.

Legal Issues:

AI-driven policing potentially violated Fourth Amendment rights (unreasonable searches and seizures).

Risk of entrenching racial bias and wrongful targeting based on flawed predictive metrics.

Outcome:

Investigations and lawsuits prompted Chicago to revise the system and reduce reliance on AI predictions for proactive policing.

No high-profile criminal conviction was overturned, but individuals faced unwarranted police scrutiny and arrests due to AI misclassification.

Significance:

Demonstrates how predictive policing can contribute indirectly to wrongful imprisonment.

Shows systemic impact: entire communities may face increased policing due to biased AI outputs.

Case 5: Arizona Pretrial Risk Algorithm Challenges

Facts:

Arizona implemented pretrial AI algorithms to determine bail eligibility and flight risk.

A study found the software systematically misclassified some defendants as high-risk, leading to extended pretrial detention for individuals who posed minimal actual risk.

Legal Issues:

Potential violations of constitutional rights, including due process and equal protection.

Over-reliance on opaque AI models created conditions for wrongful imprisonment.

Outcome:

Courts and legislators began requiring transparency and validation of AI pretrial risk tools.

Some defendants were released after challenging their pretrial detention based on algorithmic scores.

Significance:

Reinforces the importance of auditability and accountability in AI criminal justice tools.

Highlights how flawed predictive models can contribute directly to wrongful detention and imprisonment.

Key Observations Across Cases

AI Bias Can Cause Real Harm: Flawed predictive models often overestimate risk for minorities, leading to wrongful incarceration.

Transparency Is Critical: Black-box algorithms prevent defendants from challenging scores that affect liberty.

Human Oversight Required: Courts and police must not rely solely on AI; individualized review is essential (a minimal oversight sketch follows this list).

Legal Challenges Are Growing: Cases like State v. Loomis set important precedents for regulating AI in criminal justice.

Systemic Risk: Even without overturning convictions, predictive policing and risk assessments can cause repeated wrongful stops or pretrial detention.
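
As a minimal sketch of the human-oversight point above, a pretrial workflow can be structured so that an algorithmic score is never sufficient on its own to authorize detention. The defendant records, the 0.7 score threshold, and the routing rules below are hypothetical illustrations, not any jurisdiction's actual policy.

```python
from dataclasses import dataclass

@dataclass
class PretrialRecommendation:
    defendant_id: str
    risk_score: float          # hypothetical 0.0-1.0 algorithmic score
    rationale_available: bool  # can the score's inputs be disclosed to the defense?

def route_recommendation(rec: PretrialRecommendation) -> str:
    """Never authorize detention on the score alone.

    High scores, and any score whose basis cannot be disclosed to the
    defense, are routed to individualized judicial review. The 0.7
    threshold is an assumed example, not a real policy value.
    """
    if not rec.rationale_available:
        return "manual review: score basis must be disclosable to the defense"
    if rec.risk_score >= 0.7:
        return "manual review: high score is advisory, not determinative"
    return "judicial discretion: score noted as one factor among others"

# Usage example with hypothetical defendants.
for rec in [
    PretrialRecommendation("D-001", 0.85, True),
    PretrialRecommendation("D-002", 0.40, False),
    PretrialRecommendation("D-003", 0.25, True),
]:
    print(rec.defendant_id, "->", route_recommendation(rec))
```

The design choice reflected here is simple: the algorithm can surface information, but only an individualized, reviewable human decision can deprive someone of liberty.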
