Case Studies on Wrongful Convictions Due to Reliance on Flawed Forensic Algorithms

Case 1: Robert Lee Stinson (Wisconsin, USA)

Facts: In 1985, Robert Lee Stinson was convicted of the rape and murder of 63‑year‑old Ione Cychosz. The key forensic evidence was testimony from a forensic odontologist that bite‑mark impressions on the victim's body matched Stinson, largely on the basis that the biter appeared to be missing a front tooth, as Stinson was.
Flawed Forensic Algorithm/Methodology: Although not a computer algorithm, bite‑mark analysis lacked scientific validity: the forensic dentist claimed a "match to a reasonable degree of scientific certainty," even though the field was later shown to be unreliable.
Forensic Investigation & Re‑assessment: Years later DNA testing excluded Stinson as the perpetrator and pointed to another individual. The bite‑mark testimony was re‑evaluated and found to rest on questionable foundations.
Legal Outcome: Stinson’s conviction was overturned in 2009 after about 23 years in prison. All charges were formally dismissed.
Key takeaway: A forensic technique once presented as “scientific” but later discredited can result in prolonged wrongful imprisonment.

Case 2: Robert DuBoise (Florida, USA)

Facts: DuBoise was arrested in 1983 for the rape and murder of Barbara Grams, convicted in 1985, and spent roughly 37 years in prison. The prosecution relied heavily on bite‑mark evidence and a jailhouse informant.
Flawed Forensic Algorithm/Methodology: Again bite‑mark analysis, which forensic science has since shown to carry high error rates. The expert claimed certainty, but the method lacked robust scientific foundations.
Forensic Investigation & Re‑assessment: DNA testing decades later excluded DuBoise and linked two other men to the crime. The forensic odontologist’s testimony was discredited.
Legal Outcome: Conviction vacated in August 2020; in 2024 he received a large settlement for wrongful imprisonment.
Key takeaway: When forensic “pattern matching” is presented as science without strong validation, innocent people risk decades behind bars.

Case 3: Eric Loomis v. Wisconsin (USA)

Facts: Loomis pleaded guilty in 2013 to eluding a police officer. At sentencing, the court considered a "risk score" generated by the proprietary COMPAS tool, which assessed his likelihood of re‑offending and placed him in a high‑risk category.
Flawed Forensic/Algorithmic Evidence: The risk‑assessment algorithm's inner workings were secret (closed source), and Loomis argued that relying on it violated his due‑process rights because he could not challenge the score's validity or meaning. The conviction did not rest on the algorithm, but the case squarely raised the problem of opaque algorithmic assessments in criminal justice (a hypothetical sketch after this case illustrates why such a score is hard to contest).
Legal Outcome: The Wisconsin Supreme Court upheld the sentence but cautioned that such a score may not be the determinative factor and must be accompanied by warnings about its limitations. The U.S. Supreme Court declined to review the case, which became a landmark in debates over algorithmic fairness in criminal justice.
Key takeaway: Use of opaque algorithms in sentencing or forensic assessment can undermine transparency and fairness in the justice system.
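COMPAS itself is proprietary, so its real model cannot be reproduced here. The following is a purely hypothetical sketch of a generic logistic risk score, with invented feature names, weights, and cutoff, intended only to show why a defendant cannot meaningfully contest a number whose inputs and weights stay hidden.

```python
# Hypothetical illustration only: COMPAS is proprietary and its actual model is
# not public. This toy logistic "risk score" shows why a defendant cannot
# meaningfully contest a score when the features and weights stay hidden.
import math

# Invented weights: in a closed-source tool, these are exactly what the
# defense never gets to see or cross-examine.
HIDDEN_WEIGHTS = {"age_under_25": 1.2, "prior_arrests": 0.45, "employment_gap": 0.8}
BIAS = -2.0

def risk_score(features: dict) -> float:
    """Return a 0-1 'likelihood of re-offending' from a hidden linear model."""
    z = BIAS + sum(HIDDEN_WEIGHTS[k] * features.get(k, 0) for k in HIDDEN_WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic link

defendant = {"age_under_25": 1, "prior_arrests": 3, "employment_gap": 1}
score = risk_score(defendant)
# The court sees only the output (e.g. "high risk" if score > 0.5), not which
# inputs drove it, how they were weighted, or how the cutoff was validated.
print(f"risk score: {score:.2f} -> {'HIGH' if score > 0.5 else 'LOW'} risk")
```

In this toy example the defendant scores about 0.79 and is labeled "high risk," yet nothing in the printed output reveals whether the weights are valid, how the features were measured, or how the cutoff was chosen; that is the transparency gap Loomis complained of.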

Case 4: Robert Julian‑Borchak Williams (Michigan, USA)

Facts: In January 2020, Williams was wrongfully arrested in Detroit for allegedly shoplifting five watches, based on a surveillance image that a facial‑recognition algorithm matched to his driver's‑license photo. He spent about 30 hours in custody, and the case was later dropped.
Flawed Algorithmic Evidence: The facial‑recognition software produced a "match" even though the surveillance image was of low quality and the technology has documented higher error rates for non‑white faces. Investigators treated the algorithm's output as strong evidence despite its known bias and inaccuracy (see the sketch after this case for how such a "match" is typically generated).
Legal Outcome: Charges were dismissed; Williams later sued the City of Detroit and reached a settlement. The case spurred policy changes: the police department announced it would no longer make arrests based solely on facial‑recognition matches.
Key takeaway: Algorithmic forensic tools (here biometric matching) can produce wrongful arrests if their error‑rates, biases, and limitations are ignored.
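As a purely illustrative sketch, and not any vendor's actual system, automated face search typically reduces to ranking gallery photos by similarity of embedding vectors to a probe image. The embeddings, gallery, and threshold below are invented; the point is how a degraded probe can still yield a confident‑looking "match" to an innocent person.

```python
# Hypothetical sketch (invented data, not any vendor's system): face matching
# usually means nearest-neighbour search over embedding vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "gallery" of license-photo embeddings (in reality: thousands to millions).
gallery = {
    "person_A": [0.9, 0.1, 0.3],
    "person_B": [0.4, 0.8, 0.2],   # an innocent look-alike
    "person_C": [0.1, 0.2, 0.9],
}

# A low-quality surveillance frame yields a noisy probe embedding.
probe = [0.5, 0.7, 0.3]

best_id, best_sim = max(((pid, cosine(probe, emb)) for pid, emb in gallery.items()),
                        key=lambda t: t[1])
THRESHOLD = 0.9  # invented operating point; vendors choose this trade-off
print(best_id, round(best_sim, 3), "match" if best_sim >= THRESHOLD else "no match")
# A ranked candidate list is produced even when the true perpetrator is not in
# the gallery at all; without the false-positive rate at this threshold, an
# investigator cannot know how much weight the "match" deserves.
```

Here the noisy probe scores about 0.98 against the innocent look‑alike and is reported as a "match," which is exactly why a bare similarity score, without disclosed error rates and independent corroboration, is an investigative lead rather than evidence.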

Case 5: Porcha Woodruff (Michigan, USA)

Facts: In February 2023, Woodruff, who was eight months pregnant at the time, was arrested for an alleged carjacking based substantially on a facial‑recognition match and a photo lineup assembled after the algorithm flagged her. She spent roughly 11 hours in custody, and the charges were later dismissed.
Flawed Algorithmic/Forensic Evidence: The facial‑recognition search used an older driver's‑license photo rather than a recent one; the algorithm's reliability and error rates were not disclosed; and there was no corroborating physical evidence. The case shows how an uncorroborated algorithmic lead can drive an entire investigation.
Legal Outcome: The charges were dropped, and the arrest prompted further reform of Detroit's use of facial‑recognition technology. Woodruff was never convicted, but the case illustrates the risk of acting on algorithmic output alone.
Key takeaway: Even short detentions due to flawed algorithmic forensic tools cause harm; the stakes are high when such tools influence investigations or prosecutorial decisions.

Summary of Key Issues

Reliance on unvalidated forensic techniques or black‑box algorithms (bite‑mark matching, closed‑source risk tools, facial‑recognition algorithms) can lead to wrongful convictions or arrests.

Transparency and error rates matter: when a tool's error rates or demographic biases are unknown or go unchallenged, the risk of wrongful outcomes rises, as the rough sketch below illustrates.
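A back‑of‑the‑envelope sketch, with all figures assumed rather than taken from any real deployment, of why undisclosed error rates matter: even a low per‑comparison false‑positive rate, multiplied across a large search gallery, makes a single unverified "hit" weak evidence on its own.

```python
# Illustrative arithmetic only; the rate and gallery size are assumptions.
FALSE_POSITIVE_RATE = 0.001   # assume 0.1% of non-matching faces still "match"
GALLERY_SIZE = 1_000_000      # assume a statewide driver's-license database

expected_false_matches = FALSE_POSITIVE_RATE * (GALLERY_SIZE - 1)
# Even if the true perpetrator is in the gallery and is correctly flagged,
# roughly 1,000 innocent people are expected to be flagged alongside them.
prob_flagged_person_is_guilty = 1 / (1 + expected_false_matches)
print(f"expected false matches: {expected_false_matches:.0f}")
print(f"chance a given flagged person is the perpetrator: "
      f"{prob_flagged_person_is_guilty:.4f}")   # about 0.001 under these assumptions
```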

Quality of the data and images matters: low‑quality surveillance, poor image resolution, or mismatched identifiers degrade reliability.

Due‑process implications: defendants often cannot challenge proprietary algorithms or forensic claims presented as scientific certainty.

Policy & reform triggers: These cases have prompted changes (e.g., stopping arrests based solely on facial‑recognition output, limiting unverified forensic pattern‑matching techniques).

Forensic investigations in appeals/exonerations often require new testing (DNA, better science), review of expert testimony, and transparency of algorithmic tools.
