Case Law On Criminal Liability For Algorithmic Decision-Making In Governance

Case 1: Loomis v. Wisconsin (Wisconsin, 2016)

Facts:
A defendant, Eric Loomis, pleaded guilty to two charges arising from a drive‑by shooting incident; at sentencing the court was presented with a COMPAS risk‑assessment report that classified him as at “high risk” of recidivism. COMPAS’s internal logic is proprietary and closed‑source. The sentencing court considered the risk score along with other factors. Loomis challenged the sentence, arguing that use of the algorithm violated his due‑process rights because he could not review or challenge how the score was calculated.

Legal Issues:

Does use of an opaque algorithm in a criminal justice context violate due‑process rights when it influences sentencing?

What is the role of human judgment vs algorithmic output in governance/justice decisions?

Who is accountable if the algorithm is incorrect or biased?

Outcome:
The Wisconsin Supreme Court held that use of the algorithm did not per se violate due process, provided that (i) the defendant is informed of its use, (ii) he can challenge the accuracy of the data fed into it, and (iii) the risk score is not the determinative factor in the sentence: human judgment must remain. The court also required that sentencing courts receive a written advisement of COMPAS’s limitations, including its proprietary nature, and emphasised that an algorithmic score may assist, but cannot replace, human discretion.

Significance:
This case is one of the earliest in criminal‑justice governance to impose constraints on how algorithms may be used in decision‑making. It establishes that although an algorithm’s output may influence decisions, ultimate responsibility remains with the human decision‑maker. It also raises the accountability question: if an algorithm influences a harmful decision (e.g., an unfair sentence), liability does not attach to the algorithm itself but potentially to the human or institution deploying it.

Case 2: Gonzalez v. Google LLC (U.S. Supreme Court, 2023)

Facts:
The family of Nohemi Gonzalez, who was killed in the November 2015 Paris terrorist attacks, sued Google LLC for aiding and abetting terrorism, arguing that YouTube’s recommendation algorithm steered users toward ISIS content and thereby assisted the group’s recruitment and propaganda efforts. The case asked whether YouTube could be held liable under the Anti‑Terrorism Act, as amended by the Justice Against Sponsors of Terrorism Act (JASTA), for content its algorithm recommended, and whether algorithmic recommendation falls within the immunity granted to platforms under Section 230 of the Communications Decency Act.

Legal Issues:

Does an algorithmic recommendation system that uses user‑profile data to push certain content amount to aiding and abetting, or providing material support to, a terrorist organisation?

How do liability exemptions (e.g., Section 230 immunity) apply when algorithmic systems actively shape content exposure rather than just hosting it?

What accountability attaches to algorithmic governance mechanisms (recommendation engines) for foreseeable harm?

Outcome:
The Supreme Court vacated the Ninth Circuit’s judgment and remanded the case for reconsideration in light of its companion decision, Twitter, Inc. v. Taamneh, in which it held that the plaintiffs had not plausibly alleged aiding and abetting. The Court declined to rule on the Section 230 question, leaving open whether and when algorithmic recommendation engines lose immunity; the answer is likely to turn on how active or passive their role is treated as being.

Significance:
This case pushes the frontier of algorithmic decision‑making governance into liability territory: not just human moderators, but algorithmic curation systems themselves may be assessed for accountability when they contribute to harm. It shows that governance via algorithm (content recommendation) is now being scrutinised for legal responsibility.

Case 3: State v. Loomis (Wisconsin) — alternate reference (see Case 1)

Note: this is the same case as Case 1; the Wisconsin Supreme Court decision is formally cited as State v. Loomis, while Loomis v. Wisconsin refers to the subsequent (unsuccessful) petition to the U.S. Supreme Court. It is listed separately here to emphasise the governance context rather than sentencing alone. The key point remains: the use of an algorithmic decision‑support tool in government decision‑making (criminal justice) triggers questions of transparency, accountability and reviewability.

Case 4: Dyroff v. Ultimate Software Group, Inc. (U.S., 2017–2019)

Facts:
Kristanalea Dyroff sued Ultimate Software Group, operator of the social network “Experience Project”, after her adult son died of an overdose from drugs he had bought from a dealer he met through the site. Her claim relied in part on the allegation that the site’s data‑mining and recommendation algorithms steered users, including her son, toward illicit drug‑related discussion groups, thereby facilitating the fatal contact.

Legal Issues:

Whether an algorithmic recommendation system that surfaces certain communities or content can render the platform liable for harm caused by those communities.

Does governance responsibility extend to automated decision‑making systems that the public relies upon?

Outcome:
The claims were dismissed: the courts held that Ultimate Software was immune under Section 230 of the Communications Decency Act, treating its recommendation and notification functions as content‑neutral tools rather than as the creation of content. No criminal liability was at issue, but the litigation squarely raised whether algorithmic design choices on public or semi‑public platforms carry governance responsibilities for foreseeable harm.

Significance:
While not criminal law, this case is significant for algorithmic governance liability. It shows that plaintiffs are testing theories under which public‑facing algorithmic systems that surface or promote harmful networks could attract (at least civil) liability, and that courts are, for now, drawing the line at platform immunity where the algorithm is treated as a neutral tool.

Case 5: Transco plc v. HM Advocate (Scotland, 2003)

Facts:
Though not about algorithmic decision‑making per se, this corporate criminal liability case arose from a 1999 gas explosion at Larkhall, Scotland, which destroyed a house and killed a family of four; Transco, the gas transporter, was prosecuted for culpable homicide. Its relevance here is that it tests when corporate entities can be held criminally liable for harm caused by systems under their control.

Legal Issues:

Corporate responsibility for systems and processes causing fatal harm.

Governance of systems (even non‑algorithmic) in corporate structures.

Outcome:
The culpable homicide charge did not succeed: the High Court of Justiciary held that the necessary criminal intent could not be attributed to the company under the identification doctrine, since no single “controlling mind” could be identified. Transco was instead convicted under the Health and Safety at Work etc. Act 1974 and fined £15 million, at the time a record penalty. The case thus confirmed that companies can be held criminally liable for failures of systems under their control, while exposing the limits of common‑law homicide liability for corporations.

Significance:
This case informs algorithmic governance in two ways: firms that deploy algorithmic decision‑making systems may bear criminal liability for system failures (particularly where foreseeability and negligence can be shown), but attributing fault to a corporation for an automated system’s conduct runs into the same identification problems the court confronted in Transco. It is a governance precursor for algorithmic systems accountability.

Case 6: Emerging Administrative-Law Case on Algorithmic Decisions in Public Service (Egypt, 2023)

Facts:
In Egypt, an administrative court annulled a decision denying a promotion that had been based on an automated evaluation system. The court held that because the applicant had no way of understanding the algorithm’s logic or obtaining a meaningful explanation, the decision violated the principles of transparency and legality in administrative action.

Legal Issues:

Accountability when public decisions (governance) are made based on algorithmic systems without human reasoning or transparency.

Whether algorithmic decision‑making counts as “executive/administrative action” subject to judicial review or criminal liability frameworks.

Outcome:
The court annulled the administrative decision and emphasised that the public body remains accountable for decisions produced by algorithmic systems that lack transparency.

Significance:
Though not a criminal prosecution, it marks governance liability at the administrative level: when algorithmic decision‑making systems make public governance decisions, the entity deploying them must ensure transparency, explainability and human oversight. It suggests a pathway to criminal liability if harm and negligence are severe.

Key Observations & Analytical Themes

Human Oversight & Accountability: In none of these cases was the algorithmic system itself prosecuted; rather, the human institution, developer or decision‑maker deploying or using the algorithm was the subject of liability or challenge.

Governance of Algorithmic Systems: Liability tends to arise when algorithmic decision‑making is used in public, governmental or quasi‑public contexts (sentencing, public service, content moderation) without sufficient transparency, a human in the loop, or a meaningful capacity to challenge the outcome.

Transparency, Explainability & Due Process: A recurring theme is the challenge of opaque “black‑box” algorithms in governance decisions that affect rights or impose punishment (Loomis, Egypt administrative case). If decision‑making is delegated to algorithms without review or explanation, accountability is undermined.

Liability vs. Usefulness Tension: Algorithms may improve efficiency, but their use in governance raises risk of harm, bias, and lack of recourse. Courts are emphasising that algorithmic decision‑making must remain under human oversight to avoid liability.

Corporate & Institutional Liability for Algorithmic Governance: Cases like Transco and Dyroff show that institutions deploying decision‑making systems (algorithmic or otherwise) can face criminal or civil exposure when those systems are faulty, cause harm, or lack governance controls, even though doctrines such as corporate mens rea attribution and platform immunity currently limit how far that exposure reaches.

Criminal vs Administrative Liability: While many algorithmic decision‑making cases to date are administrative or civil, the foundational principles carry over to criminal liability: foreseeability of harm, governance failure, duty of care, and system design defects.

Concluding Thoughts

While fully fledged criminal prosecutions of autonomous algorithmic decision‑making systems (without human control) remain rare, the liability landscape for algorithmic governance is evolving fast. The cases above illustrate that:

Courts expect human decision‑makers to remain accountable even when algorithms are used.

Institutions that deploy algorithmic systems in governance or justice contexts must ensure transparency, traceability, human oversight and fairness.

Failure to govern algorithmic decision‑making systems appropriately can result in liability (civil, administrative or criminal) for the institutions deploying them.

As algorithmic decision‑making becomes more prevalent in governance (sentencing, regulation, public service), legal frameworks will increasingly treat governance failure in algorithmic systems as a matter of accountability, not mere technical error.
