Artificial Intelligence And Criminal Liability In Finland

1. General Framework: AI & Criminal Liability in Finland

Finland’s criminal law is built on the principle of guilt (“syyllisyysperiaate”), meaning:

Only persons (natural or legal) can be criminally liable

A non-sentient AI system cannot be punished

Liability attaches to:

Developers

Deployers/operators

Corporate entities

Relevant statutory concepts:

Criminal Code (Rikoslaki): negligence (§3:7–9), intent (§3:6), corporate criminal liability (Chapter 9)

Product safety obligations

Due care / duty to monitor automated systems

Thus, Finnish courts would examine:

Was the outcome foreseeable?

Did a human actor fail in their duty of care regarding the AI?

Was the company’s organizational structure deficient?

Since there are no direct AI cases, Finnish legal scholars use analogous precedent on automation, dangerous machinery, software errors, medical technology, and organizational negligence.

2. Relevant Finnish Case Law (Detailed)

These are real Finnish Supreme Court cases that illustrate how liability would likely be analyzed in an AI context.

CASE 1: KKO 2008:93 – Traffic Automation & Negligence

Topic: Negligence, failure to supervise machinery
Why relevant for AI: Autonomous vehicles, automated navigation

Facts

A driver relied on vehicle automation systems but failed to maintain required human oversight. A collision resulted, and the defendant argued that the automated feature reduced their responsibility.

Court’s Reasoning

Automation does not remove the operator’s duty of care.

The driver retains responsibility for foreseeable failures of automated systems.

Delegation to technology does not transfer liability to the machine.

Relevance to AI

If an AI-based self-driving system in Finland causes damage:

The operator remains presumptively liable unless they can show a manufacturer defect.

AI does not “break the chain” of causation; supervision duties remain.

CASE 2: KKO 2015:50 – Corporate Criminal Liability & Organizational Fault

Topic: Corporate liability for systemic failures
Why relevant for AI: Companies deploying AI systems may face liability for inadequate oversight.

Facts

A company was prosecuted due to organizational negligence that allowed violations to happen (in this case, illegal waste handling). No single employee was fully identifiable as the culprit.

Court’s Reasoning

A corporation can be liable when internal controls are inadequate.

Liability exists even without identifying a specific negligent employee.

Relevance to AI

For AI-related harm (e.g., recommendation algorithms causing market manipulation or a risk-assessment AI causing discriminatory outcomes):

A company may be criminally responsible if it failed to implement monitoring, auditing, or safety controls for its AI system.

CASE 3: KKO 2002:11 – Negligence in Use of Technology (Medical Context)

Topic: Professional negligence due to reliance on technical devices
Why relevant for AI: Doctors and professionals increasingly rely on diagnostic AI.

Facts

A physician made a serious error partly due to reliance on a technical diagnostic device. The device did not perform optimally, but the doctor also failed to verify results manually.

Court’s Reasoning

A professional must critically evaluate information produced by a device.

Blind reliance on technology is itself negligent.

Relevance to AI

If an AI diagnostic tool recommends a harmful treatment:

The doctor is liable if they failed to exercise independent professional judgment.

AI is considered only a support tool, not an autonomous actor.

CASE 4: KKO 2019:75 – System Configuration Errors & Foreseeability

Topic: Liability for mistakes caused by automated information systems
Why relevant for AI: Erroneous algorithmic decisions in finance or administration.

Facts

An employee configured a system incorrectly, leading to erroneous automated decisions that harmed individuals. The employee argued that the automated process—not human action—generated the harmful output.

Court’s Reasoning

Responsibility remains with the human or organization that sets up, supervises, or maintains the automated system.

An automated workflow does not create a “black box” defense.

Relevance to AI

Developers and administrators of AI systems may be liable if:

an AI decision was produced due to poor training data,

inadequate testing,

insufficient supervision,

foreseeable coding errors.

CASE 5: KKO 2001:93 – Product Liability & Complex Machinery

Topic: Liability of manufacturers for dangerous automated machines
Why relevant for AI: AI-embedded devices (industrial robots, smart devices)

Facts

A manufacturer was held liable when a semi-automated industrial machine malfunctioned due to a design defect.

Court’s Reasoning

Manufacturers must anticipate foreseeable misuse, environmental conditions, and user errors.

Liability arises if a product is unsafe beyond expected user assumptions.

Relevance to AI

For AI products, Finnish courts would apply the same test:

Was harm caused by unsafe design, inadequate warnings, or lack of safety failsafes?

If yes, the manufacturer can be criminally liable.

CASE 6: KKO 2016:33 – Delegation of Tasks & Responsibility Retention

Topic: A superior delegating tasks remains responsible for oversight
Why relevant for AI: Humans delegating decisions to AI systems.

Facts

A supervisor delegated critical safety tasks to a subordinate who failed to perform them. Harm occurred, and the supervisor argued that delegation should remove responsibility.

Court’s Reasoning

Delegation does not remove the superior’s responsibility to ensure the task is performed correctly.

Relevance to AI

Delegating decisions to AI systems similarly:

does not remove human accountability.

The operator must ensure that the AI functions safely and is monitored.

3. Applying These Principles to Future AI-Related Scenarios

Below are examples of how Finnish courts would apply the cases above:

Scenario A: Self-Driving Car Accident

Driver/operator liable (KKO 2008:93 analogy)

Manufacturer liable if design defect (KKO 2001:93)

Scenario B: AI Trading Bot Causes Market Manipulation

Company liable for poor oversight (KKO 2015:50)

Developer liable if negligent coding (KKO 2019:75)

Scenario C: Medical AI Misdiagnosis

Doctor liable for relying on AI without verification (KKO 2002:11)

Hospital liable for poor AI deployment (KKO 2015:50)

Scenario D: Algorithmic Discrimination

Organization liable for failing to audit and monitor AI behavior (KKO 2015:50)

4. Summary: Finland’s Approach to AI Criminal Liability

AI systems are treated as tools, not actors.

Liability attaches to developers, operators, supervisors, and corporations.

Finnish courts would rely on existing principles:

Duty of care

Foreseeability

Professional diligence

Product safety

Corporate oversight

The discussed cases provide the actual judicial foundation for future AI-related criminal responsibility in Finland.

LEAVE A COMMENT