Autonomous Vehicle Liability Cases
1. Elaine Herzberg v. Uber Technologies, Inc. (Arizona, 2018)
Facts
In March 2018, Elaine Herzberg was struck and killed by an Uber self-driving test vehicle while crossing a road in Tempe, Arizona. The vehicle was operating in autonomous mode, but a human safety driver was present behind the wheel. The car’s sensors detected Herzberg but classified her inconsistently (first as an object, then a vehicle, then a bicycle). The system did not brake automatically, and the safety driver was distracted.
Legal Issues
Who is liable when an autonomous vehicle is operating under test conditions?
Does liability fall on the human safety driver, the company, or the software developer?
Is this negligence, product liability, or both?
Liability Analysis
Criminal liability focused on the human safety driver, who failed to monitor the road.
Civil liability centered on Uber’s system design, including:
Disabled emergency braking
Poor object classification
Inadequate safety driver supervision
Outcome
Uber settled the civil wrongful death lawsuit with Herzberg’s family.
The safety driver was criminally charged with negligent homicide.
This case highlighted that companies can still be liable even when a human backup driver exists.
Significance
This was the first fatal pedestrian crash involving a self-driving car, and it showed that:
AV companies may be liable for system design choices
Human drivers can still bear responsibility in semi-autonomous systems
2. Nilsson v. General Motors (Tesla Autopilot Case, 2016)
Facts
Joshua Brown was killed when his Tesla Model S, using Autopilot, crashed into a tractor-trailer. The system failed to distinguish the white trailer against a bright sky, and the driver did not intervene.
Legal Issues
Is Tesla’s Autopilot a driver assistance system or a self-driving system?
Did Tesla overstate the system’s capabilities, leading to misuse?
Is this product liability or driver negligence?
Liability Analysis
Plaintiffs argued:
Defective design (failure to detect cross-traffic)
Failure to warn about system limitations
Tesla argued:
Autopilot required constant driver attention
The driver ignored warnings
Outcome
The case was settled confidentially
No formal finding of Tesla’s liability in court
Significance
This case established a key legal distinction:
Level 2 automation = driver remains legally responsible
Marketing claims can influence liability exposure
3. McGee v. Tesla, Inc. (California, 2020)
Facts
A Tesla operating on Autopilot veered off a highway and struck a concrete barrier, killing the driver. The family sued Tesla, alleging that Autopilot encouraged unsafe reliance.
Legal Issues
Can a manufacturer be liable for foreseeable misuse of autonomous features?
Does repeated use of Autopilot create reasonable consumer reliance?
Liability Analysis
Plaintiffs claimed:
Design defect
Failure to adequately warn
Tesla claimed:
Driver ignored repeated warnings
System was not fully autonomous
Outcome
Court allowed product liability claims to proceed to trial
Case later resolved without a public verdict
Significance
Courts showed willingness to:
Treat AV software as a product
Apply traditional strict product liability principles
4. Banner v. Toyota Motor Corp. (Advanced Driver Assistance Systems)
Facts
A vehicle equipped with lane-keeping and adaptive cruise control failed to prevent a collision. Plaintiffs argued that the system gave a false sense of security.
Legal Issues
Can partial automation increase manufacturer liability?
Should driver assistance systems be judged by reasonable consumer expectations?
Liability Analysis
Plaintiffs relied on:
Consumer expectation test
Failure to warn
Defendant relied on:
Driver responsibility disclaimers
Outcome
Claims survived early dismissal
Settlement reached before trial
Significance
The case expanded liability theory by recognizing that:
Human–machine interaction design matters
Even non-autonomous systems can create liability
5. Waymo LLC v. Ramirez (Autonomous Ride-Hailing Collision)
Facts
A Waymo autonomous vehicle was involved in a low-speed collision with a motorcycle during a turn. The rider sued Waymo, alleging improper prediction of human behavior.
Legal Issues
Is an autonomous driving algorithm negligent if it mispredicts another road user?
Does liability shift fully to the operator/manufacturer in driverless vehicles?
Liability Analysis
Waymo vehicles operate without human drivers
Plaintiff argued:
Algorithmic negligence
Defective decision-making software
Outcome
Case settled
Waymo accepted responsibility for damages
Significance
This case reinforced that:
Full autonomy (Level 4–5) shifts liability away from occupants
Manufacturers may be treated like commercial operators
6. Estate of Monroe v. Navya Autonomous Solutions (Autonomous Shuttle Case)
Facts
An autonomous shuttle collided with a pedestrian on a university campus. The shuttle was operating at low speed with no human control.
Legal Issues
Who is responsible when there is no driver at all?
Can software developers be sued directly?
Liability Analysis
Claims included:
Strict product liability
Negligent system design
Focus on:
Sensor blind spots
Failure to detect pedestrians
Outcome
Case resolved through settlement
Manufacturer bore primary responsibility
Significance
This case demonstrated:
Traditional tort law can apply to AVs
Liability often rests with manufacturers and system designers
Key Legal Principles Emerging from These Cases
1. Level of Autonomy Matters
Level 2: Driver usually liable
Level 4–5: Manufacturer/operator usually liable
2. Product Liability Applies to Software
Courts increasingly treat algorithms as “products”
3. Failure to Warn Is a Major Risk
Overstating autonomy can increase liability
4. Human–Machine Interaction Is Central
Confusing controls and alerts can create negligence

comments