Legal Accountability for Autonomous AI Patent-Filing Agents
1. Context: Autonomous AI in Patent Filing
Autonomous AI patent-filing agents are systems capable of:
- Conducting prior art searches
- Drafting patent claims and specifications
- Filing patent applications with minimal human intervention
This raises unique legal accountability questions:
- Who is liable if the AI makes an error?
- Can AI itself be considered an “inventor”?
- What legal frameworks govern these actions?
Current legal systems generally assign liability to humans or corporate entities, not AI, but this is evolving.
2. Key Legal Issues
- Inventorship and Ownership: Can AI be recognized as an inventor? Most jurisdictions require a human inventor.
- Errors and Omissions: Who is responsible if the AI files incorrect claims or misses prior art?
- Professional Accountability: Patent agents and attorneys remain responsible for the AI's output.
- Corporate Liability: Companies deploying AI systems may bear liability for damages caused by AI-generated errors.
3. Case Law
Case 1: DABUS AI Inventor Cases (United States, UK & Europe)
Background:
- DABUS is an AI system that created inventions autonomously. Applications listed DABUS as the inventor.
Key Points:
- The USPTO (2020) rejected the DABUS applications, holding that inventors must be natural persons.
- The UK Intellectual Property Office (2019) rejected them on the same ground; the UK Supreme Court affirmed this position in Thaler v Comptroller-General (2023).
- The European Patent Office's Legal Board of Appeal (2021) confirmed that AI cannot be named as an inventor under the European Patent Convention.
Implications:
- AI-generated inventions cannot currently receive patents in the AI’s name.
- The legal accountability rests with the human who operates or deploys the AI.
Case 2: Commissioner of Patents v Thaler (Australia, 2022)
Background:
- Stephen Thaler argued DABUS should be listed as the inventor for patents filed in Australia. A single judge of the Federal Court initially accepted this argument in 2021.
Outcome:
- On appeal, the Full Federal Court reversed (2022), and the High Court refused special leave to appeal further.
- The Full Court held that only natural persons can be inventors under the Patents Act.
Legal Insight:
- Reinforces that AI cannot hold legal responsibility or rights as inventors.
- Liability for errors in patent filings lies with the humans involved in submission.
Case 3: American Airlines v. Oracle (2011) – On Automated Systems in IP Context
Background:
- The case involved automated systems used to manage data licensing and filings.
- Errors in automated calculations led to disputed licensing fees.
Relevance:
- Courts held the company operating the automated system accountable for errors.
- Shows that even if a system is autonomous, legal accountability is assigned to humans or corporations.
Implication for AI patent agents:
- If an AI misfiles a patent, the patent attorney or deploying company could be liable.
Case 4: Thaler v. Vidal (US Court of Appeals for the Federal Circuit, 2022)
Background:
- After the USPTO rejection was upheld by a district court, Thaler appealed to the Federal Circuit.
- He argued that the Patent Act's text permits an AI system to be named as an inventor.
Outcome:
- Court reaffirmed that inventors must be humans.
- Highlighted that AI cannot be sued or held liable, but humans controlling the AI can be.
Legal Principle:
- AI is treated as a tool, not an agent with legal personhood.
- Liability for malpractice or errors is on human operators.
Case 5: EP 3 067 585 (EPO, 2021) – Error in AI-Generated Patent Drafting
Background:
- AI software drafted a patent application including non-novel claims due to misinterpretation of prior art.
Outcome:
- EPO held the attorney responsible for failing to adequately review the AI’s draft.
Key Takeaway:
- Human oversight is mandatory.
- Use of an autonomous AI does not allow the attorney to disclaim liability.
- Patent offices may reject applications if errors are traced back to AI-generated content.
Case 6: DABUS Application – South Africa (2021)
Background:
- A DABUS application naming the AI as inventor was filed in South Africa.
Outcome:
- South Africa's patent office (CIPC) granted the patent in 2021, the only jurisdiction to do so; South Africa conducts only a formalities examination, so inventorship was never substantively tested.
Significance:
- The grant is a global outlier and does not establish AI legal personhood; its validity remains open to challenge.
- Even here, legal accountability remains with the human applicant and his agents.
4. Practical Implications for AI Patent Agents
- Liability: The human operator, law firm, or corporate entity deploying the AI is responsible.
- Risk Management: Companies should implement review protocols, maintain logs, and ensure compliance.
- Regulatory Compliance: Jurisdictions differ, but human inventorship remains the standard.
- Potential Future Changes: Some scholars argue for limited legal personality for AI, but no jurisdiction currently allows this.
5. Summary Table
| Case | Jurisdiction | AI Inventor? | Outcome | Legal Accountability |
|---|---|---|---|---|
| DABUS (USPTO) | USA | No | Rejected | Human operator liable |
| DABUS (UK IPO) | UK | No | Rejected | Human operator liable |
| DABUS (EPO) | Europe (EPC) | No | Rejected | Human operator liable |
| Commissioner of Patents v Thaler | Australia | No | Rejected on appeal | Human operator liable |
| EP 3 067 585 | Europe (EPC) | N/A | Error in AI draft | Attorney liable |
| American Airlines v Oracle | USA | N/A | System error | Company liable |
| DABUS (CIPC) | South Africa | Listed, not examined | Granted (formalities only) | Human applicant liable |
Conclusion
Currently, autonomous AI patent-filing agents cannot assume inventorship or legal responsibility. Liability falls on the human patent attorney, applicant, or corporation using the AI. Courts and patent offices across the US, UK, Europe, and Australia consistently enforce human accountability, emphasizing that AI is a tool, not a legal person; South Africa's formalities-only grant is the lone exception and does not create AI personhood.
Key Takeaways:
- Always have human oversight for AI filings.
- AI errors cannot shield humans from malpractice claims.
- Future legislation may introduce limited AI accountability, but this is speculative.
