Patent Appeal Trends Concerning Neural-Network Explainability

1. Ex Parte Hannun — PTAB (USPTO)

Jurisdiction: United States — Patent Trial and Appeal Board
Focus: Neural-network-based speech recognition under 35 U.S.C. § 101 (patent eligibility)

Facts

The invention claimed a method for converting speech to text using a trained neural network.

Steps included: converting audio to spectrogram frames, processing through a neural network, and decoding predicted text outputs.

The examiner rejected the claims as abstract, arguing they recited only mathematical operations and mental processes.
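The three recited steps can be sketched in miniature. This is an illustrative toy only: the function names, array shapes, the one-layer "network," and the greedy decoder are all assumptions for demonstration, not the method actually claimed in Hannun.

```python
import numpy as np

def audio_to_spectrogram(audio, frame_len=256, hop=128):
    """Step 1: slice audio into overlapping frames, take magnitude FFT."""
    frames = [audio[i:i + frame_len]
              for i in range(0, len(audio) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))  # (n_frames, bins)

def neural_network(spectrogram, weights):
    """Step 2: stand-in for the trained network (one linear layer + softmax)."""
    logits = spectrogram @ weights                        # (n_frames, vocab)
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)           # per-frame probs

def greedy_decode(probs, vocab="_abc"):
    """Step 3: emit the most likely symbol per frame ('_' = blank)."""
    return "".join(vocab[i] for i in probs.argmax(axis=1) if vocab[i] != "_")

rng = np.random.default_rng(0)
audio = rng.standard_normal(2048)                 # toy waveform
spec = audio_to_spectrogram(audio)
w = rng.standard_normal((spec.shape[1], 4)) * 0.01  # untrained toy weights
text = greedy_decode(neural_network(spec, w))
```

Even at this scale, the sketch shows why the PTAB found the steps non-mental: the frame-by-frame FFT and matrix arithmetic are not operations a person performs in their head.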

PTAB Decision

The PTAB reversed the rejection.

Key reasoning:

The neural network steps could not be performed mentally or manually.

The claims applied the underlying mathematics to achieve a specific, practical improvement in automated speech recognition.

Outcome: Claims found patent-eligible because they demonstrated a technical improvement, not just an abstract idea.

Significance

Shows that describing how the neural network works—not just that it exists—helps overcome § 101 rejections.

Important precedent for claims involving AI systems with tangible technical effects.

2. Comptroller-General of Patents, Designs and Trade Marks v Emotional Perception AI Ltd — UK Court of Appeal

Jurisdiction: United Kingdom — Court of Appeal
Focus: Patentability of an artificial neural network (ANN) for personalized media recommendations

Facts

The company claimed an artificial neural network that recommended media files (such as music) by analyzing their similarity to a user's preferences.

The High Court initially found the invention patentable, reasoning that it provided a technical contribution.

Court of Appeal Decision

The Court of Appeal reversed, holding that the invention was excluded as a "program for a computer ... as such" under section 1(2) of the Patents Act 1977.

Reasoning:

The claimed improvement lay in data processing directed at a subjective output (better recommendations), not in the computer itself.

No technical effect beyond normal computation was present.

Significance

In the UK (and under the similar approach at the EPO), merely improving outputs or explainability is insufficient; the invention must produce a technical effect on the computer itself or on a process outside it.

Highlights the stricter approach to neural network and AI patentability outside the U.S.

3. Ex Parte Desjardins — PTAB / USPTO Appeals Review Panel

Jurisdiction: United States — PTAB / USPTO
Focus: Machine-learning training methods

Facts

Claims described a method for training machine-learning models that mitigates catastrophic forgetting when a model learns multiple tasks sequentially.

The examiner rejected the claims as abstract under § 101.

Decision

The USPTO vacated the rejection.

Key reasoning:

The claimed methods provided a concrete improvement in machine learning operations.

Claims described how the algorithm improves model performance, not just abstract math.

Significance

Provided persuasive authority for examining AI training claims on the basis of concrete technical improvements to computing.

Influenced subsequent PTAB decisions like Ex Parte Carmody, which applied the same reasoning.

4. Enfish, LLC v. Microsoft Corp. — Federal Circuit (2016)

Jurisdiction: United States — Federal Circuit
Focus: Software eligibility under 35 U.S.C. § 101

Facts

Enfish claimed a self-referential database table that improved database performance.

Microsoft challenged eligibility as abstract.

Decision

The Federal Circuit held the claims patent-eligible at step one of the Alice inquiry.

Reasoning:

The claims were directed to a specific improvement in the operation of the computer itself (faster and more efficient data storage and retrieval), not to an abstract idea.

Outcome: Software and AI-related inventions can be patent-eligible if they improve computing technology itself.

Significance

Provides a framework for arguing technical improvements in AI and neural-network claims.

Frequently cited in PTAB AI decisions to support eligibility.

5. Ex Parte Carmody — PTAB

Jurisdiction: United States — PTAB
Focus: Neural-network patent eligibility

Facts

Claims involved neural-network architectures for image recognition.

Examiner rejected claims as abstract.

Decision

PTAB reversed the rejection.

Key reasoning:

Claims included specific architectural improvements and algorithmic modifications.

Demonstrated practical improvements in neural network operation.

Significance

Reinforces the idea that describing architecture and operational improvements helps neural-network patents survive appeals.

Builds on Desjardins framework.

6. General PTAB AI Appeal Patterns

Analysis of ~50 AI-related appeals shows:

Only about 20% of AI/ML claims initially rejected for abstractness were ultimately deemed eligible.

Neural-network claims survived when they showed technical improvements (e.g., faster training, less memory use, improved accuracy).

Claims focused on outputs or explainability alone without tying them to technical improvement were generally rejected.

Trend

In the U.S., the PTAB increasingly evaluates AI patents on technical effect and system-level improvement, not on the mere use of AI or on neural-network explainability alone.

Key Takeaways for Neural-Network Explainability Patents

Demonstrate technical effect: Explainability itself is not enough unless it improves how the system operates (e.g., debugging, model optimization, efficiency).

Detail specific architectural or algorithmic steps: Claims must focus on how the model works to achieve the improvement.

Emphasize improvements over prior methods: Highlight measurable benefits (speed, accuracy, reduced resources).

Draft claims carefully for the jurisdiction: the U.S. PTAB looks for a practical application or technical improvement under § 101, while the UK and EPO require a technical contribution beyond the excluded subject matter.

Explainability as a feature: Tie explainability to system-level improvements, such as explanations that feed back into debugging, retraining, or automated performance tuning.
