Machine-Learning Explainers as Ancillary Patentable Inventions
I. Conceptual Foundation: What Are ML Explainers?
1. Machine Learning Systems
Machine learning (ML) systems use statistical models to learn patterns from data and make predictions or decisions. Many modern models (e.g., deep neural networks, ensemble methods) are often described as “black boxes” because their internal reasoning is opaque.
2. ML Explainers
ML explainers are systems or methods that:
Interpret model outputs
Attribute importance to features
Provide local or global interpretability
Generate counterfactual explanations
Provide saliency maps or decision pathways
Examples:
SHAP-based feature attribution
LIME local explanations
Counterfactual instance generation
Attention visualization tools
Confidence calibration modules
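To make the first items on this list concrete, the following is a minimal, self-contained sketch (Python with NumPy) of the perturbation-based local attribution idea popularized by LIME: sample points around the instance being explained, query the model, and fit a proximity-weighted linear surrogate whose coefficients serve as feature attributions. The function name, parameters, and toy model are illustrative assumptions, not any particular library's API.

import numpy as np

def local_attribution(model_predict, x, n_samples=500, scale=0.1, seed=0):
    # model_predict maps an (n, d) array of inputs to (n,) scores.
    # Returns one weight per feature; larger magnitude = more local influence.
    rng = np.random.default_rng(seed)
    X = x + scale * rng.standard_normal((n_samples, x.shape[0]))   # neighbourhood samples
    y = model_predict(X)
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))   # proximity weights
    A = np.hstack([X, np.ones((n_samples, 1))])                    # add intercept column
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return coef[:-1]                                               # drop the intercept

# Illustrative use with a toy linear "model" (hypothetical):
toy_model = lambda X: 3 * X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2]
print(local_attribution(toy_model, np.array([1.0, 2.0, 0.5])))

Claimed in isolation, a routine like this is exactly the kind of "mathematical post-processing" that the eligibility cases discussed below treat with suspicion; the later sketches show framings that tie the same idea to a technical system.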
3. Ancillary Patentable Inventions
An ancillary invention is not the core ML model itself but an additional technical system that:
Improves model reliability
Enhances transparency
Reduces computational load
Improves human-machine interaction
Enables regulatory compliance (e.g., explainability in medical/financial AI)
Thus, the key question is:
Can ML explainers be patented independently as technical inventions?
To answer this, we examine patent eligibility jurisprudence.
II. Patent Eligibility Framework (United States)
Under U.S. law (35 U.S.C. §101), patentable subject matter includes:
Processes
Machines
Manufactures
Compositions of matter
However, judicial exceptions exclude:
Abstract ideas
Laws of nature
Natural phenomena
Most AI/ML inventions are challenged as abstract ideas, particularly when framed as algorithms.
The governing test comes from:
Alice Corp. v. CLS Bank (2014)
Two-step framework:
Step 1: Is the claim directed to an abstract idea (or another judicial exception)?
Step 2: If so, does the claim contain an “inventive concept” that transforms it into patent-eligible subject matter?
We now examine case law that shapes how ML explainers may be treated.
III. Key Case Laws in Detail
1. Alice Corp. v. CLS Bank International (2014)
Facts
Alice claimed a computerized method for mitigating settlement risk using an intermediary.
Holding
The Supreme Court held that implementing an abstract idea (intermediated settlement) on a generic computer is not patentable.
Legal Principle
Merely automating a fundamental practice using a generic computer does not make it patentable.
The claim must add “significantly more” than the abstract idea itself.
Relevance to ML Explainers
If an ML explainer is claimed merely as:
“A method of explaining a prediction using a mathematical formula”
It may be considered an abstract mathematical method unless:
It improves computer functionality, or
It solves a specific technological problem.
Thus, ML explainers must be framed as technical improvements, not abstract mathematical post-processing.
2. Mayo Collaborative Services v. Prometheus Laboratories (2012)
Facts
The claims covered measuring metabolite levels in a patient's blood and adjusting thiopurine drug dosage based on those levels.
Holding
The claims were held invalid because they merely applied a law of nature using routine, conventional steps.
Legal Principle
Adding conventional steps to a law of nature does not make it patentable.
The claim must include an inventive concept beyond routine implementation.
Application to ML Explainers
If an explainer simply:
Applies known statistical techniques
Uses conventional computing
Does not alter system architecture
It risks invalidation under the Mayo reasoning.
However, if the explainer:
Modifies system training architecture
Reduces computational complexity in novel ways
Changes internal representation learning
Then it may pass the “inventive concept” threshold.
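As a purely illustrative sketch of that distinction, the PyTorch-style snippet below wires an explanation signal (input-gradient attributions) directly into the training objective, so the explainer shapes the model rather than post-processing its outputs. The specific penalty, architecture, and hyperparameters are assumptions for illustration only and make no claim about any actual patented system.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(x, y, lam=0.01):
    # One step in which the explanation signal shapes the model itself.
    x = x.clone().requires_grad_(True)
    logits = model(x).squeeze(-1)
    task_loss = loss_fn(logits, y)
    # Input-gradient attributions, computed inside the training graph.
    attributions, = torch.autograd.grad(logits.sum(), x, create_graph=True)
    # Penalise diffuse attributions so explanations stay sparse and stable.
    explanation_loss = attributions.abs().mean()
    loss = task_loss + lam * explanation_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative call on random data (hypothetical shapes):
print(training_step(torch.randn(16, 10), torch.randint(0, 2, (16,)).float()))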
3. Diamond v. Diehr (1981)
Facts
The invention used a mathematical formula (Arrhenius equation) in a rubber-curing process.
Holding
The claims were patent eligible because the formula was applied to improve a concrete industrial process (curing rubber).
Legal Principle
Mathematical formulas are not patentable alone.
But applying them in a technological process that improves industrial performance is patentable.
Relevance to ML Explainers
This is a crucial case.
If an ML explainer:
Improves system stability
Optimizes hardware utilization
Enhances autonomous decision safety
Controls industrial machinery with interpretable safeguards
Then it may be patentable as an improvement to technological processes.
Diehr supports patentability when:
The algorithm is integrated into a technical process.
4. Enfish, LLC v. Microsoft Corp. (2016)
Facts
Enfish's patent claimed a self-referential table structure for a computer database.
Holding
The Federal Circuit held the invention patentable because it improved computer functionality itself.
Legal Principle
An invention is patent-eligible if:
It improves the functioning of a computer,
Rather than merely using a computer as a tool.
Application to ML Explainers
If the explainer:
Improves memory architecture
Changes neural network training pipelines
Enhances internal representation efficiency
Reduces model instability
It can be argued that the invention improves computer functionality.
Thus, under Enfish:
Structural improvements to ML system architecture via explainability modules can be patentable.
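One hedged illustration of an "improves the computer itself" argument: caching intermediate activations during the forward pass so an explanation module can reuse them instead of re-running the network for every explanation request. The class and attribute names below are assumptions made for illustration.

import torch
import torch.nn as nn

class ActivationCachingNet(nn.Module):
    # Keeps intermediate activations from the forward pass so a downstream
    # explanation module can reuse them instead of re-running the network.
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1)])
        self.cache = []

    def forward(self, x):
        self.cache = [x]
        for layer in self.layers:
            x = layer(x)
            self.cache.append(x)
        return x

net = ActivationCachingNet()
out = net(torch.randn(4, 8))
# An explainer can now read net.cache directly, avoiding a second forward
# pass for every explanation request.
print(len(net.cache), out.shape)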
5. McRO, Inc. v. Bandai Namco Games (2016)
Facts
The claims covered automated lip synchronization and facial expression animation for computer-generated characters, driven by specific rules.
Holding
The claims were patent eligible because the specific rules produced a technological improvement in computer animation.
Legal Principle
Automation of a previously manual process can be patentable if:
The rules are specific and not generic,
They improve technological output.
Relevance to ML Explainers
If an explainer:
Automates interpretability in safety-critical systems
Replaces manual auditing of AI decisions
Applies specific transformation rules to internal model states
Then under McRO, it may be patentable.
This is especially strong for:
Autonomous vehicle safety explainers
Medical diagnostic validation modules
Financial risk interpretability engines
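A minimal sketch of what "specific, non-generic rules" might look like in software: explicit, auditable thresholds that map an explanation summary to a concrete safety action. All names and threshold values below are hypothetical.

from dataclasses import dataclass

@dataclass
class Explanation:
    top_feature: str           # most influential input signal
    attribution_share: float   # fraction of total attribution it carries
    confidence: float          # calibrated confidence of the prediction

def safety_action(expl: Explanation) -> str:
    # Specific, auditable rules; the thresholds are illustrative only.
    if expl.confidence < 0.6:
        return "defer_to_human"      # low confidence: hand the decision off
    if expl.top_feature == "sensor_glare" and expl.attribution_share > 0.5:
        return "discard_frame"       # decision dominated by a known artefact
    if expl.attribution_share < 0.2:
        return "log_for_audit"       # diffuse explanation: flag for review
    return "accept_prediction"

print(safety_action(Explanation("pedestrian_mask", 0.7, 0.92)))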
6. BASCOM Global Internet Services v. AT&T Mobility (2016)
Facts
Content filtering at a specific ISP network location.
Holding
Although content filtering itself was abstract, the ordered combination of claim elements supplied an inventive concept.
Legal Principle
An inventive concept may arise from:
A non-conventional and non-generic arrangement of known elements.
Application to ML Explainers
Even if:
Feature attribution is known,
Statistical modeling is known,
A novel system architecture that integrates:
Training module
Real-time explanation module
Confidence threshold engine
Hardware-optimized processing layer
May be patentable under BASCOM reasoning.
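The sketch below illustrates the "ordered combination" idea: individually conventional components (training, real-time explanation, confidence gating) composed into one specific pipeline. Class and method names are assumptions for illustration, not a description of any actual claimed system.

class TrainingModule:
    def predict(self, x):
        return 0.8                               # stub confidence score

class ExplanationModule:
    def explain(self, model, x):
        return {"feature_3": 0.6}                # stub attribution

class ConfidenceThresholdEngine:
    def __init__(self, threshold):
        self.threshold = threshold
    def admit(self, score):
        return score >= self.threshold

class ExplainableInferencePipeline:
    # Ordered combination: predict -> explain -> gate, operating as one system.
    def __init__(self):
        self.model = TrainingModule()
        self.explainer = ExplanationModule()
        self.gate = ConfidenceThresholdEngine(threshold=0.75)
    def run(self, x):
        score = self.model.predict(x)
        explanation = self.explainer.explain(self.model, x)
        if not self.gate.admit(score):
            return {"decision": "withheld", "explanation": explanation}
        return {"decision": "accepted", "score": score, "explanation": explanation}

print(ExplainableInferencePipeline().run([0.1, 0.2, 0.3, 0.9]))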
7. DDR Holdings v. Hotels.com (2014)
Facts
A web-page technology that retained website visitors when they clicked third-party merchant links, rather than redirecting them away from the host site.
Holding
Patent eligible because it solved a problem unique to computer networks.
Legal Principle
If the invention addresses a problem rooted in computer technology, it may be patentable.
Application to ML Explainers
Black-box opacity is a problem unique to advanced computing systems.
If an ML explainer:
Solves instability in deep networks,
Addresses adversarial vulnerability through explanation,
Enhances distributed AI accountability,
It may fall under DDR reasoning.
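As one hedged example of "addressing adversarial vulnerability through explanation," the sketch below scores how unstable an input's attributions are under small perturbations; unusually unstable explanations can be flagged for review. The function, toy attribution, and threshold are illustrative assumptions.

import numpy as np

def attribution_instability(attribute, x, eps=1e-2, trials=20, seed=0):
    # attribute: callable returning a feature-attribution vector for an input.
    # Large scores indicate explanations that shift sharply under tiny input
    # perturbations, one possible signal of an adversarially crafted input.
    rng = np.random.default_rng(seed)
    base = attribute(x)
    drifts = []
    for _ in range(trials):
        noisy = x + eps * rng.standard_normal(x.shape)
        drifts.append(np.linalg.norm(attribute(noisy) - base))
    return float(np.mean(drifts))

# Illustrative use with a toy attribution function (hypothetical):
toy_attribute = lambda x: np.tanh(x)
score = attribution_instability(toy_attribute, np.array([0.2, -1.0, 0.5]))
print("flag for review" if score > 0.05 else "looks stable")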
8. Thales Visionix Inc. v. United States (2017)
Facts
A system for tracking the motion of an object relative to a moving platform, using inertial sensors mounted in an unconventional configuration.
Holding
Patent eligible because the unconventional sensor arrangement improved the accuracy of motion tracking.
Legal Principle
Even claims that rely on mathematical calculations are patent eligible if the mathematics is applied to achieve a specific technological improvement.
Application to ML Explainers
If explanation methods:
Improve sensor fusion accuracy,
Enhance robotic control precision,
Reduce false positives in surveillance AI,
They may qualify under Thales.
9. SAP America v. InvestPic (2018)
Facts
The claims covered improved statistical analyses of investment data.
Holding
Ineligible because it was an abstract mathematical analysis.
Warning for ML Explainers
If claimed as:
“Applying a statistical method to explain financial predictions”
Without an accompanying technological improvement, it may be held ineligible under SAP.
Thus, purely analytical explainers risk being abstract.
IV. When Are ML Explainers Patentable?
An ML explainer is more likely patentable if it:
1. Improves Computer Functionality
(Enfish standard)
2. Improves a Technological Process
(Diehr, Thales)
3. Uses a Non-Conventional Architecture
(BASCOM)
4. Solves a Technology-Specific Problem
(DDR Holdings)
5. Uses Specific Rule-Based Transformations
(McRO)
V. When Are They Not Patentable?
They are likely rejected if:
Merely mathematical post-processing
Generic “apply explainability algorithm”
No structural change to computing system
Only conceptual or statistical insight
Conventional hardware implementation
(Under Alice, Mayo, SAP)
VI. Comparative Perspective (Brief)
European Patent Office (EPO)
Under Article 52 EPC:
Mathematical methods are excluded “as such”
But they are patentable if they contribute to a technical effect
EPO accepts AI inventions when:
They improve computer efficiency
Control industrial processes
Enhance signal processing
Thus, ML explainers that:
Reduce computational burden
Improve data compression
Improve hardware resource allocation
May be patentable in Europe.
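For instance, a minimal sketch of the "reduced computational burden" effect: block-averaging a saliency map before storage or transmission, cutting its size by roughly the square of the block factor. The function and figures below are illustrative only.

import numpy as np

def compress_saliency(saliency, block=4):
    # Block-average an H x W saliency map down to (H/block) x (W/block),
    # shrinking storage and transmission cost by roughly block ** 2.
    h, w = saliency.shape
    h, w = h - h % block, w - w % block           # trim to a multiple of block
    trimmed = saliency[:h, :w]
    return trimmed.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

full_map = np.random.rand(224, 224).astype(np.float32)
small_map = compress_saliency(full_map)
print(full_map.nbytes, "->", small_map.nbytes)    # 200704 -> 12544 bytes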
VII. Doctrinal Synthesis
ML explainers can qualify as ancillary patentable inventions when they:
Are not claimed as abstract mathematical formulas.
Are integrated into a technical system.
Improve computing performance or safety.
Modify system architecture in non-conventional ways.
Solve problems specific to AI systems.
They are not patentable when:
Claimed as pure data analysis.
Framed generically without technical contribution.
VIII. Conclusion
Machine-learning explainers occupy a legally nuanced space.
Under case law:
Alice and Mayo impose restrictions.
Diehr, Enfish, McRO, BASCOM, DDR, and Thales provide pathways to patentability.
SAP warns against purely analytical claims.
