Protection Of Algorithmically Generated Moral Codes As Intellectual Frameworks
1. Conceptual Foundation
“Algorithmically generated moral codes” refers to ethical or normative decision-making frameworks produced by computational systems—such as AI models, machine learning classifiers, or rule-based algorithms. These systems may generate:
- Content moderation rules (what is “harmful” or “acceptable”)
- Automated sentencing or bail risk scores
- Ethical decision trees in autonomous vehicles
- Recommendation systems embedding value judgments
- AI-generated “guidelines” for behavior or compliance
The legal question is:
Can such algorithmically generated moral frameworks be protected as intellectual property (IP) or treated as protectable intellectual works?
They sit at the intersection of:
- Copyright law (original expression)
- Trade secrets (confidential algorithms)
- Patent law (technical processes)
- Public policy (ethical governance and accountability)
Courts globally have not directly recognized “moral codes” generated by algorithms as a standalone IP category, but relevant principles emerge from software, AI authorship, and computational output cases.
2. Key Case Law (Detailed Analysis)
Case 1: Feist Publications v. Rural Telephone Service (1991, USA)
Issue
Whether a simple compilation of data (telephone directory) can be protected by copyright.
Principle Established
- Copyright requires “minimal creativity”
- Mere “sweat of the brow” (effort) is not enough
- Facts and systems are not protected—only original expression is
Relevance to Algorithmic Moral Codes
Algorithmically generated moral frameworks often consist of:
- Rules derived from data
- Statistical weighting of ethical outcomes
- Automated classifications (e.g., “toxic content”)
👉 Under Feist, such outputs are likely not protected unless there is human creative input in selecting or structuring them.
Impact
If an AI generates a moral decision matrix automatically:
- It is likely treated as functional data/system, not copyrightable expression.
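To see why Feist matters here, consider a minimal sketch of a "moral decision matrix" derived mechanically from labeled moderation data. All names and data below are hypothetical; the point is that the rule table is a pure majority count over facts, with no creative selection or arrangement a court could treat as expression.

```python
# Hypothetical sketch: a "moral decision matrix" derived automatically from
# labeled data, with no creative human selection or arrangement.
from collections import Counter

# Illustrative labeled examples: (content category, moderator verdict)
labeled_data = [
    ("insult", "block"), ("insult", "block"), ("insult", "allow"),
    ("spam", "block"), ("spam", "block"),
    ("opinion", "allow"), ("opinion", "allow"),
]

def derive_decision_matrix(examples):
    """Map each category to its majority verdict -- pure statistics, not authorship."""
    votes = {}
    for category, verdict in examples:
        votes.setdefault(category, Counter())[verdict] += 1
    return {cat: counts.most_common(1)[0][0] for cat, counts in votes.items()}

matrix = derive_decision_matrix(labeled_data)
print(matrix)  # {'insult': 'block', 'spam': 'block', 'opinion': 'allow'}
```

Under Feist's logic, everything in this sketch is dictated by the data and the counting function, so the resulting matrix would likely be classed as an unprotectable functional system rather than original expression.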
Case 2: Computer Associates v. Altai (1992, USA)
Issue
Whether non-literal elements of software (structure, sequence, organization) can be copyrighted.
Principle
Introduced the Abstraction-Filtration-Comparison Test:
- Abstract the software into levels of structure
- Filter out unprotectable elements (ideas, functionality)
- Compare the remaining expressive elements
Relevance
Algorithmic moral codes are typically:
- Structured decision trees
- Logic-based outputs
- Embedded policy rules
👉 Courts would likely:
- Strip away functional ethical rules (“if harm > threshold → block content”)
- Protect only expressive human-designed structure, if any
Key Insight
Purely functional ethical logic = not protectable expression
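The kind of rule the filtration step would strip out can be sketched directly. This is a hypothetical illustration of the "if harm > threshold → block content" rule mentioned above; the threshold value and function names are invented for the example.

```python
# Hypothetical sketch of purely functional ethical logic -- the sort of
# element the Altai filtration step removes as idea/function, not expression.

HARM_THRESHOLD = 0.7  # illustrative value, not drawn from any real system

def moderate(harm_score: float) -> str:
    """'If harm > threshold -> block content': bare functional logic."""
    return "block" if harm_score > HARM_THRESHOLD else "allow"

print(moderate(0.9))  # "block"
print(moderate(0.2))  # "allow"
```

There is only one sensible way to express this rule once the policy is fixed, which is precisely why a court applying Altai would filter it out before comparing works.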
Case 3: SAS Institute Inc. v. World Programming Ltd. (2012, CJEU, on reference from the UK)
Issue
Whether the functionality of a software system, its programming language, and its data file formats can be copyrighted.
Holding
- Copyright does NOT protect:
  - Programming languages
  - Data file formats
  - Functionality of software
- Only source code expression is protected
Relevance
If an AI system generates moral codes (e.g., moderation policies or scoring rules):
- The underlying ethical functionality is not protected
- Only literal code (if original) may be protected
Impact
This strongly limits IP protection over:
- Algorithmic ethical systems
- Automated governance rules
Case 4: Naruto v. Slater (Monkey Selfie Case, 2018, USA)
Issue
Whether a non-human can own copyright.
Holding
- Only humans can be authors under US copyright law
- Animal-created works are not protected
Relevance to AI Moral Codes
If AI generates moral frameworks autonomously:
- No “author” in legal sense
- Therefore, no copyright ownership unless:
  - A human can be identified as the creative controller
Key Principle
Non-human generated content lacks copyright ownership.
Impact
The outputs of fully autonomous moral algorithms fall into the public domain by default.
Case 5: Thaler v. Perlmutter (2023, USA)
Issue
Whether AI-generated artwork can be copyrighted without human authorship.
Holding
- The Copyright Office and the courts confirmed:
  - Human authorship is mandatory
  - AI alone cannot be an author
Relevance
Algorithmically generated moral codes (e.g., AI ethics models producing rules):
- Not protectable unless:
  - A human meaningfully designed or selected the outputs
Important Distinction
- AI-assisted moral code → possibly protected
- AI-autonomous moral code → not protected
Case 6: Nova Productions v. Mazooma Games (2007, UK)
Issue
Whether frames generated by a computer program in a game could be copyrighted.
Holding
- The computer is merely an “extension of human skill”
- Copyright belongs to the human creator of the system
- Individual outputs generated by software are not separately authored works
Relevance
If an AI generates evolving ethical rules:
- The system owner may claim ownership only if:
  - They designed the system
  - Outputs are foreseeable results of human input
Key Principle
AI output is legally treated as a "mechanical extension of human creativity."
Case 7: Google LLC v. Oracle America (2021, USA Supreme Court)
Issue
Whether Google's copying of the Java API declaring code constituted fair use.
Holding
- Google’s use of Java API code was fair use
- Functional interfaces are less protectable than expressive code
Relevance
Algorithmic moral codes often function like APIs:
- “If X, then ethical action Y”
- Structured decision interfaces
Impact
- Ethical rule systems functioning as interfaces are likely weakly protected or unprotected
- Protection is weakest where the rules serve functional interoperability
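The API analogy can be made concrete with a small sketch. The interface and class names below are hypothetical; the point is that a "if situation X, return ethical action Y" contract can be reimplemented independently behind the same declared structure, which is the interoperability concern at the heart of Google v. Oracle.

```python
# Hypothetical sketch: an ethical rule system functioning like an API.
# The interface declares structure; implementations supply independent code.
from abc import ABC, abstractmethod

class EthicsInterface(ABC):
    """Declaring structure only -- analogous to API declaring code."""

    @abstractmethod
    def decide(self, situation: str) -> str:
        """Given situation X, return ethical action Y."""

class VendorPolicy(EthicsInterface):
    """One vendor's implementation behind the shared interface."""

    def decide(self, situation: str) -> str:
        return "block" if situation == "harmful" else "allow"

class RivalPolicy(EthicsInterface):
    """A rival reimplementation: same interface, independently written code."""

    def decide(self, situation: str) -> str:
        return "allow" if situation != "harmful" else "block"

# Interoperability: any caller written against the interface works with both.
for policy in (VendorPolicy(), RivalPolicy()):
    print(policy.decide("harmful"))  # prints "block" for each implementation
```

Because callers depend only on the declared interface, a court following Google v. Oracle would likely treat that declared structure as weakly protected relative to the implementing code behind it.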
3. Legal Synthesis: Protection of Algorithmic Moral Codes
Based on these cases, the legal position can be summarized:
A. Copyright Protection
Only possible if:
- Human authorship exists
- There is creative selection or arrangement
- Output is not purely functional or data-driven
Not protected if:
- Fully AI-generated
- Pure ethical logic systems
- Automated decision trees without human creativity
B. Patent Protection (Possible but limited)
Algorithmic moral codes may be patentable only if:
- They produce a technical effect
- Solve a technical problem (not abstract ethics)
- Are implemented in a novel system (e.g., autonomous vehicle safety system)
However:
- Abstract ethical reasoning is NOT patentable in most jurisdictions
C. Trade Secret Protection (Most realistic protection)
Companies often protect:
- Content moderation algorithms
- Ethical ranking systems
- Bias adjustment models
Requirements:
- Secrecy maintained
- Economic value derived from secrecy
- Reasonable protection measures
👉 This is currently the strongest protection route for algorithmic moral frameworks.
D. Public Policy Limitations
Courts are cautious because:
- Moral codes affect rights and liberties
- Transparency is required in governance systems
- AI ethics decisions may require accountability
Therefore:
- Over-protection is discouraged
- Open scrutiny often preferred in regulatory systems
4. Final Legal Conclusion
Algorithmically generated moral codes currently:
❌ Not independently protected as a distinct IP category
⚖️ Partially protected only through:
- Human authorship (copyright)
- Functional invention (patents)
- Confidentiality (trade secrets)
📌 Core Judicial Trend:
Courts consistently reject protection of:
- Pure functionality
- Machine-generated outputs
- Abstract systems of rules without creative human authorship