Research on AI-Assisted Phishing Campaigns Targeting SMEs and Multinational Corporations
1. U.S. v. Lori Drew (2008) – Cyber Fraud and Online Deception
Facts:
Lori Drew created a fake MySpace profile to harass a teen, which led to the teen’s suicide. While not a financial phishing case, the court examined online deception and computer fraud principles.
The case involved using online tools to deliberately mislead someone, analogous to how phishing uses digital communications to defraud victims.
Holding:
A jury convicted Drew of misdemeanor violations of the Computer Fraud and Abuse Act (CFAA), but the district court later set aside the conviction, holding that a mere violation of a website's terms of service could not support CFAA liability.
Analysis for AI-assisted Phishing:
Courts focus on intent to defraud and use of digital platforms to commit deception.
If AI is used to craft phishing messages, liability lies with the human operators because AI cannot form intent. The principles of knowing deception and use of computers to commit fraud apply directly.
2. U.S. v. Coscia (2015) – Market Manipulation / Automated Tools
Facts:
Coscia used high-frequency trading algorithms to manipulate market prices via rapid order placement and cancellations.
This is analogous to AI-assisted phishing in that an automated tool executed actions intended to defraud or manipulate outcomes.
Holding:
Convicted of commodities fraud and spoofing, the first criminal conviction under the Dodd-Frank Act's anti-spoofing provision.
The court held that automated tools do not shield the user from criminal liability.
Analysis for AI-assisted Phishing:
The key principle is human intent plus automated execution. Even if AI sends phishing emails, the operator is criminally liable because they intended the fraudulent outcome.
This case also establishes precedent for prosecuting crimes executed through automated systems.
3. Experi-Metal v. Comerica Bank (2011) – Phishing and Commercial Reasonableness
Facts:
Experi-Metal fell victim to a phishing attack in which fraudsters initiated wire transfers totaling roughly $1.9 million from its account without authorization; most was recovered, but a substantial loss remained.
The bank refused to reimburse the loss, and the case examined whether the bank had handled the payment orders in accordance with commercially reasonable standards.
Holding:
The court held Comerica liable for the loss, finding that it had failed to act in good faith under UCC Article 4A because it did not observe reasonable commercial standards of fair dealing in processing the wire transfers.
Analysis for AI-assisted Phishing:
Establishes a duty of care standard for institutions against phishing.
Companies and banks may need to adapt security practices to anticipate AI-assisted phishing, such as enhanced email verification, AI detection systems, and employee awareness.
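The enhanced email verification mentioned above can be made concrete. The sketch below is illustrative only (not drawn from the case): it screens an inbound message by checking the RFC 8601 `Authentication-Results` header for SPF/DKIM/DMARC passes and scanning the body for common phishing phrases. The keyword list and header layout are hypothetical examples; a production system would use a proper email security gateway.

```python
# Minimal phishing screen: flag messages that fail sender authentication
# or contain common social-engineering phrases. Illustrative only; the
# phrase list and policy here are hypothetical examples.
from email import message_from_string

SUSPICIOUS_PHRASES = (
    "verify your account",
    "urgent wire transfer",
    "password expired",
)

def screen_message(raw: str) -> list[str]:
    """Return a list of warnings for a raw RFC 822 message string."""
    msg = message_from_string(raw)
    warnings = []

    # RFC 8601 Authentication-Results header, e.g. "spf=pass; dkim=fail"
    auth = (msg.get("Authentication-Results") or "").lower()
    for check in ("spf", "dkim", "dmarc"):
        if f"{check}=pass" not in auth:
            warnings.append(f"{check} did not pass")

    # Scan a plain-text body for known social-engineering phrases
    body = msg.get_payload()
    if isinstance(body, str):
        lowered = body.lower()
        for phrase in SUSPICIOUS_PHRASES:
            if phrase in lowered:
                warnings.append(f"suspicious phrase: {phrase!r}")
    return warnings
```

A screen like this is only one layer; the Experi-Metal holding suggests institutions should pair automated checks with out-of-band verification of high-value transfer requests.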
4. U.S. v. Shkreli (2017) – Wire Fraud and Misrepresentation
Facts:
Martin Shkreli was charged with using false statements and fraudulent misrepresentations to deceive investors in his hedge funds and to manipulate shares of Retrophin.
Although not AI-assisted, it illustrates how misrepresentation in digital communications can lead to liability.
Holding:
Convicted of securities fraud and conspiracy to commit securities fraud.
Analysis for AI-assisted Phishing:
If AI generates fraudulent emails or documents, misrepresentation principles still apply.
Liability attaches to the human who instructed or benefited from the AI, not the AI itself.
Demonstrates that even sophisticated campaigns using technology do not shield perpetrators from criminal prosecution.
5. U.S. v. Raj Rajaratnam (2011) – Insider Trading Using Technology
Facts:
Rajaratnam used electronic communications, emails, and technology to receive insider information and trade stocks illegally.
AI could theoretically automate the analysis of such communications to detect trading opportunities.
Holding:
Convicted of securities fraud and conspiracy; sentenced to 11 years in prison.
Analysis for AI-assisted Phishing:
Demonstrates that courts hold humans responsible for crimes assisted by technology.
In AI phishing, if AI analyzes data and generates deceptive messages for financial gain, criminal liability would fall on the human operator.
Key Takeaways Across Cases
| Principle | Case Illustration | Relevance to AI-assisted Phishing |
|---|---|---|
| Human intent is key | Lori Drew, Coscia, Rajaratnam | AI cannot form intent; humans directing AI are liable |
| Automated tools do not absolve liability | Coscia | AI sending phishing emails is treated like any other automated instrument of fraud |
| Duty of care for institutions | Experi-Metal v. Comerica | Companies must implement reasonable defenses against AI-assisted phishing |
| Misrepresentation and fraud principles apply | Shkreli | AI-generated content used to deceive is treated as human deception |
| Technology as facilitation, not excuse | Rajaratnam | AI assistance enhances scope but does not mitigate criminal responsibility |
These five cases collectively provide a strong legal framework for analyzing criminal liability in AI-assisted phishing:
Liability rests on human intent, not the AI itself.
Automated tools or AI do not shield operators from prosecution.
Organizations must maintain reasonable security procedures to mitigate risk and liability.
