Research on Prosecution Strategies for AI-Enabled Cyber Harassment
Key Prosecution Strategy Themes
Before we turn to the cases, here are major strategic themes that prosecutors are applying in AI‑enabled cyber‑harassment cases:
Attribution and digital forensic evidence – Identifying the harasser or bot, linking them to automated tools or generative‑AI systems, logging metadata (IP addresses, device identifiers, chatbot logs, model usage).
Applying traditional harassment, stalking, and defamation statutes while adapting them to the AI context – e.g., non‑consensual intimate images generated by AI, deepfake impersonation, automated bots harassing victims.
Liability for platform or intermediary failures – Holding platforms/intermediaries responsible for failing to remove harassment or for hosting AI‑generated harassment content.
Urgent injunctive relief and takedown orders – Because AI‑enabled harassment can scale rapidly, courts and prosecutors use interim orders to block access, require disclosure of uploader info, preserve evidence.
Hybrid charges combining cybercrime statutes + harassment/fraud/defamation – For example, combining computer misuse/unauthorised access or hacking with harassment when bots or automation are used.
Recognition of new modalities of harm – AI‑generated deepfakes, morphing of images, fake‑bot chat harassment, impersonation, doxxing via automation. Prosecutors are beginning to treat these as serious forms of cyber‐harassment requiring tailored investigation.
Cross‑border cooperation and jurisdictional issues – Because AI‑tools and platforms often operate across jurisdictions, international cooperation becomes essential.
Procedural readiness and training – Investigators and prosecutors need training in generative‑AI evidence, deepfake detection, chain‑of‐custody of bot logs, etc.
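Several of these themes turn on disciplined evidence handling. As a minimal illustration of the chain‑of‑custody idea, the Python sketch below hashes an exported log file (for example, a chatbot prompt/response log) and records who collected it and when, so any later alteration of the exhibit is detectable. The file name, fields, and workflow are hypothetical illustrations, not drawn from any real case or platform.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_custody_record(path: str, collected_by: str) -> dict:
    """Hash an exported log file and record who collected it and when.

    If the same hash is reproduced at trial, that supports the claim
    that the exhibit is unchanged since collection (chain of custody).
    """
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large exhibits do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "exhibit": path,
        "sha256": sha256.hexdigest(),
        "collected_by": collected_by,
        "collected_at_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Illustrative only: 'chatbot_log.json' stands in for a hypothetical
    # exported chatbot-log exhibit.
    with open("chatbot_log.json", "w") as f:
        json.dump({"prompts": ["..."], "responses": ["..."]}, f)
    record = make_custody_record("chatbot_log.json", "Investigator A")
    print(json.dumps(record, indent=2))
```

The same pattern extends to device logs, model‑usage records, and image files: hash at seizure, hash again at analysis, and document every transfer in between.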
With that framework, here are the case examples showing how these strategies are being executed.
Case Examples
Case 1: State of Tamil Nadu v. Suhas Katti (India, 2004)
Facts: The accused sent obscene and defamatory messages in a Yahoo messenger group, impersonating the female victim, and set up fake email accounts to send messages in her name. The victim’s reputation suffered, and many people phoned her as if she were soliciting sex work.
Legal Issues / Strategy: Although this case pre‑dated modern AI tools, its key elements remain relevant: impersonation, fraudulent accounts, online harassment. The prosecution traced digital evidence (IP addresses, messaging logs) to the accused. The court accepted electronic evidence under Section 65B of the Indian Evidence Act.
Outcome: Conviction under IPC Sections 469 and 509 (forgery to harm reputation; insulting the modesty of a woman) and under Section 67 of the IT Act for transmitting an 'obscene' electronic message.
Significance: Sets an early precedent for how online harassment and impersonation can be prosecuted and how digital forensics can link online acts to a perpetrator—a strategy extended into AI‑enabled cases.
Case 2: People v. Marquan M. (New York, 2014)
Facts: A 16‑year‑old created a Facebook page under a pseudonym and posted photos of classmates with malicious commentary. The law under which he was charged criminalised cyberbullying of children.
Legal Issues / Strategy: This case highlights free speech constraints, but also how electronic harassment (via social media) can be criminalised. It helps inform strategy in AI cases where the harassment is automated or amplified by bots: prosecutors can rely on social media logs, account creation metadata, and harassment content.
Outcome: The court invalidated the local law as overly broad under the First Amendment.
Significance: Important caution for prosecutors: drafting statutes must carefully target harassment without infringing protected speech—especially relevant when AI generates content at scale. For AI‑enabled harassment prosecutions, precision in statute and charge matters.
Case 3: Recent U.S. Federal Case – AI Chatbots Used in Stalking (Massachusetts, 2025)
Facts: A man used AI chatbot creation platforms (e.g., CrushOn.ai, JanitorAI) to impersonate a university professor and create chatbots that lured strangers to her home address, plus shared manipulated images of her, stole her underwear, used her personal info, and harassed other women and a minor.
Legal Strategy: Although not yet fully reported as a published precedent, the prosecution strategy included tracing chatbot logs and platform usage, tracking bot responses programmed with the victim's personal data, linking physical stalking with digital tool usage, and combining cyberstalking counts with a count of child‑pornography possession.
Outcome: The defendant agreed to plead guilty to seven counts of cyberstalking and one count of child pornography possession.
Significance: Illustrates prosecution of AI‑generated impersonation and harassment using chatbots—a new frontier. Strategy emphasises forensic capture of chatbot configuration, linking personal data to chatbot prompts, and integrating harassment/physical safety risks with digital tools.
Case 4: Landmark Delhi High Court Ruling – AI‑Generated Deepfakes & Harassment (India, 2025)
Facts: A public figure (academic/activist) was targeted by a large‑scale online harassment campaign involving morphed images, AI‑generated pornographic deepfakes, and defamatory texts distributed via social media. The court issued injunctions against platforms (X Corp, Meta, Google) to remove content, identify uploaders, block URLs, and preserve the victim's anonymity.
Prosecution/Remedy Strategy: Although technically civil/regulatory rather than criminal, this decision influences strategy for criminal harassment too: it shows the importance of urgent interim relief, takedown orders, platform disclosure of user data, blocking of websites and ISPs, and recognition of AI‑generated media as a form of harassment.
Outcome: The court ordered platforms to remove content, block access, disclose uploader info, and protect victim’s identity.
Significance: Sets a precedent that harassment via AI‑generated content (deepfakes) is actionable; the remedies side informs prosecution strategy (evidence preservation, platform cooperation). Prosecutors can seek similar takedown and disclosure orders, then build the criminal case.
Case 5: Irish “Coco’s Law” – Harassment, Harmful Communications and Related Offences Act 2020 (Ireland, 2021 implementation)
Facts: While not a single case, the Act criminalises non‑consensual distribution of intimate images and harmful communications, with penalties of up to 7 years' imprisonment. It is relevant for AI‑enabled harassment (e.g., deepfake intimate images) since the Act covers “taking, distribution, publication or threat to distribute intimate images without consent …”
Strategy Implication: This legislation provides a model for prosecutors dealing with AI‑generated intimate image abuse: charges can apply to non‑consensual distribution even if images are synthetically generated and even if no original consensual image existed. The law emphasises intent to cause harm, and the act of publication or threat of publication.
Significance: Demonstrates legislative strategy: drafting new statutes enabling effective prosecution of AI‐enabled harassment (deepfakes, AI‑morphed images) rather than relying solely on old statutes designed for “real” images.
Case 6: Suhas Katti Revisited – Applying the IT Act to AI‑Enabled Harassment
Facts: As described in Case 1, the original case involved impersonation, harassing emails, online messages, and reputation damage.
Strategy Implication: In India, prosecutors used a combination of IPC (for forgery, impersonation) and IT Act (for transmission of obscene messages). For AI‑enabled harassment, the strategy extends: prosecutors may rely on IT Act cybersecurity provisions, data protection laws, intermediary liability rules, and digital evidence under Section 65B. Ensuring chain of custody of electronic records is key.
Significance: Shows that even in jurisdictions without full AI‑specific statutes, cyber‑harassment prosecutions can leverage existing frameworks—an important transitional strategy.
Emerging Strategic Insights for AI‑Enabled Harassment Prosecution
From these cases and strategies, we can extract actionable insights for prosecutors:
Ensure forensic readiness for AI‑generated content: capture metadata from generative‑AI tools, chatbot prompt/response logs, model usage records, timestamps, and device logs.
Preserve evidence rapidly and secure takedowns: AI‑generated harassment often spreads fast and may be deleted; use interim orders to freeze content and obtain disclosure from platforms.
Target intent and harm, not just technology: Charges should emphasise intent to harass, intimidate, or coerce the victim, rather than simply the fact of automation.
Leverage hybrid charges: Combine harassment/stalking offences with computer misuse, impersonation, defamation, data‑protection breaches or intimate‑image distribution laws.
Platform cooperation is critical: The ability to obtain uploader IPs, chatbot logs, and AI‑system access records from platforms is essential; the Delhi High Court case shows how injunctions can compel platforms to produce user information.
Develop appropriate statutes and guidance: Where existing laws lack focus on AI‑enabled harassment (deepfakes, bots, synthetic intimate images), legislative reform is required to criminalise these modalities explicitly.
Training and technical expertise: Prosecutors, law‑enforcement, cyber‑cells must understand generative‑AI, deepfake detection, forensic chain‐of‐custody of bot logs, etc.
Victim‑centred approach: Given scale and trauma of AI‑enabled harassment, prosecutors should integrate victim protection (anonymity, expedited relief) alongside criminal process.
International jurisdiction & cooperation: Because many AI harassment tools and platforms operate globally, cross‑border evidence, MLATs, platform compliance across jurisdictions matter.
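The preservation and platform‑cooperation insights above can be made concrete with a small sketch. The Python example below assembles a structured record of a preservation/disclosure request of the kind a prosecutor might track internally after obtaining an interim order. Every field and value is a hypothetical illustration: no statute, court, or platform API prescribes this format.

```python
import json
from datetime import datetime, timezone

def build_preservation_request(platform: str, content_urls: list,
                               legal_basis: str) -> dict:
    """Assemble an internal tracking record for a preservation/disclosure
    request to a platform. Fields are illustrative, not a legal form."""
    return {
        "platform": platform,
        "requested_at_utc": datetime.now(timezone.utc).isoformat(),
        "content_urls": content_urls,
        "legal_basis": legal_basis,
        # Typical asks reflected in the Delhi High Court order discussed above.
        "asks": [
            "preserve content and associated metadata",
            "disclose uploader account details and IP logs",
            "block or remove the listed URLs",
        ],
    }

if __name__ == "__main__":
    # Hypothetical example: URL and order reference are placeholders.
    req = build_preservation_request(
        "example-platform",
        ["https://example.com/post/123"],
        "interim injunction (order reference withheld)",
    )
    print(json.dumps(req, indent=2))
```

Keeping such records in a consistent machine‑readable form helps when requests span multiple platforms and jurisdictions, and when compliance must later be evidenced in court.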
Limitations and Gaps
Many jurisdictions lack explicit AI‑harassment statutes (e.g., India still has gaps in deepfake‑harassment laws).
Evidence‑gathering for AI‑generated content poses new forensic challenges: identifying synthetic origin, distinguishing human vs AI‑bot proxied harassment.
Free speech concerns: As People v. Marquan M. shows, over‑broad anti‑harassment statutes can fall foul of constitutional protections, so statutes and charges must be drafted narrowly.
Rapid technology evolution: Law and enforcement systems often lag behind development of new generative‑AI tools used for harassment.