Research on AI-Assisted Financial Fraud Detection and Prosecution

Emerging AI-Related Offenses Under Singapore Cybercrime Law: Case Analysis

In Singapore, AI-related offenses are usually prosecuted under existing frameworks such as the Computer Misuse Act 1993 (CMA), the Personal Data Protection Act (PDPA), and professional conduct rules. While there are no AI-specific statutes yet, courts have increasingly encountered cases involving AI tools in digital crime, data theft, and legal misconduct.

1. Koh Keng Leong Terence v. Zhang Changjie (2023) – Insider Data Theft

Facts:

Zhang, a former employee of a trading firm, copied thousands of proprietary files (trading algorithms, macros, and customer data) to personal cloud storage before resigning.

He attempted to delete traces of his copying.

Legal Issue:

Whether unauthorized access to and copying of company data constitute an offense under Section 3(1) of the CMA.

Court Decision:

Zhang was convicted under the CMA for unauthorized access and copying of computer material.

Penalty: A fine of SGD 5,000.

Significance:

First successful private prosecution in Singapore for internal data theft under the CMA.

Confirms that insider misuse of company data — even without hacking in the classical sense — falls under the CMA.

Relevant to AI-era workplaces where proprietary data (e.g., training datasets, algorithms) can be easily exfiltrated.

2. Tajudin bin Gulam Rasul & Anor v. Suriaya bte Haja Mohideen [2025] SGHCR 33 – AI-Generated Fictitious Case Citations

Facts:

Lawyers filed submissions citing cases that were entirely fabricated, likely generated by an AI tool.

Opposing counsel discovered the citations were fictitious.

Legal Issue:

Can submitting AI-generated but false legal authorities constitute professional misconduct?

Court Decision:

Lawyers were sanctioned with personal costs orders for failing to verify AI-generated citations.

Court emphasized that AI cannot replace professional diligence.

Significance:

First Singapore case explicitly sanctioning AI-generated “hallucinations” in legal filings.

Establishes that misuse of AI in legal work is actionable under professional conduct rules.

3. Lalwani Anil Mangan v. Opponent (2025) – Similar AI-Generated Citation Misuse

Facts:

A junior lawyer submitted court documents with references to a non-existent case generated by AI.

Court Decision:

Lawyer ordered to pay SGD 800 in costs for wasting judicial resources.

Court stressed that all AI-generated content must be independently verified.

Significance:

Reinforces judicial intolerance for unverified AI content in legal proceedings.

Highlights professional responsibility when using AI tools.

4. Unauthorized Access & Hacking Offenses (2025 Cases)

Facts:

A 34-year-old man was arrested for hacking multiple online accounts and committing fraudulent purchases while abroad.

Separate case: three foreign nationals were convicted for participating in a cybercrime syndicate with malware and unauthorized access in Singapore.

Legal Issue:

Whether unauthorized access to computer material and possession of programs capable of committing cybercrime constitute offenses under Sections 3(1) and 6 of the CMA.

Court Decision:

Convictions were secured; penalties included fines and imprisonment depending on severity.

Significance:

Shows the CMA remains robust against both classical and AI-assisted cybercrime.

Illustrates that cybercrime laws can cover scenarios where AI tools might assist in hacking or automation.

5. Misuse of AI in Financial and Commercial Contexts (Hypothetical / Analytically Derived from Existing Trends)

While there are as yet no reported criminal convictions in which AI was the principal instrument of the offense, cases like Zhang Changjie demonstrate how existing provisions could apply:

If an employee uses AI to automate the exfiltration of data (e.g., scraping confidential datasets or automating insider trading), CMA provisions on unauthorized access and intent to cause loss could be applied.

The courts interpret “computer material” broadly, so AI-assisted manipulation or theft falls under existing statutes.

Significance:

AI does not create immunity; misuse can be prosecuted under existing cybercrime laws.

Provides guidance for firms to implement monitoring and compliance for AI-related data handling.
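The monitoring suggested above can be illustrated with a minimal sketch. The audit-log entries, user names, and flagging threshold below are all hypothetical, and a real compliance system would draw on far richer signals; this only shows the basic idea of flagging unusually high volumes of copy events for human review.

```python
from collections import Counter

# Hypothetical audit-log entries: (user, action, file).
# All names and the threshold below are illustrative only.
AUDIT_LOG = [
    ("alice", "read", "report_q1.xlsx"),
    ("bob", "copy", "algo_v1.py"),
    ("bob", "copy", "algo_v2.py"),
    ("bob", "copy", "clients.csv"),
    ("bob", "copy", "macros.xlsm"),
    ("alice", "read", "report_q2.xlsx"),
]

def flag_bulk_copying(log, threshold=3):
    """Return users whose 'copy' events meet or exceed the threshold,
    a crude proxy for possible bulk exfiltration worth human review."""
    copies = Counter(user for user, action, _file in log if action == "copy")
    return sorted(user for user, count in copies.items() if count >= threshold)

if __name__ == "__main__":
    print(flag_bulk_copying(AUDIT_LOG))
```

In this toy data, only "bob" crosses the threshold (four copy events), so the function returns a single flagged user for compliance follow-up.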

Key Legal Takeaways for Emerging AI-Related Offenses

CMA is broadly interpreted: Unauthorized access, copying, or exfiltration of data is actionable, even if AI assists the offender.

Professional diligence is mandatory: Lawyers and professionals must verify AI-generated content. Failing to do so can lead to personal cost orders and sanctions.

No AI-specific statutes yet: Courts use existing law creatively to address new AI-enabled risks.

Insider threats are prominent: Automated or AI-assisted theft by employees can trigger CMA liability.

Cybercrime enforcement remains strong: Traditional hacking, malware, and fraud remain prosecutable, including scenarios where AI tools are used.

Conclusion

Singapore law has yet to define AI-specific offenses. However, courts have already dealt with AI-related risks under existing statutes, particularly the CMA and professional ethics rules. The trend shows:

AI-assisted misconduct is actionable.

Judicial scrutiny of AI-generated content is increasing.

Insider misuse, data theft, and automated cybercrime are covered under current cybercrime laws.

This creates a strong framework for regulating emerging AI-related offenses even without new legislation.
