Magistrate warns of AI “hallucinations” after self-represented father cited fictitious cases in PPO application
A Singapore family court has dismissed a man’s application for a Personal Protection Order (PPO) after finding that he cited 14 non-existent legal cases generated by ChatGPT in his submissions.
Fake Cases in PPO Application
The case involved a divorced couple who sought PPOs against each other, with applications also involving their two daughters. Magistrate Soh Kian Peng noticed that the father’s written submissions cited 14 legal precedents, none of which existed.
Some cases were “quite obviously fictitious,” while others initially appeared legitimate but proved false upon scrutiny. The man admitted he had relied on ChatGPT to identify relevant precedents but had failed to verify them before submission.
Court’s Position on AI Use
Magistrate Soh highlighted that court users remain fully responsible for all content in documents submitted. While AI tools can assist in legal research, users must ensure accuracy. Failure to do so could result in documents being disregarded or costs imposed.
To guard against future misuse, the magistrate ordered the man to declare in writing whenever he uses generative AI in preparing court documents, and to confirm compliance with existing guidelines.

Outcome of the Case
After reviewing CCTV footage of alleged incidents, the magistrate concluded that neither parent committed family violence and that both cared for their children in their own ways. All PPO applications were dismissed.
However, the father was ordered to pay S$1,000 in legal costs to his ex-wife. He was also cautioned that citing false or AI-generated cases undermines the legal system, as courts must be able to trust precedents presented to them.
Broader Legal Warning
Soh emphasized that fictitious case citations erode confidence in the common law system. “If common law is to continue operating in the modern age alongside technology such as generative AI, the courts must be able to take, at face value, the cases which parties have cited,” he said.
He stressed that using generative AI is not prohibited but must be done responsibly. The judgment reflects Singapore’s effort to strike a balance between technological adoption and legal integrity.
The case highlights both the promise and pitfalls of AI in legal proceedings. While generative AI may aid litigants, unchecked reliance on its output risks undermining the credibility of court processes. Singapore’s measured approach signals that technology may assist justice, but never replace diligence and accountability.
Sources: CNA (2025), Must Share News (2025)
Keywords: Singapore Court, ChatGPT Cases, AI Hallucinations, Family Protection Order, Fake Citations, Legal Responsibility