AI Hallucinations Strike Again: Lawyers Caught Submitting Bogus Legal Documents
Another case of legal professionals relying on AI without fact-checking has surfaced, this time in Australia. Lawyers Rishi Nathwani and Amelia Beech, representing a teenager in a murder trial, submitted court documents riddled with errors, including fabricated citations and a misquoted parliamentary speech. The blunders highlight the growing problem of professionals trusting AI tools without verifying their output.
The incident is part of a troubling trend in which lawyers, paralegals, and other white-collar workers use AI to cut corners, only to be exposed when the technology's notorious hallucinations produce embarrassing, and sometimes legally consequential, mistakes. In this case, prosecutors caught the errors, forcing the defense team to scramble for corrections.
While AI can streamline research and drafting, its tendency to invent facts remains a critical flaw. Legal professionals, in particular, have a duty to ensure accuracy, yet some continue to treat AI as infallible. The consequences can range from reputational damage to jeopardizing entire cases.
This isn’t the first time AI has misled lawyers. Similar incidents have occurred in the U.S. and elsewhere, where attorneys submitted fake case law generated by chatbots. Courts are increasingly taking notice, with some jurisdictions considering stricter rules around AI use in legal filings.
The broader lesson is clear: AI is a tool, not a replacement for due diligence. Whether in law, finance, or crypto, relying on unchecked AI output is a gamble that can backfire spectacularly. Professionals who use these tools must verify every claim, citation, and fact. Otherwise, they risk more than embarrassment: they risk losing credibility when it matters most.