From US to European courtrooms, AI hallucinations in law are a growing concern. Lawyers who use ChatGPT say the tool speeds up research and writing, but they warn that it can fabricate information.
AI-drafted legal briefs have raised alarms over accuracy and accountability. Many lawyers are finding that the careful review expected in the practice of law simply cannot be replaced by AI.
Firms and judges are questioning the ethics and reliability of AI legal work. Judges are now issuing warnings, and at times fines, to lawyers who fail to double-check what their intelligent tools generate.
Lawyers Using ChatGPT
Court records now show a growing number of cases linked to AI hallucinations in law. French attorney and researcher Damien Charlotin has tracked 490 legal filings containing false or misleading information produced by generative AI.
“Even the more sophisticated player can have an issue with this. AI can be a boon. It’s wonderful, but also there are these pitfalls,” Charlotin explained.
His database shows that lawyers are relying too heavily on the technology without exercising enough due diligence over their AI tools.
In one US case, counsel for MyPillow Inc. filed a brief with 30 fabricated citations, an example of everything that can go wrong with AI-generated legal work. A clearly irritated judge presiding over the case called out the misuse of the technology and reminded legal professionals that accuracy is non-negotiable.
Judges are taking such practices seriously, with many issuing warnings, and even penalties, against lawyers who do not double-check their AI work.
Lawyers say that failing to thoroughly review AI output could soon amount to malpractice, as courts begin to define the limits of liability in automated legal work.
Legal AI Teaching Lawyers to Work Smarter
Despite these risks, some firms are treating the challenge as an opportunity to improve how their lawyers use AI legal research tools.
One of the world’s largest law firms, Latham &amp; Watkins, held a two-day “AI Academy” in Washington, DC, training more than 400 new lawyers in the smart and ethical use of AI.
“Turning away from it as opposed to embracing it is just not an option,” said partner Michael Rubin.
Rubin emphasized that AI can improve efficiency and client service, but only if used responsibly. In parallel, senior partner Fiona Maclean stressed that legal AI boosts productivity but must never replace human judgment.
Lawyers and other professionals across the industry are learning that AI-generated errors in court can damage reputations and ultimately erode trust in a professional’s capacity to handle a case. Firms are reminding employees that lawyers using AI must check every fact, case, and citation before submission.
The key, experts say, is using AI to support, not replace, the expertise of trained attorneys.
“These cases are damaging the reputation of the bar. Lawyers everywhere should be ashamed of what members of their profession are doing,” says Stephen Gillers, an ethics professor at NYU.
It was always inevitable that AI hallucinations would affect the legal profession. But with careful oversight, due diligence, and stronger training, generative AI can still be trusted as a partner in law.