When Lawyers Trust ChatGPT More Than They Should 

A Utah attorney was sanctioned for submitting a legal brief, drafted with ChatGPT, that contained fictitious AI-generated citations.

On May 29, the case ignited the internet after a Utah lawyer was sanctioned for filing a legal brief full of fabricated citations generated by OpenAI’s chatbot, according to The Salt Lake Tribune.

According to the Utah Court of Appeals, in a document reviewed by ABC4, the brief cited a court case that does not exist. Richard Bednar, along with another Utah-based attorney, Douglas Durbano, filed a petition for interlocutory appeal. But the brief, which was put together by an unlicensed law assistant, contained citations that had never appeared in any legal database.

For example, the brief cited a case titled “Royer v. Nelson,” which exists only in ChatGPT’s responses and not in any legal database. The respondent’s counsel noted that parts of the petition appeared AI-generated, containing inaccurate quotations and references to unrelated cases.

Lawyers landing in trouble for using ChatGPT is becoming more common, part of a broader trend in which attorneys use large language models (LLMs) for legal research only to encounter AI hallucinations: fabricated cases and citations that don’t exist.

Most lawyers do not realize the mistake until opposing counsel or a judge points it out. On some occasions, such as in a 2023 aviation lawsuit, lawyers have had to pay penalties for submitting documents containing AI-generated hallucinations.
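To illustrate the kind of check that would have caught these citations, here is a minimal, hypothetical Python sketch that looks a case name up in CourtListener, a free public legal database. It is not a tool used in this case; the endpoint, parameters, and response fields are assumptions based on CourtListener’s published API, and a search hit only shows a case exists, not that it says what the brief claims.

```python
# Hypothetical sketch: verify a cited case name has at least one hit in
# CourtListener's public search API. Endpoint and response fields are
# assumed from its published docs; not the method used in this case.
import requests

COURTLISTENER_SEARCH = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_found(case_name: str) -> bool:
    """Return True if the quoted case name yields at least one opinion hit."""
    resp = requests.get(
        COURTLISTENER_SEARCH,
        params={"q": f'"{case_name}"', "type": "o"},  # "o" = court opinions
        timeout=10,
    )
    resp.raise_for_status()
    # An empty results list suggests the citation may be fabricated.
    return len(resp.json().get("results", [])) > 0

if __name__ == "__main__":
    for name in ["Royer v. Nelson", "Marbury v. Madison"]:
        status = "found" if citation_found(name) else "NOT FOUND, verify manually"
        print(f"{name}: {status}")
```

Even a rough check like this flags a name with zero database hits for human review, which is exactly the step the faulty brief skipped.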

What Happened to the Lawyer Who Used ChatGPT 

Once the errors were discovered, Bednar admitted to using ChatGPT and apologized.

When he appeared before a judge at an April hearing, he and his attorney both conceded that the faulty legal references came from ChatGPT and took responsibility for the brief.

Bednar explained that the law clerk who prepared the brief was not licensed and had not verified the information’s accuracy before submitting it.

The Utah Court of Appeals emphasized that while lawyers may use ChatGPT in legal research, they themselves have an obligation to verify the accuracy and legality of their work.

“We agree that the use of AI in the preparation of pleadings is a legal research tool that will continue to evolve with advances in technology. However, we emphasize that every attorney has an ongoing duty to review and ensure the accuracy of their court filings,” the court said in a statement to ABC4.

This mismanagement of the AI tool is what landed the lawyer in trouble. As a consequence of the fabricated citations, Bednar was ordered to pay the opposing party’s attorney fees, refund his client for time spent on the faulty filing, and donate $1,000 to a Utah legal aid organization called And Justice for All. He agreed to pay the associated fees.

The case showcases the risks of blind reliance on AI without human oversight, especially in a field like law, where truth and accuracy are paramount.

Final Thoughts 

A lawyer sanctioned for using ChatGPT underscores one foundational truth: LLMs, while impressive, can generate content that appears convincing at first glance yet is false or fabricated. No matter how advanced AI becomes, there is irreplaceable value in human critical thinking, legal expertise, and ethical judgment.

Technology should be seen as a beneficial tool, not a final authority. Human verification remains essential to ensure accuracy, especially in fields like law where mistakes can have serious consequences.  
