Sept 16 (Reuters) – As more and more phony case citations and other artificial intelligence errors show up in legal filings, some judges are experimenting with alternatives to traditional court sanctions to deter lawyers from relying on chatbots and other AI tools without checking their work.
Cozen represents internet services provider Uprise in the civil case, which stemmed from a failed fiber optic project. The order came after Hardy identified at least 14 case citations submitted by the lawyers that appeared fictitious, and others that were misquoted or misrepresented.
The attorneys and Cozen O’Connor apologized to the judge and explained that one of the lawyers, associate Daniel Mann, used ChatGPT to help draft and edit the document and accidentally filed an early, uncorrected draft.
Mann has since been terminated by the firm, according to a filing. He and the other Cozen lawyer, Jan Tomasik, did not immediately respond to requests for comment.
A Cozen spokesperson said the firm doesn’t comment on employee status. The firm said it has a “strict and unambiguous” AI policy that includes a ban on publicly available AI tools for client work, technology training, and a requirement that lawyers verify their work.
“We take this responsibility very seriously and violations of our policy have consequences,” the firm said. Uprise did not immediately respond to a request for comment.
Judges have issued warnings and sanctions, including hefty fines, fee awards to opposing counsel and disciplinary referrals, in dozens of cases against lawyers whose filings contained made-up citations, misquoted material and other so-called “hallucinations” produced using AI.
Professional conduct rules don’t bar lawyers from using AI, but attorneys in every state can be disciplined for failing to vet court submissions.
“There’s this increased frustration by many judges that these continue to occur and proliferate,” said Gary Marchant of Arizona State University’s law school, who gives talks to judges and lawyers about AI. Judges may be “looking for creative ways to really, not only punish the lawyers involved, but to send a strong message that would hopefully deter others from being so sloppy.”
Hardy isn’t the first judge to try new approaches. In at least two cases, courts have required lawyers to alert the judges whom they falsely cited as authors of non-existent cases. In August, an Arizona federal judge said warning her counterparts about the AI hallucinations would “minimize potential harms of those fictitious cases to those jurists.” The judge also removed the lawyer from the case.
In another recent case, Alabama U.S. District Judge Anna Manasco reprimanded three lawyers at law firm Butler Snow for misusing AI and disqualified them from the case she was overseeing. She also ordered them to share a copy of her decision with their clients, with the judges and opposing counsel in their other cases pending anywhere in the country, and with every lawyer at Butler Snow.
Reporting by Sara Merken
Our Standards: The Thomson Reuters Trust Principles.
Sara Merken reports on the business of law, including legal innovation and law firms in New York and nationally.