A.I. Abuse Is Reinventing the Law

2025-11-07

Copyright The New York Times

Earlier this year, a lawyer filed a motion in a Texas bankruptcy court that cited a 1985 case called Brasher v. Stewart. Only the case doesn't exist. Artificial intelligence had concocted that citation, along with 31 others. A judge blasted the lawyer in an opinion, referring him to the state bar's disciplinary committee and mandating six hours of A.I. training.

That filing was spotted by Robert Freund, a Los Angeles-based lawyer, who fed it to an online database that tracks legal A.I. misuse globally. Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it.

While judges and bar associations generally agree that it's fine for lawyers to use chatbots for research, they must still ensure their filings are accurate. But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.

"These cases are damaging the reputation of the bar," said Stephen Gillers, an ethics professor at New York University School of Law. "Lawyers everywhere should be ashamed of what members of their profession are doing."

Since the introduction of ChatGPT in 2022, professionals in fields from medicine to engineering to marketing have wrestled with how and when to use chatbots. Many companies are experimenting with the technology, which can come tailored for workplace use. For lawyers, a federal judge in New York helped set the standard when he wrote in 2023 that "there is nothing inherently improper" about using A.I., although lawyers must check its work. The American Bar Association agreed, adding that lawyers "have a duty of competence."

Still, according to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for A.I. blunders. Some of those stem from people's use of chatbots in lieu of hiring a lawyer. Chatbots, for all their pitfalls, can help those representing themselves "speak in a language that judges will understand," said Jesse Schaefer, a North Carolina-based lawyer who contributes cases to the same database as Mr. Freund.

But an increasing number of cases originate among legal professionals, and courts are starting to map out punishments of small fines and other discipline. The problem, though, keeps getting worse.

That's why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it. Initially he found three or four examples a month. Now he often receives that many in a day. Many lawyers, including Mr. Freund and Mr. Schaefer, have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like "artificial intelligence," "fabricated cases" and "nonexistent cases."

Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges' opinions scolding lawyers. Peter Henderson, a Princeton computer science professor who started his own A.I. legal misuse database, said his lab was working on ways to find fake citations directly rather than relying on hit-or-miss keyword searches.

The lawyers say they don't intend to shame or harass their peers. Mr. Charlotin said he avoided prominently displaying the offenders' names for that reason. But Mr. Freund said a benefit of a public catalog was that anyone could see whom they "might want to avoid." And in most cases, Mr. Charlotin added, "the attorneys are not very good."

Eugene Volokh, a law professor at the University of California, Los Angeles, blogs about A.I. misuse on The Volokh Conspiracy. He has written about the issue more than 70 times and contributes to Mr. Charlotin's database. "I like sharing with my readers little stories like this," Mr. Volokh said, "stories of human folly."

One involved Tyrone Blackburn, a New York lawyer focusing on employment and discrimination, who used A.I. to write legal briefs that contained numerous hallucinations. At first he thought the defense's allegations were bogus, Mr. Blackburn said in an interview. "It was an oversight on my part." He eventually admitted to the errors and was fined $5,000 by the judge.

Mr. Blackburn said he had been using a new legal A.I. tool and hadn't realized it could fabricate cases. His client, whom he was representing free of charge, fired him and filed a complaint with the bar, Mr. Blackburn added. (In an unrelated matter, a New York grand jury indicted Mr. Blackburn last month on allegations that he rammed his car into a man trying to serve him legal documents. Attempts to reach Mr. Blackburn for additional comment failed.)

Court-ordered penalties "are not having a deterrent effect," said Mr. Freund, who has publicly flagged more than four dozen examples this year. "The proof is that it continues to happen."
