
Judge sanctions attorney misled by AI

By: Bridgetower Media Newswires//February 29, 2024//


A seasoned Massachusetts practitioner recently joined a growing, unwelcome, involuntary professional association composed of lawyers sanctioned for introducing false or misleading information into court after being led astray by artificial intelligence systems.

In this attorney’s case, the AI tool had “hallucinated,” responding to a query by supplying at least four case names and corresponding citations that simply did not exist.

Unlike another legal team famously ensnared by the new technological trap, the Massachusetts attorney expressed contrition and sought to correct the record quickly, which factored into a monetary sanction — $2,000 — that Superior Court Judge Brian A. Davis called “mild given the seriousness of the violations that occurred.”

The issue arose in the case Smith v. Farwell, et al., brought by the aunt and personal representative of the estate of a 23-year-old woman who died by apparent suicide in February 2021, after years of allegedly being sexually exploited by three Stoughton police officers and the town’s animal control officer.

In reviewing the plaintiff’s four oppositions to the defendants’ motions to dismiss, Davis noticed that the legal citations concerning the elements of a wrongful death action “seemed amiss.” The judge searched for several hours but was unable to locate three of the cases cited in the filings, and a fourth bogus case came to light after a “more searching review.”

The attorney followed up in a Nov. 6 letter to the court, acknowledging that the oppositions had “inadvertently” included citations to multiple cases that “do not exist in reality.” Members of his office had used an unidentified “AI system” in preparing the filings, he explained.

At a sanctions hearing on Dec. 7, the attorney further explained that the oppositions had been drafted by a trusted associate and two recent law school graduates who had not yet passed the bar. When he reviewed the documents, the attorney said he had checked them for style, grammar and flow, but not the accuracy of the citations.

The attorney again apologized and represented that there had been no intent to deceive the court.

Davis opened his discussion of the appropriate sanction by referencing an order issued on June 22 by U.S. District Court Judge P. Kevin Castel in the New York case Mata v. Avianca, Inc., the most notorious of the reported cases of attorney AI misuse to emerge thus far.

“Many harms flow from the submission of fake opinions,” Castel had written.

AI “hallucinations” are deceptive and difficult to detect, the judge noted. In some cases, AI has even falsely identified real people as accused parties in lawsuits or fictitious scandals, he added.

Davis noted that, unlike the attorney in the case before him, the lawyers in Mata had not readily retreated from their numerous citations to fictitious cases. Those lawyers were fined $5,000 and ordered to mail letters enclosing a copy of the court’s sanction decision to their client and to each of the federal judges who had been cited falsely, and to file copies of those letters with the court.

Davis concluded that the attorney in Smith had violated his duty under Rule 11 to undertake a “reasonable inquiry” into the accuracy of the case citations.

“Simply stated, no inquiry is not a reasonable inquiry,” he wrote.

Davis said his “restrained sanction” reflects the mitigating circumstances in the case, including the attorney’s unfamiliarity with AI technology and lack of knowledge that an AI system had been used in preparing the oppositions. The fictitious case citations had been included in the oppositions “in error and not with the intention of deceiving the court,” the judge wrote.

But the “broader lesson” for the entire bar is that all attorneys are obligated under Massachusetts Rules of Civil Procedure 11 and 7 to know whether AI technology is being used in the preparation of court papers and to ensure that appropriate steps are taken to verify the truthfulness and accuracy of any AI-generated content before the papers are submitted, the judge explained.

“The blind acceptance of AI-generated content by attorneys undoubtedly will lead to other sanction hearings in the future, but a defense based on ignorance will be less credible, as the dangers associated with the use of Generative AI systems become more widely known,” the judge wrote.

John F. Weaver, who chairs the artificial intelligence practice at McLane Middleton, says the judge performed an “immense kindness” in refusing to name the attorney in Smith, noting that the Mata lawyers’ “unfortunate contributions” to the development of AI law will be in the first paragraphs of their obituaries.

The Woburn, Massachusetts, lawyer suspects there may be a follow-up action by the Board of Bar Overseers, which Davis seemed to be teeing up with a footnote referencing the Rules of Professional Conduct implicated by AI use.

With the BBO, too, the attorney in Smith could benefit from his contrition and swift response, unlike in Mata, where the attorneys “had multiple exit ramps and refused to get off the highway,” Weaver says.

In a sense, the Smith case was “the perfect storm,” Weaver says: a younger, more tech-savvy associate inclined to experiment with AI was supervised by a lawyer with decades of experience who still relied on hard-copy sources for his legal research. Indeed, the decision notes that the supervising attorney subscribed to LEXIS only after the problem with the oppositions came to light.

“Hopefully, this kind of decision gets publicized, and more bar associations publish pieces about it, there are more CLEs, and people who are in those demographics get the warnings that they need,” Weaver says.

Though the judge’s admonition is timely, the role AI played in the Smith case is “a bit of a red herring,” says Boston professional liability attorney Michael J. Rossi.

“The lawyer in this case was not sanctioned because he or someone in his firm used AI to draft a brief,” the Conn Kavanaugh Rosenthal Peisch & Ford partner says. “This case is really about the importance of staying on top of technological changes in the practice of law, whether those changes have to do with data privacy, electronic discovery, AI or other technologies that are available to us.”

There is a “branch in the evolution” of AI tools, notes Timothy V. Fisher, a patent attorney and electrical engineer who is part of the artificial intelligence and machine learning practice group at Pierce Atwood in Boston.

Publicly available AI tools do not apply rules but rather imitate their training data, he says.

“It doesn’t have a conceptual understanding of what it’s giving you,” he says. “It just knows that it looks good.”

On the other “evolutionary branch” are tools tailored for the legal profession, which in time will have “guardrails” to ensure that they quote cases exactly and return the correct citation for each.

“But again, an attorney has a responsibility to make sure what they file with the court has been checked and is accurate, and it doesn’t matter how it was created or written or generated,” Fisher says.
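
Fisher’s “guardrails” point can be illustrated with a minimal sketch of the kind of check a firm might run before filing: extract every citation from an AI-assisted draft and flag anything that cannot be confirmed against a verified source. The snippet below is only an illustration of that idea, not a description of any existing legal AI product; the KNOWN_CITATIONS set and the simple citation pattern are placeholders standing in for a real reporter database and a real citation parser.

import re

# Placeholder standing in for a verified citation source (an official
# reporter index or research database); the entries are illustrative only.
KNOWN_CITATIONS = {
    "123 Mass. 456",
    "120 Mass. 789",
}

# Loose pattern for a Massachusetts reporter citation, e.g. "123 Mass. 456".
CITATION_PATTERN = re.compile(r"\b\d{1,3}\s+Mass\.\s+\d{1,4}\b")

def flag_unverified_citations(draft_text: str) -> list[str]:
    """Return every citation in the draft that cannot be confirmed."""
    found = CITATION_PATTERN.findall(draft_text)
    return [cite for cite in found if cite not in KNOWN_CITATIONS]

if __name__ == "__main__":
    # Hypothetical draft text; the second citation will be flagged.
    draft = "See Doe v. Roe, 123 Mass. 456 (1980); Poe v. Loe, 321 Mass. 654 (1999)."
    for cite in flag_unverified_citations(draft):
        print(f"UNVERIFIED: {cite} -- confirm against the official reporter before filing")

Whatever form such a check takes, the point Fisher and Judge Davis both make is that the verification step rests with the human attorney, not the tool.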
