In Ontario, family lawyers have a host of obligations to their clients and to the court. These arise from numerous sources, including the mandates set out by the Law Society of Ontario, the profession's regulator. Under that body's Rules of Professional Conduct (rule 3.1-1), a "competent lawyer" is defined as one who "has and applies relevant knowledge, skills and attributes in a manner appropriate to each matter undertaken on behalf of a client including: … (i) legal research; … (iv) writing and drafting."
Although U.S. lawyers are governed by similar rules, at least two of them have tried to take a shortcut, courtesy of the AI chatbot ChatGPT, and they have been called out on it.
As reported on the website of the American Bar Association Journal, two New York lawyers are facing possible sanctions because they submitted documents to the court that were created by ChatGPT and contained references to prior court rulings that did not actually exist.
The lawyers had been hired to represent a plaintiff in his lawsuit against an airline, sparked by the personal injuries he suffered when he was struck by a metal serving cart in flight. In the course of representing their client, the lawyers filed materials that the presiding judge realized were "replete with citations to nonexistent cases". The materials referenced at least six decisions that were entirely fake, and contained passages citing "bogus quotes and bogus internal citations".
All of this was uncovered after the judge asked one of the lawyers to provide a sworn affidavit attaching copies of some of the cases cited in the filed court materials.
The one lawyer's explanation was simple (in a pass-the-buck kind of way): he said he had relied on the work of another lawyer at his firm. That lawyer, who had 30 years of experience, explained that while he had indeed relied on ChatGPT to "supplement" his legal research, he had never used the AI platform before and did not know that its output could be false.
The judge has now ordered them to appear at a "show cause" hearing to defend their actions and explain why they should not be sanctioned.
As an interesting postscript: in the aftermath of these accusations, one of the lawyers typed a query into ChatGPT, asking whether the earlier-provided cases were real. ChatGPT confirmed (incorrectly) that they were, adding that they could be found in "reputable legal databases". Apparently, the judge was not impressed.
Additional coverage:
- The judge’s ruling in the matter: Mata v. Avianca, Inc. (May 4, 2023 order)
- A New York Times article
- The Volokh Conspiracy website (here and here)