Photo Credit: TSViPhoto/Shutterstock
In a legal case that underscores the growing role of artificial intelligence in the legal profession, an airline challenged the credibility of references cited in a plaintiff’s legal brief. The presiding judge, Judge Castel, raised concerns over the authenticity of six judicial decisions cited in the brief, suggesting that they might be fabricated. The revelation prompted a wider inquiry into the accuracy and legitimacy of legal research in the age of AI.
What sets this case apart is that the legal research in question was not conducted by the plaintiff’s attorney, Peter LoDuca, but by a colleague at the same law firm, Steven A. Schwartz. Mr. Schwartz, an attorney with over three decades of experience, had taken the unusual step of employing ChatGPT, a powerful AI tool, to search for relevant precedents. It was his first foray into AI-driven research, and he was, as he states in his written statement, “unaware that its content could be false.”
The AI in question, ChatGPT, is known for providing answers in a conversational, human-like manner, which makes it an attractive tool for many industries, including law. When the airline’s attorneys questioned the authenticity of specific legal cases, including Varghese v. China Southern Airlines Co Ltd, ChatGPT assured Mr. Schwartz that the cases were genuine and could be found in established legal reference databases such as LexisNexis and Westlaw. On the strength of that assurance, Mr. Schwartz stood by the references in the plaintiff’s legal brief.
Both Mr. LoDuca and Mr. Schwartz, who practice law at the firm Levidow, Levidow & Oberman, now face the prospect of disciplinary action. An 8 June hearing has been scheduled to address the matter. This case has ignited a conversation in the legal community about the implications and potential hazards of relying on AI for legal research. It serves as a stark reminder that while AI can be a valuable tool, the responsibility for verifying the authenticity of the information it generates ultimately rests with human practitioners.
This incident highlights broader concerns about the use of AI in the legal field. While AI-powered tools like ChatGPT offer speed and convenience, they also raise important questions about misinformation and bias. The episode is a cautionary tale for legal professionals tempted to embrace AI without thoroughly vetting the information it provides, and a call for greater awareness and scrutiny in the evolving landscape of AI-assisted legal research.