In an unusual collision of artificial intelligence and legal practice, two New York lawyers have been sanctioned for citing non-existent cases in a client's court proceedings.
Steven Schwartz and Peter LoDuca, of the law firm Levidow, Levidow & Oberman, included citations to six fictitious cases generated by the AI chatbot ChatGPT in a legal brief.
When AI assistance goes awry
Schwartz had used ChatGPT to help research a personal injury case against Avianca, the Colombian airline.
Schwartz admitted he had unwittingly included the false citations in the brief, which LoDuca signed as the sole attorney of record.
The problem surfaced as early as March, when Avianca's legal team reported that it could not locate several of the cases cited in the brief. A subsequent examination revealed that some of the cited cases were fabricated.
Legal repercussions and ethical implications
In response, U.S. District Judge P. Kevin Castel in Manhattan ordered Schwartz, LoDuca, and their firm to pay a $5,000 penalty.
The judge criticized the lawyers for their “acts of conscious avoidance and false and misleading statements to the court.”
The lawyers argued they had acted in good faith and were astonished that an AI could fabricate cases, but Judge Castel ruled against them.
The court held that while using AI for assistance is not inherently improper, lawyers bear an ethical responsibility to verify the accuracy of their filings.
The judge further noted that the attorneys had stood by the fake citations even after the court and Avianca challenged their authenticity.
He also directed the attorneys to notify the real judges falsely identified as the authors of the non-existent cases about the sanctions imposed.
Despite the controversy stirred by the use of ChatGPT, Avianca's legal team welcomed the outcome. Bart Banino, representing the airline, said he was pleased that the personal injury case itself was dismissed, which he stated had been filed too late.
Levidow, Levidow & Oberman, the firm at the center of the controversy, responded to the court order by saying it "respectfully" disagreed with the finding that it had acted in bad faith.
Schwartz declined to comment on the decision, and LoDuca did not respond to requests for comment. Their lawyers are currently reviewing the court's ruling.
In summary, this incident serves as a stark reminder of the critical role of ethical conduct in the legal profession, particularly when integrating emerging technologies like AI.
It also underlines the necessity for thorough fact-checking and a certain level of skepticism when dealing with AI-generated information.
As AI continues to advance, professionals across all sectors must ensure they handle these technologies responsibly and ethically to avoid similar pitfalls.