By Rashanda Michelle Mc Kenna

A New York Lawyer's Use of ChatGPT Leads to Embarrassing Fake Case Citations & $5,000 in Sanctions.

Updated: Nov 11

"A New York Lawyer's Use of ChatGPT Leads to Embarrassing Fake Case Citations: A Cautionary Tale for Legal Professionals"


Stock image by Solen Feyissa


Artificial intelligence, in its many manifestations, has progressively seeped into a vast array of sectors, offering unprecedented efficiencies and cutting-edge solutions. Yet when these potent tools are mishandled or misunderstood, particularly within the intricate labyrinth that is legal practice, the ramifications can be strikingly profound. A recent matter involving New York attorney Steven Schwartz and OpenAI's ChatGPT magnifies this quandary. Acting on behalf of a client in personal injury litigation against Avianca Airlines, Schwartz turned to the AI system for legal research and unwittingly submitted spurious case citations to the court. The incident has become a cautionary tale at the crossroads of AI innovation and legal practice. In the sections that follow, we will dissect the details of this occurrence, explore its far-reaching repercussions for the legal industry, and extract critical insights about the employment and oversight of AI within the juridical landscape.


The series of unfortunate events came to a head with an affidavit filed with the court on April 25, 2023. The document pertained to litigation initiated in 2022 by Roberto Mata against Avianca Airlines, asserting that Mata suffered injuries on board as a consequence of an employee's negligence.


Mata alleged that his knee was struck by a metal service cart during a flight from El Salvador to New York in August 2019, resulting in injury.


Spearheading Mata's legal representation was Steven Schwartz, a seasoned attorney with over thirty years of professional experience at the law firm of Levidow, Levidow & Oberman. At first, the lawsuit appeared to follow the conventional progression of a personal injury claim. It then veered off in an entirely unforeseen direction.


Schwartz, in his quest to prepare a robust legal brief, decided to incorporate an innovative tool into his legal research process: ChatGPT, an artificial intelligence program developed by OpenAI. ChatGPT had been making waves across numerous industries, and Schwartz believed that the tool would serve as an effective resource for unearthing relevant case precedents.


Avianca asked US District Judge P. Kevin Castel of the Southern District of New York to dismiss the lawsuit on the grounds that the statute of limitations had expired. Countering this, Mata's legal team filed a 10-page brief citing more than half a dozen court decisions. At least six of the citations in Schwartz's brief were later revealed to be non-existent, among them Varghese v. China Southern Airlines, Martinez v. Delta Air Lines and Petersen v. Iran Air, cases that baffled both Judge Castel and the defence lawyers representing Avianca Airlines.


When Avianca's legal counsel reviewed the brief, they were unable to find these cases, and Judge Castel directed Mata's attorneys to furnish copies. The lawyers then submitted a compilation of the referenced decisions. Astonishingly, the cases did not exist.


Schwartz's reliance on the AI-powered tool led to a significant professional blunder. Not only did ChatGPT fabricate these cases; it also provided completely fictitious quotes and internal citations, which were inadvertently included in Schwartz's research and subsequent court submission. This was a stark departure from the norm, in which legal research tools provide access to authenticated databases of judicial decisions.


Schwartz later confessed in a declaration filed with the court that he had been introduced to ChatGPT by a college friend, that he had never utilised it in a professional capacity, and that he had believed ChatGPT had greater reach than standard databases.


Unfolding the Controversy


Pictured: Attorney Peter LoDuca


Upon determining that the cases were in fact fake, Judge Castel reacted with disbelief and concern. "The court is presented with an unprecedented circumstance," he noted in an order. Never before had a case come before him, or likely any other judge, in which the research presented was not merely flawed but entirely fictitious, and the byproduct of an AI tool no less.


At a hearing on June 8, the judge admonished Schwartz and his co-counsel, Peter LoDuca, for being fooled by "legal gibberish." Schwartz had admitted in an affidavit on May 24 that he used ChatGPT "to supplement the legal research performed" and to find cases, because he had been "unaware of the possibility that its content could be false."


Schwartz's affidavit openly acknowledged his oversight and expressed regret for his decision to rely on ChatGPT as a legal research source. His admission that he was "unaware of the possibility that its content could be false" shines a light on a significant gap in the understanding of AI's capabilities and limitations.


Schwartz further lamented: "I simply had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions, especially in a manner that appeared authentic," he wrote in a declaration on June 6. "I deeply regret my decision to use ChatGPT for legal research, and it is certainly not something I will ever do again." In court, he told the judge: "I heard about this new site, which I falsely assumed was, like, a super search engine."


Accepting full responsibility for the consequences of his actions, Schwartz faced the legal repercussions head-on. Judge Castel, clearly concerned about the severity of the situation, called a sanctions hearing to explore whether Schwartz should face disciplinary action over the fabricated case citations.


During the hearing, both Schwartz and his associate, Peter LoDuca, found themselves under intense scrutiny. They were put under oath and questioned aggressively about their actions before and after the document containing the bogus citations was filed. The spectacle was a stark reminder of the weighty consequences of such a misstep.


In defence of Schwartz, his lawyers argued that although his conduct was inexcusable, it did not meet the standard required for sanctions. Their stance was predicated on the idea that Schwartz had not acted in bad faith. Instead, they contended that it was a careless error stemming from a lack of understanding of the AI technology and its potential to generate false information. Despite the scale of the error, the defence insisted there was no malevolent intention underlying Schwartz's actions.


Ultimately, on Thursday, June 22, 2023, US District Judge P. Kevin Castel in Manhattan ordered lawyers Steven Schwartz and Peter LoDuca and their law firm, Levidow, Levidow & Oberman, to pay a $5,000 (£3,935) fine in total. Judge Castel made it clear they had violated a basic precept of the American legal system.


“Many harms flow from the submission of fake opinions,” the judge wrote. “The opposing party wastes time and money in exposing the deception. The court’s time is taken from other important endeavors.”


The lawyers’ action, he added, “promotes cynicism about the legal profession and the American judicial system. A future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”


The Intersection of Technology and Law: Lessons Learned


The Schwartz case serves as an instructive warning beacon at the crossroads of legal practice and AI technology. It emphasises the increasingly influential role of AI within the legal sector, while underscoring the fundamental need for caution, diligent examination and comprehensive understanding when deploying these sophisticated tools. The episode further underlines the need for legal practitioners to stay well informed about accredited tools explicitly crafted for the legal realm.


This isn't an isolated instance of novel technology surprising legal professionals, but the potential ramifications are notably substantial here. This is primarily due to the judiciary's dependence on precise and authenticated case law to dispense justice.


The lawyer's misunderstanding of ChatGPT's capabilities and outputs underlines a critical takeaway: although AI systems can enhance human abilities and efficiency in legal research, they cannot substitute for human discernment and analysis. Authenticating and verifying AI-generated legal research remains an essential duty of the legal practitioner. As this case illustrates, reliance on AI must not undermine the meticulousness and accuracy that form the foundational pillars of legal practice.
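Verification need not be onerous. As a purely illustrative sketch, and not the workflow of anyone involved in this case, the Python snippet below checks each AI-suggested citation against CourtListener's free opinion search. The endpoint URL, query parameters and response fields are assumptions based on CourtListener's public REST API and should be confirmed against its current documentation.

```python
# Minimal sketch: flag AI-suggested citations that no authoritative database
# recognises. Assumes CourtListener's public REST search endpoint and its
# documented "q"/"type" parameters and "count" response field; these may change.
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_found(case_name: str) -> bool:
    """Return True if the opinion search reports at least one hit for the name."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": case_name, "type": "o"},  # "o" = search court opinions
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# Citations from the Mata v. Avianca brief that turned out to be fabricated.
suspect_citations = [
    "Varghese v. China Southern Airlines",
    "Martinez v. Delta Air Lines",
    "Petersen v. Iran Air",
]

for citation in suspect_citations:
    verdict = "found" if citation_found(citation) else "NOT FOUND - verify by hand"
    print(f"{citation}: {verdict}")
```

Even a positive match only confirms that a case by that name exists in the database; confirming that the quoted passages and holdings actually appear in the opinion still requires a lawyer to read it.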


From a broader perspective, this incident sheds light on the wider challenges the legal profession faces in adopting new technologies. Like every other industry, the legal sector is undergoing a digital transformation, and steering through this shift is not always seamless. This case brings to the forefront the pitfalls and growing pains that legal professionals may encounter. Integrating AI tools into legal practice while upholding the integrity and accuracy of the profession demands a careful balancing act, calling for continual learning, adaptation and an in-depth understanding of these technologies.


Ultimately, the episode demonstrates that while the fusion of AI and law holds significant potential, it also carries its share of risks. As legal professionals continue to harness the potential of AI, they must concurrently learn to handle its limitations skilfully. That capability will critically determine how effectively the legal profession can innovate without compromising the fundamental values and standards that ground the practice of law.


On the availability of vetted AI tools for attorneys: the digital legal landscape already offers reliable products that leverage verified legal databases and abide by bar association rules and data protection regulations. These include contract analytics software such as eBrevia, eDiscovery platforms, contract management tools such as ContractWorks, case management software such as Clio, and matter management software such as MyCase, all of which deploy AI in a secure and purposeful manner that can be immensely beneficial to lawyers.


In this rapidly evolving field, there's much we can learn from incidents like this one. As we move forward, we must take these lessons to heart, fostering a responsible and effective approach to the use of AI in law, thereby shaping a future that is as much about justice and ethics as it is about innovation and progress.


The incident underscores the need for comprehensive training and education to ensure that legal professionals are proficient and cautious users of AI technology. It's crucial that those in the profession not only keep pace with the evolving landscape of legal technology, but also understand the potential consequences of misuse.


It will require the concerted efforts of all stakeholders to strike the right balance between technological innovation and maintaining the highest standards of legal practice.


The future of law lies not in choosing between AI and human practitioners, but in their harmonious collaboration.




