In a multimillion-dollar conspiracy trial that captivated both the entertainment and political worlds, a surprising player has emerged on the scene – artificial intelligence (AI). Prakazrel “Pras” Michel, a member of the iconic hip-hop group Fugees, is now arguing that his trial lawyer’s use of an “experimental” generative AI program was among the many blunders that marred his defense, ultimately leading to his conviction earlier this year.
Michel’s new legal team filed a motion for a new trial this week, contending that his previous lawyer, David Kenner, was “unqualified, unprepared, and ineffectual.” The generative AI program, developed by EyeLevel.AI, was intended to assist in crafting closing statements, but Michel alleges that it contributed to the failure of his defense. This landmark case has thrust AI into the legal realm, raising questions about its potential impact on the justice system.
Generative AI: The rising contender in legal arenas
Generative AI programs have the capability to produce realistic text, images, and video. Their growing prominence has triggered discussions about misinformation, copyright infringement, and the need for legislative regulations. AI tools, such as ChatGPT, have already reshaped industries like writing and education, and the Michel case could serve as a preview of challenges to come as this technology continues to advance.
This trial marked a historic moment as the first federal trial to incorporate generative AI. The startup behind the system heralded its significance, and Kenner himself dubbed it a “game changer for complex litigation.” However, the case took an unexpected turn during Kenner’s closing arguments, when he appeared to confuse critical details of the case, mistakenly attributing to the Fugees lyrics that actually belonged to rapper Diddy (formerly known as Puff Daddy).
Kenner’s apparent mishap in his final words to the jury raised questions about the AI program’s effectiveness. Michel’s new lawyer, Peter Zeidenberg, argued that the AI program had failed Kenner, resulting in a “deficient, unhelpful, and missed opportunity” that prejudiced the defense.
The company’s defense: AI as a legal tool, not a replacement
EyeLevel.AI, the company behind the generative AI program, vigorously defended its technology. Contrary to Michel’s claim that the program was “experimental,” the company asserted that it was trained exclusively on facts from the case, such as court transcripts, and did not draw on musical lyrics or online sources. Neil Katz, co-founder and COO of EyeLevel.AI, emphasized that the AI was designed to help human lawyers respond quickly to complex legal questions, not to replace them. Katz also vehemently denied allegations that Kenner had a financial interest in the program.
The implications for the legal field
The Michel case is poised to set a precedent for law firms as they increasingly adopt AI technology. Sharon Nelson, president of Sensei Enterprises, a digital forensics, cybersecurity, and information technology firm, noted that a significant number of firms have already integrated AI into their practices, and surveys suggest that more than 50% of lawyers anticipate doing so in the next year. She emphasized the rapid pace at which AI is transforming the legal profession, asserting, “If you don’t work with it, you’re going to be left behind.”
Michel was found guilty in April on all 10 charges, including conspiracy and acting as an unregistered agent of a foreign government, and faces a potential sentence of up to 20 years in prison. He awaits sentencing, which has not yet been scheduled. In his motion for a new trial, Zeidenberg argued that the AI program’s failure was just one aspect of a flawed defense, also citing issues with the jury hearing references to the “crime fraud exception” and “co-conspirators.”
The future of AI in the legal profession
While the use of generative AI in the legal field is still in its infancy, its adoption is expected to grow as AI products continue to improve. John Villasenor, a professor of engineering and public policy at the University of California, Los Angeles, commented on the evolving landscape, noting that the American Bar Association is currently studying the issue but has yet to establish guidelines for AI’s use in the legal profession.
Using AI for closing arguments presents unique challenges due to the dynamic nature of trials, and generative AI can occasionally produce inaccuracies, known as “hallucinations.” Villasenor advised that attorneys utilizing AI should exercise caution and diligently fact-check any content generated by the technology.
The ruling on the motion for a new trial remains pending, but the Michel case has undeniably catapulted generative AI into the legal spotlight, sparking debates about its role and impact in the pursuit of justice. As AI technology continues to evolve, its presence in the courtroom is likely to expand, challenging lawyers to adapt to this new frontier in legal practice.