Artificial intelligence is permeating every aspect of life, including the American legal system. But as the technology becomes more pervasive, the problem of AI-generated falsehoods, known as "hallucinations," remains.
These AI hallucinations are at the center of claims by former Fugees member Prakazrel "Pras" Michel, who has accused AI models created by EyeLevel of ruining his multi-million dollar fraud case, a claim that Neil Katz, co-founder and COO of EyeLevel, calls untrue.
In April, Michel was convicted on 10 counts in his conspiracy trial, including tampering with witnesses, falsifying documents, and acting as an unregistered foreign agent. Found guilty of acting as an agent for China, allegedly raising money to influence U.S. politicians, Michel could be sentenced to up to 20 years in prison.
"We were brought in by the attorneys for Pras Michel to do something unique, something that had never been done before," Katz told Decrypt in an interview.
According to a report by the Associated Press, during closing arguments, Michel's then-lawyer David Kenner incorrectly quoted lyrics from the Sean "Diddy" Combs song "I'll Be Missing You," falsely attributing the song to the Fugees.
As Katz explained, EyeLevel was tasked with building an AI trained on court records that would allow lawyers to ask complex questions in natural language about what happened during the trial. The system, he said, did not pull any other information from the internet.
Trials are known to generate large amounts of documentation. The criminal trial of FTX founder Sam Bankman-Fried is still ongoing, and hundreds of documents have already been produced. Separately, the bankrupt cryptocurrency exchange's bankruptcy case has over 3,300 documents, some spanning dozens of pages.
"This is an absolute game-changer for complex litigation," Kenner wrote in an EyeLevel blog post. "The system turned hours or days of legal work into seconds. This is the future of how litigation will be conducted."
On Monday, Michel's new defense attorney, Peter Zeidenberg, filed a motion, posted online by Reuters, for a new trial in the U.S. District Court for the District of Columbia.
"Kenner used an experimental AI program to write his closing argument, but the program presented frivolous arguments, conflated the schemes, and failed to highlight key weaknesses in the government's case," Zeidenberg wrote. He added that Michel is seeking a new trial because "numerous errors, many of them caused by ineffective trial counsel, undermine confidence in the verdict."
Katz disputed that claim.
"What they're saying didn't happen. This team has no knowledge whatsoever about artificial intelligence or our specific product," Katz told Decrypt. "Their claims are filled with misinformation. I wish they had used our AI software; they could have written the truth."
Michel's attorneys have not yet responded to Decrypt's request for comment. Katz also disputed claims that Kenner has a financial interest in EyeLevel, saying the company was hired to assist Michel's legal team.
"The accusations in the filing that David Kenner and his associates have any secret financial interest in our company are categorically false," Katz told Decrypt. "Kenner wrote a very positive review of our software's performance because that's how he felt. He wasn't paid for it, and he wasn't given any stock."
Launched in 2019 and based in Berkeley, EyeLevel develops generative AI models for consumers (EyeLevel for CX) and legal professionals (EyeLevel for Law). As Katz explained, EyeLevel was one of the first developers to work with ChatGPT creator OpenAI, and the company aims to provide truthful, robust AI tools that don't hallucinate for people who can't afford to pay for a large legal team.
Generative AI models are typically trained on large datasets collected from a variety of sources, including the internet. What makes EyeLevel's AI different, Katz said, is that the model is trained solely on court documents.
"The [AI] was trained solely on the transcripts, solely on the facts presented in court by both sides and what the judge said," Katz said. "So when you ask this AI questions, it will give you only factual answers based on what happened, without hallucinations."
However AI models are trained, experts warn of the programs' propensity to lie or hallucinate. In April, ChatGPT falsely accused American criminal defense attorney Jonathan Turley of sexual assault. The chatbot even went so far as to cite a fake Washington Post article to prove its claim.
OpenAI has invested heavily in combating AI hallucinations, and has brought in third-party red teams to test its suite of AI tools.
"When users sign up to use the tool, we strive to be as transparent as possible that ChatGPT may not always be accurate," OpenAI says on its website. "However, we recognize that there is much more work to do to further reduce the likelihood of hallucinations and to educate the public on the current limitations of these AI tools."
Edited by Stacy Elliott and Andrew Hayward.