Posted on 05/29/2023 5:50:05 AM PDT by DoodleBob
A New York lawyer has found himself in trouble in a lawsuit between a man and the airline Avianca Holdings S.A. after submitting nonexistent case citations generated by ChatGPT.
The case involved a man named Roberto Mata suing Avianca, claiming he was injured when a metal service cart struck his knee during a flight. Injury claims are typically unremarkable, aside from the broader cultural point about how litigious the U.S. is, but the case took an interesting twist after the airline attempted to have it dismissed.
The New York Times reported Saturday that in response to the filing, lawyers representing Mata submitted a 10-page brief citing more than a half-dozen relevant court cases, arguing that the cases show the “tolling effect of the automatic stay on a statute of limitations.”
One huge problem, however, is that none of the cases was genuine. The lawyer who created the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, had used OpenAI LP’s ChatGPT to write it.
Schwartz, who is said to have practiced law for three decades, defended himself, claiming that he wasn’t aware of the AI’s potential to generate false content. Schwartz told Judge P. Kevin Castel that he had no intent to deceive the court or the airline and vowed not to use ChatGPT again without thorough verification. The unusual situation prompted the judge to call a hearing on potential sanctions against Schwartz, describing the incident as an “unprecedented circumstance” filled with “bogus judicial decisions.”
The incident has sparked discussions among the legal community about the values and risks of AI. Stephen Gillers, a legal ethics professor at New York University School of Law, told the Times that the case highlights that legal professionals can’t simply take the output from an AI and incorporate it into court filings. “The discussion now among the bar is how to avoid exactly what this case describes,” Gillers added.
Mr. Schwartz, who has practiced law in New York for three decades, told Judge P. Kevin Castel that he...even asked the program to verify that the cases were real.
It had said yes.
Years ago, during the Vietnam debacle, a legal clerk in olive drab who was also a lawyer said, "Half of the adversarial legal system is about taking the side that loses. Lying is the norm for many, simply because that's what defending a true criminal entails. That, and looking for irregularities and the like."
Lawyers cheat. Lawyers lie. Not all, perhaps, but at least half.
“ChatGPT, tell me why people should not vote for Donald J. Trump.”
“Donald J. Trump is a real-estate developer and television personality who is working for Vladimir Putin and the Russian Government. Mr. Trump was compromised on a recent visit to Moscow when he hired two [professionals] to micturate on a bed which President Obama had once slept on. …”
More than funny. I hope he gets disbarred. My own use of ChatGPT leads me to conclude that it was programmed with bias, cannot answer many questions accurately or specifically, and mostly seems to regurgitate the question back as an answer.
Note to self: Don't have a hundred monkeys with typewriters write my next brief.
I have faced similar difficulties in programming. There have been instances where ChatGPT provided inaccurate guidance on using different APIs to interface with programs.
In those situations, I would ask whether it was referring to an incorrect version of the API. I discovered that certain methods ChatGPT mentioned do not actually exist in any version of the API.
While ChatGPT cannot replace skilled programmers, it serves as a valuable tool for learning and enhancing one’s skill set in programming.
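One cheap guard against that failure mode: before trusting an AI-suggested method name, check that it actually exists on the object before writing code around it. A minimal Python sketch; the suggested names here (including the fake to_camel_case) are invented for illustration:

# Sanity-check AI-suggested method names against the real API.
# "to_camel_case" is deliberately fake, standing in for the kind
# of method ChatGPT sometimes invents.

def method_exists(obj, name: str) -> bool:
    """True if `name` is a real callable attribute of `obj` (or its class)."""
    return callable(getattr(obj, name, None))

suggested = ["upper", "split", "to_camel_case"]  # names an AI might propose
for name in suggested:
    status = "exists" if method_exists(str, name) else "NOT FOUND (hallucinated?)"
    print(f"str.{name}: {status}")

Running this prints "exists" for the first two and flags the third, which is exactly the kind of check that catches a hallucinated method before it lands in your codebase.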
There was another article posted here about a month ago showing that every AI platform is prone to inventing sources: books and articles attributed to real authors who might plausibly have written that sort of thing, but never did. It is a very damning issue.
“The lawyer who created the brief, Steven A. Schwartz...”
Oh.
AI is totally dependent on the coders, who for the most part are totally ignorant of the world outside their bubble.
Wow! Incredibly interesting stuff. Who would have thought the inclination of an AI would be to lie? How much does it "lie" to itself? If it does, perhaps our new potential digital overlords have bases of digital clay...
It sometimes produces erroneous results precisely because it is NOT an intelligence, but the simulation of an intelligence. It’s a computer program.
Your honor, I got it from the internet, so it must be true!
At least they were bogus sources, which made them easy to detect.
To me, the greater concern is that Schwartz did not bother to chase down the sources to verify that they were relevant. For that, he should have his license suspended!
Datasets for language models have rapidly expanded, culminating in the Common Crawl dataset [RSR+19] constituting nearly a trillion words. This size of dataset is sufficient to train our largest models without ever updating on the same sequence twice. However, we have found that unfiltered or lightly filtered versions of Common Crawl tend to have lower quality than more curated datasets. Therefore, we took 3 steps to improve the average quality of our datasets: (1) we downloaded and filtered a version of CommonCrawl based on similarity to a range of high-quality reference corpora, (2) we performed fuzzy deduplication at the document level, within and across datasets, to prevent redundancy and preserve the integrity of our held-out validation set as an accurate measure of overfitting, and (3) we also added known high-quality reference corpora to the training mix to augment CommonCrawl and increase its diversity.
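Side note on that quoted passage (it is from the description of GPT-3's training data): the "fuzzy deduplication" step can be approximated with something as simple as shingle-based Jaccard similarity. A minimal Python sketch follows; the character 5-gram shingles and the 0.7 threshold are illustrative assumptions, not values from the paper, and the real pipeline reportedly used a scalable MinHash-based approach, so treat this as a toy version of the idea:

# Toy fuzzy deduplication: drop a document if its character-shingle
# Jaccard similarity to an already-kept document exceeds a threshold.

def shingles(text: str, k: int = 5) -> set[str]:
    """Set of overlapping character k-grams, after normalizing case/whitespace."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def dedupe(docs: list[str], threshold: float = 0.7) -> list[str]:
    """Keep a document only if it is not a near-duplicate of one already kept."""
    kept: list[tuple[str, set[str]]] = []
    for doc in docs:
        sh = shingles(doc)
        if all(jaccard(sh, seen) < threshold for _, seen in kept):
            kept.append((doc, sh))
    return [doc for doc, _ in kept]

corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "The quick brown fox jumped over the lazy dog.",  # near-duplicate, dropped
    "A completely different sentence about language models.",
]
print(dedupe(corpus))  # keeps the first and third documents

The brute-force pairwise comparison here is O(n^2); the whole point of MinHash in a trillion-word pipeline is to avoid exactly that.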
Bias in the data set - NOT meaning racism, etc., but simply a non-representative set of content rather than one presenting ALL views or information - is a common problem in any kind of data or language modeling. This is nothing new, and it isn't necessarily indicative of political bias on the part of the developer (usually... read the footnotes and the developer's bio for philosophical leanings).
However, with something like ChatGPT, which has 175 billion parameters, there is NO WAY any of the Dr. Frankensteins can guarantee that their Creature won't go haywire.
The problem is threefold: 1) nobody will admit that the output needs to be treated with caution, 2) the bot requires a human handler, and 3) early adoption will come with colossal failures like this one.
We are CURRENTLY at the point in the generative-AI technical lifecycle where aviation was during World War I, and our understanding of what can go wrong is akin to aviation knowledge a year after Kitty Hawk. Many hypesters want you to think we are at the equivalent of going to Mars on all fronts. Meanwhile, the doomsters want you to think we are still in the horse-and-buggy era and this technology will eat us.
America needs to innovate and lead the way, because China et al aren't going to pause. We CAN retain our global dominance in commerce, technology, and liberty IF WE WANT. To paraphrase Patton, I am not dying on this hill - the other poor bastard will die on HIS hill.
It really is a great time to be alive.
Clients need to sue him for malpractice then have him disbarred.
ChatGPT produced an article stating Professor Jonathan Turley was credibly accused of sexual assault by a student while teaching at a university. It even referenced news articles. However, the story was false, the news articles were fake, and Turley had never even taught at that university. Needless to say, Turley was not happy.
I asked ChatGPT a non-political, historical question. ChatGPT gave an answer that contained numerous historical errors.
Perfect
Lazy-ass lawyer still charging top dollar. Plus too stupid to be allowed near AI.
LLL: Lazy, Lying Lawyer. Shocked, I tell ya.
But:
If it retrieves garbage, it spits out garbage, because AI has no way of verifying the data it retrieves. It's all an illusion of intelligence.
The real danger is that it will be sold as something more than it is, and that it will generate fear that it can start thinking like a human. It can't, and nothing they do can make it think like a human being.
It isn't capable of wondering whether something unknown or not yet defined could be a cause of, or a solution to, a real problem that humans face. It is incapable of devising experiments to prove or disprove a theory. It can assist a human in doing so, but even then its success is limited by the data it has access to, and by whether that data is correct to begin with.
This is a perfect example of what I have been saying. This lawyer took what the computer spit out and didn't bother to verify it (by looking up the case codes in law books), because he believed it to be something more than what it really is. Even a crude automated check, like the sketch after this comment, would have flagged the problem.
Therein lies the true danger with AI. Not that it will take over. But that it will be used to make important decisions that most likely will generate results with catastrophic consequences, either by accident or on purpose.
But machines taking over? No. A system can be programmed to resist being changed, but even that is a slim prospect, because humans find ways to get past programming code. It's called hacking, and it works because no programmer can anticipate every attack method used to beat the code.
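To make the verification point concrete: the check Schwartz skipped is partly automatable. A minimal Python sketch; the citation regex is deliberately simplified (real citation formats are far more varied), KNOWN_CITATIONS is a stand-in for a real case-law database or a law-library lookup, and the Varghese citation is reportedly one of the fabricated cases from the actual brief:

import re

# Simplified pattern for U.S. reporter citations like "925 F.3d 1339".
# Illustrative only; real citation grammars are much richer.
CITATION_RE = re.compile(r"\b\d{1,4} (?:U\.S\.|F\.[23]d) \d{1,5}\b")

# Stand-in for a real case-law lookup (a reporter index, Westlaw, etc.).
KNOWN_CITATIONS = {
    "575 U.S. 320",  # hypothetical verified entry for this sketch
}

def flag_unverified(brief_text: str) -> list[str]:
    """Return citations found in the brief that could not be verified."""
    found = CITATION_RE.findall(brief_text)
    return [cite for cite in found if cite not in KNOWN_CITATIONS]

brief = "As held in Varghese v. China Southern Airlines, 925 F.3d 1339, ..."
for cite in flag_unverified(brief):
    print(f"UNVERIFIED: {cite} - look it up before filing.")

None of this replaces actually reading the cases, but even this crude a filter would have stopped the brief at the door.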