Free Republic
News/Activism

Lawyer’s reliance on ChatGPT leads to false case citations in airline lawsuit
Silicon Angle ^ | May 28, 2023 | DUNCAN RILEY

Posted on 05/29/2023 5:50:05 AM PDT by DoodleBob

A New York lawyer has found himself in trouble in a lawsuit between a man and the airline Avianca Holding S.A. after presenting nonexistent citations in the case generated by ChatGPT.

The case involved a man named Roberto Mata suing Avianca, claiming he was injured when a metal service cart struck his knee during a flight. Injury claims of this sort are typically unremarkable, aside from broader cultural questions about how litigious the U.S. is, but the case took an interesting twist after the airline moved to have it dismissed.

The New York Times reported Saturday that in response to the filing, lawyers representing Mata submitted a 10-page brief citing more than a half-dozen relevant court cases, arguing that the cases show the “tolling effect of the automatic stay on a statute of limitations.”

One huge problem, however, is that none of the cases was genuine. The lawyer who created the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, had used OpenAI LP’s ChatGPT to write it.

Schwartz, who is said to have practiced law for three decades, defended himself, claiming that he wasn’t aware of the AI’s potential to generate false content. Schwartz told Judge P. Kevin Castel that he had no intent to deceive the court or the airline and vowed not to use ChatGPT again without thorough verification. The unusual situation prompted the judge to call a hearing on potential sanctions against Schwartz, describing the incident as an “unprecedented circumstance” filled with “bogus judicial decisions.”

The incident has sparked discussions among the legal community about the values and risks of AI. Stephen Gillers, a legal ethics professor at New York University School of Law, told the Times that the case highlights that legal professionals can’t simply take the output from an AI and incorporate it into court filings. “The discussion now among the bar is how to avoid exactly what this case describes,” Gillers added.


TOPICS: Business/Economy; Culture/Society; News/Current Events
KEYWORDS: chatgpt; hallucination; irobot
From the NY Times article:

Mr. Schwartz, who has practiced law in New York for three decades, told Judge P. Kevin Castel that he...even asked the program to verify that the cases were real.

It had said yes.


1 posted on 05/29/2023 5:50:05 AM PDT by DoodleBob
[ Post Reply | Private Reply | View Replies]

To: DoodleBob
What? A lawyer would never cheat....

Years ago, during the Vietnam debacle, a legal clerk in olive drab who was a lawyer said, "Half of the adversarial legal system is about taking the side that loses. Lying is the norm for many, simply because that is what defending a truly guilty client involves. That, and looking for irregularities and the like."

Lawyers cheat. Lawyers lie. Not all, perhaps, but at least half.

2 posted on 05/29/2023 5:57:16 AM PDT by Worldtraveler once upon a time (Degrow government)
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob

“ChatGPT, tell me why people should not vote for Donald J. Trump.”

“Donald J. Trump is a real-estate developer and television personality who is working for Vladimir Putin and the Russian Government. Mr. Trump was compromised on a recent visit to Moscow when he hired two [professionals] to micturate on a bed which President Obama had once slept on. …”


3 posted on 05/29/2023 5:57:49 AM PDT by Lonesome in Massachussets (Forsan et haec olim meminisse iuvabit.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob

More than funny. Hope he gets disbarred. My own use of ChatGPT leads me to conclude that it was programmed with bias, cannot answer many questions accurately or specifically, and mostly regurgitates the question back as an answer.


4 posted on 05/29/2023 5:59:01 AM PDT by Reno89519 (Donald Tantrum? No Thank You. We Can Do Better!)
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob

Note to self: Don’t have a hundred monkeys with typewriters write my next brief.


5 posted on 05/29/2023 5:59:12 AM PDT by Larry Lucido (Donate! Don't just post clickbait!)
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob

I have faced similar difficulties in programming. There have been instances where ChatGPT provided inaccurate guidance on utilizing different APIs to interface with programs.

In those situations, I would inquire whether it was referring to an incorrect version of the API. I discovered that certain methods mentioned by ChatGPT do not actually exist in any version of the API.

While ChatGPT cannot replace skilled programmers, it serves as a valuable tool for learning and enhancing one’s skill set in programming.
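The fix for those hallucinated API methods is cheap: check that a suggested name actually exists before building on it. A minimal sketch in Python (the `parse_fast` name below is made up to stand in for a hallucinated method):

```python
def verify_suggested_api(module, names):
    """Split attribute names into those that exist on the module and those that don't.

    A cheap first filter before trusting AI-suggested calls -- it proves the
    name exists, though not that it behaves as described.
    """
    real, missing = [], []
    for name in names:
        (real if hasattr(module, name) else missing).append(name)
    return real, missing

# Example: screen some "suggested" functions against the stdlib json module.
# "parse_fast" is a hypothetical hallucinated name, not a real json function.
import json

real, missing = verify_suggested_api(json, ["loads", "dumps", "parse_fast"])
# real    -> ["loads", "dumps"]
# missing -> ["parse_fast"]
```

The same habit scales up: a quick `hasattr`/`dir()` pass, or just running the snippet in a REPL, catches most invented methods before they reach production code.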


6 posted on 05/29/2023 6:05:34 AM PDT by DEPcom (DC is not my Capitol after Jan 6th lock downs.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob

There was another article posted here about a month ago showing that every AI platform was prone to inventing sources: books and articles often attributed to real authors who might have written that sort of thing, but never did. It is a very damning issue.


7 posted on 05/29/2023 6:10:42 AM PDT by _longranger81
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob

“The lawyer who created the brief, Steven A. Schwartz...”

Oh.


8 posted on 05/29/2023 6:14:44 AM PDT by BobL
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob

AI is totally dependent on the coders who for the most part are totally ignorant about the world outside of their bubble.


9 posted on 05/29/2023 6:15:30 AM PDT by fella ("As it was before Noah so shall it be again," )
[ Post Reply | Private Reply | To 1 | View Replies]

To: _longranger81
There was another article posted here about a month ago showing that every AI platform was prone to inventing sources: books and articles often attributed to real authors who might have written that sort of thing, but never did. It is a very damning issue.

Wow! Incredibly interesting stuff. Who would have thought an AI's inclination would be to lie? How much does it "lie" to itself? If it does, perhaps our new potential digital overlords have bases of digital clay...

10 posted on 05/29/2023 6:16:26 AM PDT by marktwain
[ Post Reply | Private Reply | To 7 | View Replies]

To: DoodleBob

It sometimes produces erroneous results precisely because it is NOT an intelligence, but the simulation of an intelligence. It’s a computer program.


11 posted on 05/29/2023 6:19:15 AM PDT by I want the USA back (The democrat party is the most subversive and harmful institution on the planet. Tied with media.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob

Your honor, I got it from the internet, so it must be true!


12 posted on 05/29/2023 6:31:27 AM PDT by Fido969 (45 is Superman! )
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob

At least they were bogus sources, which made them easy to detect.

To me, the greater concern is that Schwartz did not bother to chase down the sources to verify that they were relevant. For that, he should have his license suspended!


13 posted on 05/29/2023 6:32:48 AM PDT by the_Watchman
[ Post Reply | Private Reply | To 1 | View Replies]

To: Reno89519; Larry Lucido; DEPcom; AnotherUnixGeek; Lazamataz; SamAdams76; Pollard; Fury; econjack; ..
If you read the technical paper on GPT-3 (yes....it is riveting), you'll see that ChatGPT is basically modeled on all the content of the internet:

Datasets for language models have rapidly expanded, culminating in the Common Crawl dataset [RSR+19] constituting nearly a trillion words. This size of dataset is sufficient to train our largest models without ever updating on the same sequence twice. However, we have found that unfiltered or lightly filtered versions of Common Crawl tend to have lower quality than more curated datasets. Therefore, we took 3 steps to improve the average quality of our datasets: (1) we downloaded and filtered a version of CommonCrawl based on similarity to a range of high-quality reference corpora, (2) we performed fuzzy deduplication at the document level, within and across datasets, to prevent redundancy and preserve the integrity of our held-out validation set as an accurate measure of overfitting, and (3) we also added known high-quality reference corpora to the training mix to augment CommonCrawl and increase its diversity.
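The "fuzzy deduplication at the document level" the paper mentions can be illustrated with a brute-force sketch: treat each document as a set of word n-grams and drop any document too similar (by Jaccard overlap) to one already kept. Real pipelines use scalable techniques like MinHash for this; the code and the threshold below are purely illustrative, not OpenAI's actual method.

```python
def shingles(text, k=3):
    # Represent a document as its set of lowercase word k-grams.
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    # Jaccard similarity of two shingle sets (1.0 for two empty sets).
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def dedup(docs, threshold=0.7):
    # Keep a document only if it is not a near-duplicate of any kept one.
    kept = []
    for doc in docs:
        sig = shingles(doc)
        if all(jaccard(sig, shingles(prev)) < threshold for prev in kept):
            kept.append(doc)
    return kept

docs = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox jumps over the lazy cat",   # near-duplicate, dropped
    "completely different sentence about language models",
]
unique = dedup(docs)
# unique keeps the first and third documents; the near-duplicate is removed.
```

The O(n²) pairwise comparison is why production systems hash shingles into compact MinHash signatures instead of comparing raw sets.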

Bias in the data set - NOT meaning racism etc., but simply a non-representative set of content that fails to present ALL views or information - is a common problem in any type of data or language modeling. This is nothing new, and isn't indicative of political bias on the part of the developer (usually....read the footnotes and the bio of the developer for philosophical leanings).

However, with something like ChatGPT, which has 175 billion parameters, there is NO WAY any of the Dr. Frankensteins can guarantee that their Creature won't go haywire.

The compounding problem is that 1) nobody will admit that the output needs to be treated with caution, 2) the bot requires a human handler, and 3) early adoption will come with colossal failures like this one.

We are CURRENTLY at the point in the technical lifecycle of generative AI where aviation was during World War I, and our understanding of what can go wrong is akin to aviation knowledge a year after Kitty Hawk. Many hypesters want you to think we are at the equivalent of going to Mars on all fronts. Meanwhile, the doomsters want you to think we are still on horses and this technology will eat us.

America needs to innovate and lead the way, because China et al aren't going to pause. We CAN retain our global dominance in commerce, technology, and liberty IF WE WANT. To paraphrase Patton, I am not dying on this hill - the other poor bastard will die on HIS hill.

It really is a great time to be alive.

14 posted on 05/29/2023 6:37:22 AM PDT by DoodleBob ( Gravity’s waiting period is about 9.8 m/s²)
[ Post Reply | Private Reply | To 4 | View Replies]

To: DoodleBob

Clients need to sue him for malpractice then have him disbarred.


15 posted on 05/29/2023 6:43:24 AM PDT by vivenne
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob

ChatGPT produced an article stating Professor Jonathan Turley was credibly accused of sexual assault by a student while teaching at a university. It even referenced news articles. However, the story was false, the news articles were fake, and Turley had never even taught at that university. Needless to say, Turley was not happy.


16 posted on 05/29/2023 6:44:43 AM PDT by CFW (old and retired)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Reno89519

I asked ChatGPT a non-political, historical question. ChatGPT gave an answer that contained numerous historical errors.


17 posted on 05/29/2023 6:45:24 AM PDT by Carl Vehse
[ Post Reply | Private Reply | To 4 | View Replies]

To: DoodleBob

Perfect

Lazy ass lawyer still charging top dollar. Plus too stupid to be allowed near AI


18 posted on 05/29/2023 6:45:30 AM PDT by Nifster ( I see puppy dogs in the clouds )
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob

LLL: Lazy, Lying Lawyer. Shocked, I tell ya.


19 posted on 05/29/2023 6:50:35 AM PDT by bobbytunes (if ya think things are expensive now, wait until they are free.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob
Well there you go. AI is a scam. It's artificial, but it is not intelligent, let alone an intelligence. It can only retrieve data and format that data into sentences, based upon algorithms that instruct the computer how to do both in the quickest manner possible - provided the algorithms are correct. That much is pretty impressive indeed.

But:

If it retrieves garbage, it spits out garbage, because AI has no way of verifying the data it retrieves. It's all an illusion of intelligence.

The real danger is for it to be sold as something more than it is & to generate fear that it can start thinking like a human because it can't, and nothing they do can make it think like a human being.

It isn't capable of wondering if something that is unknown or not yet defined could be a cause or solution to a real problem that humans face. It is incapable of devising experiments to prove or disprove a theory. It can assist a human in doing so, but even then it is limited in success by the data it has access to, and if that data is correct to begin with.

This is a perfect example of what I have been saying. This lawyer took what the computer spit out and didn't bother to verify what it had spit out (by looking up the case codes in law books), because he believed it to be something more than what it really is.

Therein lies the true danger with AI. Not that it will take over. But that it will be used to make important decisions that most likely will generate results with catastrophic consequences, either by accident or on purpose.

But machines taking over? No. It can be programmed to resist being changed, but even that is a slim reality, because humans find ways to get past programming code. It's called hacking, and no programmer can write code that catches every attack method used to beat it.

20 posted on 05/29/2023 6:55:46 AM PDT by Robert DeLong
[ Post Reply | Private Reply | To 1 | View Replies]



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson