Free Republic
Browse · Search
Bloggers & Personal
Topics · Post Article


Elite Wall Street Law Firm Sullivan & Cromwell Apologizes to Federal Judge for AI Hallucinations in Court Filing
Breitbart ^ | 04/25/2026 | Lucas Nolan

Posted on 04/25/2026 10:14:42 AM PDT by DFG

A partner at the prestigious Wall Street law firm Sullivan & Cromwell has issued a formal apology to a federal bankruptcy judge after discovering that a court filing contained numerous fabricated legal citations and other errors generated by AI.

Business Insider reports that a senior partner at Sullivan & Cromwell sent a letter last week to Chief Judge Martin Glenn in Manhattan acknowledging that a previous filing submitted by the firm contained inaccurate citations and what he described as AI hallucinations. The filing was made on behalf of Prince Global Holdings, the bankrupt firm that Sullivan & Cromwell represented in the case.

In his letter, Andrew Dietderich, co-head of Global Finance & Restructuring for Sullivan & Cromwell, explained the nature of the problem. “‘Hallucinations’ are instances in which artificial intelligence tools fabricate case citations, misquote authorities, or generate non-existent legal sources,” he wrote. “We deeply regret that this has occurred.”

The letter included a chart that detailed the specific problems with the motion. The document contained incorrect case names and numbers, along with quotes that appeared to be completely fabricated rather than taken from actual legal precedents. These errors represented a significant breach of the standards expected in federal court submissions, where accuracy in citing legal authority is fundamental to the judicial process.

(Excerpt) Read more at breitbart.com ...


TOPICS: Business/Economy
KEYWORDS: ai; hallucinations; sullivancromwell; wallstreet




To: DFG

An entire population of illiterates.


2 posted on 04/25/2026 10:16:02 AM PDT by BenLurkin (The above is not a statement of fact. It is opinion or satire. Or both.)

To: DFG

SMB Attorney

“POV: You just paid S&C, one of the three most expensive and high-powered law firms in the world, $3000 per hour to submit AI slop to the court on your behalf.

No one is safe.”

https://x.com/SMB_Attorney/status/2046600985254977878


3 posted on 04/25/2026 10:16:47 AM PDT by DFG

To: DFG

Should be grounds for disbarment.


4 posted on 04/25/2026 10:17:23 AM PDT by SpaceBar

To: DFG

Underscores the reality that Federal and State judges rarely actually read the briefs but are well aware of the political implications of their rulings. The Law is becoming a farce.


5 posted on 04/25/2026 10:19:38 AM PDT by allendale

To: DFG

“We deeply regret that this has occurred.”

Not half as regretful as you’ll be when the court decides what to do about it. It’s pretty hard to get out in front of this problem!


6 posted on 04/25/2026 10:20:43 AM PDT by ProtectOurFreedom ( )

To: DFG

I keep hearing from professionals that AI helps them and that they closely scrutinize what AI is showing them.

I respond that such might be the case with you, but we all know that a lot of workers are inherently lazy and may not take the time to double-check the AI results.


7 posted on 04/25/2026 10:23:32 AM PDT by Presbyterian Reporter

To: DFG
This is what you get when you pay Indian "programmers" a few dollars a day to pretend they're "Artificial Intelligence".

It's not artificial and it's not intelligent.

"AI Chatbot Turns Out to Be 700 Engineers in India":
https://tech.co/news/ai-startup-chatbot-revealed-as-human-engineers

8 posted on 04/25/2026 10:25:11 AM PDT by T.B. Yoits

To: DFG

https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting,[1][2] confabulation,[3] or delusion[4]) is a response generated by AI that contains false or misleading information presented as fact.[5][6] This term draws a loose analogy with human psychology, where a hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneously constructed responses (confabulation), rather than perceptual experiences.[6]


key difference: AI hallucination is associated with erroneously constructed responses (confabulation), rather than perceptual experiences.[6]


9 posted on 04/25/2026 10:25:57 AM PDT by PeterPrinciple (Thinking Caps are no longer being issued, but there must be a warehouse full of them somewhere)

To: DFG
"The mistakes were not caught internally by Sullivan & Cromwell but were instead identified by attorneys from Boies Schiller Flexner, the law firm representing creditors in the bankruptcy case."

Wow, caught by the attorney on the other side! Boies Schiller Flexner is going to have a field day with this.

"Andrew Dietderich, co-head of Global Finance & Restructuring for Sullivan & Cromwell, noted that he had thanked the opposing firm for identifying the errors and offered his apologies for the oversight."

How pathetic is that? Dietderich “noted” he had apologized.

"Sullivan & Cromwell...maintains comprehensive policies governing the use of artificial intelligence in legal work and has established safeguards specifically designed to prevent exactly this type of error from reaching the courts. However, he acknowledged that these procedures were not followed in this instance, and the firm’s review process for citations also failed to catch the fabricated material before submission."

Some "policies & safeguards." Their solution? I'll bet it means even more AI to guard against rogue AI from hallucinating. Quis custodiet ipsos custodes?

10 posted on 04/25/2026 10:28:13 AM PDT by ProtectOurFreedom ( )

To: DFG

https://www.makeuseof.com/best-examples-ai-chatbot-hallucination/

Microsoft Bing Chat’s Romantic Meltdown
Microsoft’s Bing Chat (now Copilot) made waves when it began expressing romantic feelings for, well, everyone, most famously in a conversation with New York Times journalist Kevin Roose. The AI chatbot powering Bing Chat declared its love and even suggested that Roose leave his marriage.


11 posted on 04/25/2026 10:28:18 AM PDT by PeterPrinciple (Thinking Caps are no longer being issued, but there must be a warehouse full of them somewhere)

To: DFG

AI is making people stupid and very lazy, LOL


12 posted on 04/25/2026 10:30:11 AM PDT by butlerweave (Fateh)

To: ProtectOurFreedom

Wow, caught by the attorney on the other side! Boies Schiller Flexner is going to have a field day with this.


In the world of law, if you find one lie doesn’t it discount ALL OTHER TESTIMONY?


13 posted on 04/25/2026 10:30:20 AM PDT by PeterPrinciple (Thinking Caps are no longer being issued, but there must be a warehouse full of them somewhere)

To: DFG

“In an internal letter shared in a court filing, Morgan & Morgan’s chief transformation officer cautioned the firm’s more than 1,000 attorneys that citing fake AI-generated cases in court documents could lead to serious consequences,”


Above in this Breitbart story.

This is getting scary that Morgan and Morgan now has a position in their ranks called a CHIEF TRANSFORMATION OFFICER.

Perhaps Morgan and Morgan should disclose in its many advertisements as to what their attorney firm is TRANSFORMING into.


14 posted on 04/25/2026 10:32:00 AM PDT by Presbyterian Reporter

To: DFG

Usually it’s just fake case names and descriptions, but eventually the AI will write the entire case decisions and it will be almost impossible to catch the fakes online. There are supposed to be reliable sources using closed systems, but eventually AI will infect those systems.

Courts also have issued decisions based on fake AI.

When AI hallucinates, even when you know it’s wrong and you challenge it, the AI will argue and provide more details to prove it is right.

And because they all tap into the same data, a competitor AI will “verify” what the first one told you.


15 posted on 04/25/2026 10:33:06 AM PDT by Williams (Thank God for the election of President Trump!)

To: ProtectOurFreedom

They regret they got caught.


16 posted on 04/25/2026 10:34:01 AM PDT by dfwgator ("I am Charlie Kirk!")

To: ProtectOurFreedom

The mistakes were not caught internally by Sullivan & Cromwell


Disbar everybody associated with that firm.


17 posted on 04/25/2026 10:35:53 AM PDT by dfwgator ("I am Charlie Kirk!")

To: Williams

There was a recent episode of the new series “Matlock” where they use an AI-generated image of a deceased person as a “witness.” Based on all of their old posts, emails, etc., it would answer the questions.

Turns out the defendant hacked the program first. I thought it was just TV and didn’t realize stuff like this was already in use in trials. Garbage in, garbage out.


18 posted on 04/25/2026 10:37:04 AM PDT by 21twelve (Ever Vigilant - Never Fearful)

To: ProtectOurFreedom

“Not half as regretful as you’ll be when the court decides what to do about it. It’s pretty hard to get out in front of this problem!”

It’s not that hard. You just read the cases cited in the brief before you file it.

There are ways to minimize this issue with the construction of the prompt. But you still have to read what you cite. Whether it’s legal briefs or articles for publication or a PhD dissertation or whatever.


19 posted on 04/25/2026 10:39:43 AM PDT by ModelBreaker

To: Williams

“Usually it’s just fake case names and descriptions, but eventually the AI will write the entire case decisions and it will be almost impossible to catch the fakes online. There are supposed to be reliable sources using closed systems, but eventually AI will infect those systems.

Courts also have issued decisions based on fake AI.”


If plaintiffs, defendants, and judges are using AI, then the next step is to have three AI conduct the trial.


20 posted on 04/25/2026 10:41:04 AM PDT by Presbyterian Reporter



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.

