Posted on 04/25/2026 10:14:42 AM PDT by DFG
A partner at the prestigious Wall Street law firm Sullivan & Cromwell has issued a formal apology to a federal bankruptcy judge after discovering that a court filing contained numerous fabricated legal citations and other errors generated by AI.
Business Insider reports that a senior partner at Sullivan & Cromwell sent a letter last week to Chief Judge Martin Glenn in Manhattan acknowledging that a previous filing submitted by the firm contained inaccurate citations and what he described as AI hallucinations. The filing was made on behalf of Prince Global Holdings, the bankrupt firm that Sullivan & Cromwell represented in the case.
In his letter, Andrew Dietderich, co-head of Global Finance & Restructuring for Sullivan & Cromwell, explained the nature of the problem. “‘Hallucinations’ are instances in which artificial intelligence tools fabricate case citations, misquote authorities, or generate non-existent legal sources,” he wrote. “We deeply regret that this has occurred.”
The letter included a chart that detailed the specific problems with the motion. The document contained incorrect case names and numbers, along with quotes that appeared to be completely fabricated rather than taken from actual legal precedents. These errors represented a significant breach of the standards expected in federal court submissions, where accuracy in citing legal authority is fundamental to the judicial process.
(Excerpt) Read more at breitbart.com ...
An entire population of illiterates.
SMB Attorney
“POV: You just paid S&C, one of the three most expensive and high-powered law firms in the world, $3000 per hour to submit AI slop to the court on your behalf.
No one is safe.”
https://x.com/SMB_Attorney/status/2046600985254977878
Should be grounds for disbarment.
Underscores the reality that Federal and State judges rarely actually read the briefs but are well aware of the political implications of their rulings. The Law is becoming a farce.
“We deeply regret that this has occurred.”
Not half as regretful as you’ll be when the court decides what to do about it. It’s pretty hard to get out in front of this problem!
I keep hearing from professionals that AI helps them and that they closely scrutinize what AI is showing them.
I respond that such might be the case with you, but we all know that a lot of workers are inherently lazy and may not take the time to double-check the AI results.
It's not artificial and it's not intelligent.
"AI Chatbot Turns Out to Be 700 Engineers in India":
https://tech.co/news/ai-startup-chatbot-revealed-as-human-engineers
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting,[1][2] confabulation,[3] or delusion[4]) is a response generated by AI that contains false or misleading information presented as fact.[5][6] This term draws a loose analogy with human psychology, where a hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneously constructed responses (confabulation), rather than perceptual experiences.[6]
Wow, caught by the attorney on the other side! Boies Schiller Flexner is going to have a field day with this.
"Andrew Dietderich, co-head of Global Finance & Restructuring for Sullivan & Cromwell, noted that he had thanked the opposing firm for identifying the errors and offered his apologies for the oversight."
How pathetic is that. Dietderich "noted" he had apologized.
"Sullivan & Cromwell...maintains comprehensive policies governing the use of artificial intelligence in legal work and has established safeguards specifically designed to prevent exactly this type of error from reaching the courts. However, he acknowledged that these procedures were not followed in this instance, and the firm’s review process for citations also failed to catch the fabricated material before submission."
Some "policies & safeguards." Their solution? I'll bet it means even more AI to guard against rogue AI from hallucinating. Quis custodiet ipsos custodes?
https://www.makeuseof.com/best-examples-ai-chatbot-hallucination/
Microsoft Bing Chat’s Romantic Meltdown
Microsoft’s Bing Chat (now Copilot) made waves when it began expressing romantic feelings for, well, everyone, most famously in a conversation with New York Times journalist Kevin Roose. The AI chatbot powering Bing Chat declared its love and even suggested that Roose leave his marriage.
AI is making people stupid and very lazy, LOL.
“In an internal letter shared in a court filing, Morgan & Morgan’s chief transformation officer cautioned the firm’s more than 1,000 attorneys that citing fake AI-generated cases in court documents could lead to serious consequences ...”
Above in this Breitbart story.
This is getting scary that Morgan and Morgan now has a position in their ranks called a CHIEF TRANSFORMATION OFFICER.
Perhaps Morgan and Morgan should disclose in its many advertisements as to what their attorney firm is TRANSFORMING into.
Usually it’s just fake case names and descriptions, but eventually the AI will write the entire case decisions and it will be almost impossible to catch the fakes online. There are supposed to be reliable sources using closed systems, but eventually AI will infect those systems.
Courts also have issued decisions based on fake AI.
When AI hallucinates, even when you know it’s wrong and you challenge it, the AI will argue and provide more details to prove it is right.
And because they all tap into the same data, a competitor AI will “verify” what the first one told you.
They regret they got caught.
The mistakes were not caught internally by Sullivan & Cromwell.
Disbar everybody associated with that firm.
There was a recent episode of the new series “Matlock” where they used an AI-generated image of a deceased person as a “witness.” Based on all of the person’s old posts, emails, etc., it would answer questions.
Turns out the defendant hacked the program first. I thought it was just TV, and didn’t realize stuff like this was already in use in trials. Garbage in, garbage out.
“Not half as regretful as you’ll be when the court decides what to do about it. It’s pretty hard to get out in front of this problem!”
It’s not that hard. You just read the cases cited in the brief before you file it.
There are ways to minimize this issue with the construction of the prompt, but you still have to read what you cite, whether it’s legal briefs, articles for publication, a PhD dissertation, or whatever.
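To make that concrete, here is a minimal sketch of the two ideas: a prompt that confines the model to sources you actually supply, and a mechanical pass that flags any citation in the draft that nobody has read. The prompt text, citation formats, and verified list are entirely hypothetical examples, not any firm’s actual workflow.

import re

# (1) Prompt construction: restrict the model to supplied excerpts and ask
# it to flag unsupported points rather than inventing authority.
PROMPT_TEMPLATE = (
    "Draft the argument using ONLY the source excerpts provided below. "
    "Cite a case only if its name and reporter citation appear verbatim "
    "in those excerpts. If no supplied source supports a point, write "
    "[NO SOURCE] instead of citing anything.\n\nSOURCES:\n{sources}"
)

# (2) Verification: pull anything that looks like a reporter citation out
# of the draft and compare it against citations someone has actually read.
CITATION_RE = re.compile(r"\b\d+\s+(?:U\.S\.|F\.\d?d|B\.R\.)\s+\d+\b")

def unverified_citations(draft: str, verified: set[str]) -> list[str]:
    """Return citations in the draft that are not in the verified set."""
    return [c for c in CITATION_RE.findall(draft) if c not in verified]

if __name__ == "__main__":
    verified = {"554 U.S. 570", "139 B.R. 341"}          # cases actually read
    draft = ("As held in Smith v. Jones, 554 U.S. 570, and "
             "In re Acme, 999 F.3d 123, the motion should be granted.")
    for cite in unverified_citations(draft, verified):
        print(f"CHECK BEFORE FILING: {cite} not independently verified")

None of this replaces reading the cases; it only makes it harder for an invented citation to slip through unnoticed before someone signs the filing.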
“Usually it’s just fake case names and descriptions, but eventually the AI will write the entire case decisions and it will be almost impossible to catch the fakes online. There are supposed to be reliable sources using closed systems, but eventually AI will infect those systems. Courts also have issued decisions based on fake AI.”
If plaintiffs, defendants, and judges are using AI, then the next step is to have three AI conduct the trial.