That situation involved a case in the U.S. District Court for the Southern District of New York, where attorneys representing a plaintiff in a personal injury lawsuit submitted a legal brief containing citations to six non-existent cases.
These fictitious cases were generated by ChatGPT, which the attorneys used for legal research without verifying the accuracy of the information provided.
The court discovered the issue when the defendant's legal team was unable to locate the cited cases and brought this to the court's attention.
Upon investigation, the court found that the attorneys had relied on ChatGPT's output without conducting proper due diligence.
As a result, the judge sanctioned the attorneys and their law firm, imposing a fine of $5,000 for submitting false information to the court.
The judge noted that he was aware of AI's potential value in the practice of law but expected lawyers to verify AI-generated content, especially in legal proceedings where accuracy is paramount. The issue was not the use of AI per se, but the failure to exercise professional responsibility by reviewing and confirming the validity of the information before submission.
I think that was a different case. The case I spoke of had an avatar representing the person.
It was on Fox, not Newsmax. I made a mistake.