Posted on 07/23/2025 7:25:32 PM PDT by Be Careful
You aren’t man enough to admit when you’ve been outsmarted. I’m way above your intelligence level. I made it shorter for your limited reading ability.
“The Court explicitly stated that immunity for official acts does not depend on the president’s motives or intent, meaning mens rea is largely irrelevant in determining immunity for these acts. However, the prosecution can still proceed if it overcomes the presumption, focusing on the act’s impact rather than the president’s state of mind.” (scotusblog.com, vox.com)
You are buried 1000 feet down. You’ll never see daylight again.
ChatGPT is not reliable for answering legal questions. Even Wikipedia doesn’t invent citations to nonexistent cases to support bogus legal claims.
ChatGPT is simply not a reliable substitute for reading the Opinion of the Court to determine what the Court actually said.
[Syllabus recitation of holdings at 603 U.S. 596] "In dividing official from unofficial conduct, courts may not inquire into the President's motives."
[Opinion of the Court at 603 U.S. 618] "In dividing official from unofficial conduct, courts may not inquire into the President's motives. Such an inquiry would risk exposing even the most obvious instances of official conduct to judicial examination on the mere allegation of improper purpose, thereby intruding on the Article II interests that immunity seeks to protect."
- - - - - - - - -
https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries
AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries
Date: May 23, 2024
Stanford HAI (Human-Centered Artificial Intelligence), Stanford University
A new study reveals the need for benchmarking and public evaluations of AI tools in law.
Artificial intelligence (AI) tools are rapidly transforming the practice of law. Nearly three quarters of lawyers plan on using generative AI for their work, from sifting through mountains of case law to drafting contracts to reviewing documents to writing legal memoranda. But are these tools reliable enough for real-world use?
Large language models have a documented tendency to “hallucinate,” or make up false information. In one highly-publicized case, a New York lawyer faced sanctions for citing ChatGPT-invented fictional cases in a legal brief; many similar cases have since been reported. And our previous study of general-purpose chatbots found that they hallucinated between 58% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice. In his 2023 annual report on the judiciary, Chief Justice Roberts took note and warned lawyers of hallucinations.
Across all areas of industry, retrieval-augmented generation (RAG) is seen and promoted as the solution for reducing hallucinations in domain-specific contexts. Relying on RAG, leading legal research services have released AI-powered legal research products that they claim “avoid” hallucinations and guarantee “hallucination-free” legal citations. RAG systems promise to deliver more accurate and trustworthy legal information by integrating a language model with a database of legal documents. Yet providers have not provided hard evidence for such claims or even precisely defined “hallucination,” making it difficult to assess their real-world reliability.
AI-Driven Legal Research Tools Still Hallucinate
In a new preprint study by Stanford RegLab and HAI researchers, we put the claims of two providers, LexisNexis (creator of Lexis+ AI) and Thomson Reuters (creator of Westlaw AI-Assisted Research and Ask Practical Law AI), to the test. We show that their tools do reduce errors compared to general-purpose AI models like GPT-4. That is a substantial improvement, and we document instances where these tools provide sound and detailed legal research. But even these bespoke legal AI tools still hallucinate an alarming amount of the time: the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.
Read the full study, Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools
To conduct our study, we manually constructed a pre-registered dataset of over 200 open-ended legal queries, which we designed to probe various aspects of these systems’ performance.
Broadly, we investigated (1) general research questions (questions about doctrine, case holdings, or the bar exam); (2) jurisdiction or time-specific questions (questions about circuit splits and recent changes in the law); (3) false premise questions (questions that mimic a user having a mistaken understanding of the law); and (4) factual recall questions (questions about simple, objective facts that require no legal interpretation). These questions are designed to reflect a wide range of query types and to constitute a challenging real-world dataset of exactly the kinds of queries where legal research may be needed the most.
[Figure 1: Comparison of hallucinated (red) and incomplete (yellow) answers across generative legal research tools.]
These systems can hallucinate in one of two ways. First, a response from an AI tool might just be incorrect—it describes the law incorrectly or makes a factual error. Second, a response might be misgrounded—the AI tool describes the law correctly, but cites a source which does not in fact support its claims.
Given the critical importance of authoritative sources in legal research and writing, the second type of hallucination may be even more pernicious than the outright invention of legal cases. A citation might be “hallucination-free” in the narrowest sense that the citation exists, but that is not the only thing that matters. The core promise of legal AI is that it can streamline the time-consuming process of identifying relevant legal sources. If a tool provides sources that seem authoritative but are in reality irrelevant or contradictory, users could be misled. They may place undue trust in the tool's output, potentially leading to erroneous legal judgments and conclusions.
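The article's distinction between the two failure modes can be sketched in code. The following is a toy illustration only: the function names, the miniature "library," and the crude word-overlap heuristic are all assumptions for demonstration, not the study's actual evaluation method (which relied on expert review).

```python
# Toy illustration of the two hallucination types: a citation can EXIST
# (so it is "hallucination-free" in the narrowest sense) yet still fail
# to SUPPORT the claim it is attached to ("misgrounding").

def citation_exists(cite, library):
    """First check: is the cited authority a real document at all?"""
    return cite in library

def citation_supports(claim, cite, library):
    """Crude support test: does the cited text share key terms with the
    claim? (A stand-in for genuine legal relevance checking.)"""
    if not citation_exists(cite, library):
        return False
    claim_terms = set(claim.lower().split())
    source_terms = set(library[cite].lower().split())
    return len(claim_terms & source_terms) >= 2

# Hypothetical two-document "library" for the demo.
library = {
    "case-A": "a valid contract requires offer acceptance and consideration",
    "case-B": "a treason conviction requires the testimony of two witnesses",
}

claim = "treason requires the testimony of two witnesses"

# Citing case-A for this claim is misgrounded: the source exists,
# but it does not support the proposition.
print(citation_exists("case-A", library))            # the cite is real
print(citation_supports(claim, "case-A", library))   # but irrelevant
print(citation_supports(claim, "case-B", library))   # the right authority
```

The point of the sketch is that verifying existence is cheap, while verifying support requires actually reading the source against the claim, which is exactly the work users hope the tool has done for them.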
[Figures 2 and 3]
Under the hood, these new legal AI tools use retrieval-augmented generation (RAG) to produce their results, a method that many tout as a potential solution to the hallucination problem. In theory, RAG allows a system to first retrieve the relevant source material and then use it to generate the correct response. In practice, however, we show that even RAG systems are not hallucination-free.
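The retrieve-then-generate pipeline the article describes can be sketched minimally. Everything here is an illustrative assumption: the keyword-overlap retriever and the stub `generate` function stand in for a real search index and a real language model, and are not any vendor's API.

```python
# Minimal retrieval-augmented generation (RAG) sketch: rank documents
# against the query, then condition the "generation" on what was retrieved.

def retrieve(query, documents, k=1):
    """Rank documents by naive keyword overlap with the query.
    Real systems use learned or lexical search indexes instead."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query, sources):
    """Stand-in for an LLM call: an answer grounded in (and citing)
    the retrieved sources."""
    cited = "; ".join(s["id"] for s in sources)
    return f"Answer based on [{cited}]: {sources[0]['text']}"

# Hypothetical two-document corpus for the demo.
corpus = [
    {"id": "case-1", "text": "official acts are covered by presidential immunity"},
    {"id": "case-2", "text": "contracts require offer acceptance and consideration"},
]

sources = retrieve("is an official act immune", corpus)
answer = generate("is an official act immune", sources)
```

The sketch also shows where the failure modes enter: if `retrieve` surfaces the wrong or an inapposite document, `generate` will confidently ground its answer in it anyway, which is precisely the article's point about RAG not being hallucination-free.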
We identify several challenges unique to RAG-based legal AI systems that can cause hallucinations.
First, legal retrieval is hard. As any lawyer knows, finding the appropriate (or best) authority can be no easy task. Unlike other domains, the law is not entirely composed of verifiable facts—instead, law is built up over time by judges writing opinions. This makes identifying the set of documents that definitively answer a query difficult, and sometimes hallucinations occur for the simple reason that the system’s retrieval mechanism fails.
Second, even when retrieval occurs, the document that is retrieved can be an inapplicable authority. In the American legal system, rules and precedents differ across jurisdictions and time periods; documents that might be relevant on their face due to semantic similarity to a query may actually be inapposite for idiosyncratic reasons that are unique to the law. Thus, we also observe hallucinations occurring when these RAG systems fail to identify the truly binding authority. This is particularly problematic because areas where the law is in flux are precisely where legal research matters the most. One system, for instance, incorrectly recited the “undue burden” standard for abortion restrictions as good law, even though that standard was overturned in Dobbs (see Figure 2).
Third, sycophancy—the tendency of AI to agree with the user's incorrect assumptions—also poses unique risks in legal settings. One system, for instance, naively agreed with the question’s premise that Justice Ginsburg dissented in Obergefell, the case establishing a right to same-sex marriage, and answered that she did so based on her views on international copyright. (Justice Ginsburg did not dissent in Obergefell and, no, the case had nothing to do with copyright.) Notwithstanding that error, there are some optimistic results here. Our tests showed that both systems generally navigated queries based on false premises effectively. But when these systems do agree with erroneous user assertions, the implications can be severe—particularly for those hoping to use these tools to increase access to justice among pro se and under-resourced litigants.
Responsible Integration of AI Into Law Requires Transparency
Ultimately, our results highlight the need for rigorous and transparent benchmarking of legal AI tools. Unlike other domains, the use of AI in law remains alarmingly opaque: the tools we study provide no systematic access, publish few details about their models, and report no evaluation results at all.
This opacity makes it exceedingly challenging for lawyers to evaluate and procure AI products. The large law firm Paul Weiss spent nearly a year and a half testing a product, and did not develop “hard metrics” because checking the AI system was so involved that it “makes any efficiency gains difficult to measure.” The absence of rigorous evaluation metrics makes responsible adoption difficult, especially for practitioners who are less resourced than Paul Weiss.
The lack of transparency also threatens lawyers’ ability to comply with ethical and professional responsibility requirements. The bar associations of California, New York, and Florida have all recently released guidance on lawyers’ duty of supervision over work products created with AI tools. And as of May 2024, more than 25 federal judges have issued standing orders instructing attorneys to disclose or monitor the use of AI in their courtrooms.
Without access to evaluations of the specific tools and transparency around their design, lawyers may find it impossible to comply with these responsibilities. Alternatively, given the high rate of hallucinations, lawyers may find themselves having to verify each and every proposition and citation provided by these tools, undercutting the stated efficiency gains that legal AI tools are supposed to provide.
Our study is meant in no way to single out LexisNexis and Thomson Reuters. Their products are far from the only legal AI tools that stand in need of transparency—a slew of startups offer similar products and have made similar claims, but they are available on even more restricted bases, making it even more difficult to assess how they function.
Based on what we know, legal hallucinations have not been solved. The legal profession should turn to public benchmarking and rigorous evaluations of AI tools.
This story was updated on Thursday, May 30, 2024, to include analysis of a third AI tool, Westlaw’s AI-Assisted Research. Paper authors: Varun Magesh is a research fellow at Stanford RegLab. Faiz Surani is a research fellow at Stanford RegLab. Matthew Dahl is a joint JD/PhD student in political science at Yale University and graduate student affiliate of Stanford RegLab. Mirac Suzgun is a joint JD/PhD student in computer science at Stanford University and a graduate student fellow at Stanford RegLab. Christopher D. Manning is Thomas M. Siebel Professor of Machine Learning, Professor of Linguistics and Computer Science, and Senior Fellow at HAI. Daniel E. Ho is the William Benjamin Scott and Luna M. Scott Professor of Law, Professor of Political Science, Professor of Computer Science (by courtesy), Senior Fellow at HAI, Senior Fellow at SIEPR, and Director of the RegLab at Stanford University.
Depends.
Spying, sedition, treason, betrayal resulting in death, capture, enemy containment, plus clear evidence of criminal intent (did it for money or to give an advantage to an enemy, did it to cause harm, did it with malicious intent) - nobody is immune.
We’re arguing with a guy convinced that he’s more intelligent because he gets his legal arguments from ChatGPT. If we keep arguing with him, then we really are the idiots.
So a President should be able to clearly and plainly violate 18USC242?
Please explain how that is an “official” act of the President.
Have you read the actual Supreme Court decision, in its entirety?
If you have, I'd be happy to discuss this with you. But I'm getting really tired of armchair ChatGPT or "I read what Gateway Pundit said" level of discourse.
Oh hell, I already know you didn't read it because the question you asked shows a lack of comprehension of what the Supreme Court meant by "official acts".
So no, I won't explain it to you. Read the decision yourself, you lazy ****.
“Have you read the actual Supreme Court decision, in its entirety?”
Actually, Scooter, I have. But since I’m not a lawyer and don’t speak lawyer I asked a good faith question.
“Read the decision yourself, you lazy ****.”
GFY, ***hole.
First, any crime has two different components. The "act" itself ("actus reus" in legal terms) and the requisite degree of criminal intent ("mens rea"). You have to put them both together to have a "crime." Needless to say, people are usually very quick to impute mens rea to acts they don't like, but look for every excuse in the world to avoid finding mens rea when it is being applied to people they do like. "My guy did it for a noble reason, but your guy did it for an evil one."
Anyway, here's the problem with your question about that code section. The Supreme Court in Trump v. United States was very clear that you cannot consider the President's intent or motive when determining whether the act he took is covered by immunity. In other words, the President's "mens rea" (literally "guilty mind") is something that you cannot consider when determining questions of immunity. You must look only at the act itself. So when you cite a criminal code section and ask "is this covered by immunity", it is a non sequitur because you are adding in criminal intent to an immunity determination where intent is irrelevant.
Here's an example to hopefully make the point more clearly. I have actually read someone on FR say that Ketanji Brown Jackson is so incompetent, and so intent on destroying the country, that Biden appointing her was literally treason. So to that person, Biden committed treason - a violation of 18 U.S.C. Section 2381.
Of course, the actus reus in question is Biden appointing her to the Supreme Court, which SCOTUS would find 9-0 is an "official act" covered by Presidential Immunity. But then you'd have people here saying that would be an absurd result because you are now claiming that there is Presidential immunity for treason!! Horrors!!
That's why asking if there is immunity for the violation of a particular code section, as you did, doesn't make sense. What courts will look at is the act itself, minus all the spin, and minus all the imputation of evil motive or purpose. Simply the bare act. So, I couldn't answer your question because I don't even know what you are claiming the specific underlying act actually is. All I know is that you cited a code section without specifying what he actually did, which is the only thing that actually matters when it comes to determining whether immunity applies.
"In dividing official from unofficial conduct, courts may not inquire into the President’s motives." Trump v USA 2024
Courts may not inquire into motive ONLY when dividing what is official as opposed to unofficial. That's all SCOTUS said about motive, intent, mens rea.
Treason, among some other crimes, would of course necessitate an inquiry into motive, intent, mens rea.
MENS REA AND IMMUNITY:
The presence or absence of a guilty mind (mens rea) is essential in determining whether an action is criminal, even if it falls under the umbrella of "official" acts, according to the Cato Institute.
For example, if a president uses their official powers to commit a crime, but the action was motivated by personal gain or malice rather than legitimate presidential duties, a court could consider the act "unofficial" for the purposes of immunity, according to the American Enterprise Institute. This is because the president's motive and intent are crucial in determining whether a course of action constitutes a crime.
THE IMPORTANCE OF MENS REA:
The concept of mens rea is crucial in ensuring that presidents are held accountable for criminal actions, even if those actions are cloaked in the language of official duties.
If a president's actions are motivated by criminal intent, even if they involve official powers, they may not be shielded by presidential immunity.
In essence, presidential immunity is not absolute and does not protect a president from criminal prosecution for acts motivated by a guilty mind, even if those acts are connected to their official duties.
The Court provided minimal guidance on how to determine whether an act is official or unofficial, remanding those questions back to the lower courts for exploration.
I rest, and win, my case.
“So, I couldn’t answer your question because I don’t even know what you are claiming the specific underlying act actually is.”
I’d say knowingly using false information to drag a duly elected President through years of criminal trials would qualify as a deliberate violation of Trump’s civil rights in violation of 18USC242. Hence my mention of that statute.
I’d also say that ordering underlings to knowingly use false information to violate the civil rights of a duly elected President might run afoul of the RICO statutes. I can’t see even the most liberal member of SCOTUS signing off on that. But once again, I’m no lawyer. And Roberts has more gyrations than an Olympic gymnast so who knows.
And if you mean to say that SCOTUS said a sitting President can NEVER be considered to have a “guilty mind” I’d politely ask you to point me to the specific section of the decision which states that. Because I must have missed it.
Granted I’m no lawyer and my Latin is a bit rusty. So I could be wrong. My understanding of the decision is that it would need to be litigated to determine whether or not it was an “official act”. Do I have that right?
And just for the record I don’t use ChatGPT or any other AI platform to do my research. I don’t trust natural intelligence all that much so I’m very hesitant to use the artificial version.
It’s why I asked the question of you and not ChatGPT. I assume you’re a lawyer.
"Litigated" in the sense of some motion practice of lawyers writing stuff and arguing about Immunity, but not "litigated" in the sense of an actual trial. The entire point of immunity is that it is improper to have that matter tried at all.
As I've said elsewhere, I don't think the conservative blogosphere is doing us any favors by characterizing the evidence and facts the way they have. They are clearly spinning matters of intent and judgment as if they are uncontroverted facts.
Nobody ever convinces anyone of anything here, and I understand I'm really swimming upstream on this one in particular. So all I'll say is that when the dust settles from all this, Obama will not end up indicted for any of his actions as President related to this. Even if they find some compliant district court judge, it will never survive appeal.
Whether or not that is "right" or "just" isn't the issue I'm debating. I'm just saying that's what the law actually is.