Free Republic

To: Openurmind

I used ChatGPT here to generate tips on how to reduce the chances of hallucinations. Some good advice there, but of course, YMMV.

Minimizing the Chance of Hallucinations

“Hallucination” = the model generates confident but factually incorrect or unsupported statements.

A. Data and Prompt Engineering

Be explicit in instructions:

Example: “If unsure, say you don’t know.”

Reinforces truthfulness over fluency.
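
For example, a minimal sketch assuming the OpenAI Python SDK; the model name and question are placeholders:

# Explicit system instruction that rewards "I don't know" over a fluent guess.
# Assumes an API key in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Answer the user's question accurately. If you are not sure, "
    "say 'I don't know' instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What year was Example Township founded?"},
    ],
    temperature=0,  # a low temperature also discourages free-wheeling answers
)
print(response.choices[0].message.content)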

Provide structured context:

Use bullet points, JSON, or tables instead of narrative paragraphs.

Models tend to be more accurate when the input format is structured and unambiguous.
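
A small illustration in Python, with made-up field names, of passing facts as JSON rather than prose:

# Give the model facts as structured JSON instead of a narrative paragraph,
# so each field is unambiguous. The record and field names are made up.
import json

facts = {
    "product": "Widget X100",
    "release_date": "2024-03-15",
    "warranty_months": 24,
    "regions_available": ["US", "EU"],
}

prompt = (
    "Using ONLY the product record below, answer the customer's question.\n\n"
    "Product record (JSON):\n"
    f"{json.dumps(facts, indent=2)}\n\n"
    "Question: How long is the warranty on the Widget X100?"
)
print(prompt)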

Limit the model’s imagination scope:

Add constraints like “Answer only using the provided data” or “Do not make assumptions beyond the context.”
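
One way to package those constraints is a reusable prompt template; a sketch, where the wording and fallback phrase are just one possibility:

# A constrained prompt template: answer only from the supplied context,
# with an explicit fallback when the context doesn't contain the answer.
def grounded_prompt(context: str, question: str) -> str:
    return (
        "Answer only using the provided data. "
        "Do not make assumptions beyond the context. "
        "If the context does not contain the answer, reply 'Not in the provided data.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(grounded_prompt("The invoice total was $1,240, due 2025-01-31.",
                      "When is the invoice due?"))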

Shorten context to essentials:

The more irrelevant information there is in the prompt, the more likely the model is to anchor to the wrong part of it.

B. System-Level Controls

Retrieval-Augmented Generation (RAG):

Retrieve relevant documents (from a database or vector store) dynamically before generation.

Grounds answers in retrieved, verified data rather than in the model's parametric memory.
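
A bare-bones sketch of the retrieve-then-prompt flow. Word-overlap scoring stands in for a real embedding/vector-store lookup, and the documents are made up:

# Retrieve the most relevant documents first, then build a grounded prompt.
def score(query: str, doc: str) -> int:
    # crude relevance: shared-word count (a real system would use embeddings)
    return len(set(query.lower().split()) & set(doc.lower().split()))

documents = [
    "The X100 ships with a 24-month warranty in the US and EU.",
    "The X200 supports USB-C fast charging up to 65 W.",
    "Returns are accepted within 30 days of purchase.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

question = "How long is the warranty on the X100?"
context = "\n".join(retrieve(question))
prompt = (
    "Answer only from the context below. If the answer is not there, say so.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
print(prompt)  # this grounded prompt is what gets sent to the model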

Post-Generation Verification:

Use a secondary LLM or rule-based validator to check claims (a “fact-checking pass”).

Common in multi-agent or chain-of-thought systems.
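
A toy version of such a pass: a rule-based check that flags numbers in the draft answer that never appear in the source, plus the prompt you might hand to a second "verifier" model. Everything here is illustrative:

# Rule-based validator: flag any number in the answer the source never mentions.
import re

def unsupported_numbers(answer: str, source: str) -> list[str]:
    return [n for n in re.findall(r"\d+(?:\.\d+)?", answer) if n not in source]

source = "The X100 warranty lasts 24 months."
draft = "The X100 comes with a 36-month warranty."
print(unsupported_numbers(draft, source))  # ['36'] -> this claim needs review

# Prompt for a secondary-LLM fact-checking pass over the same draft.
verifier_prompt = (
    f"Source:\n{source}\n\nClaim:\n{draft}\n\n"
    "Is every statement in the claim supported by the source? "
    "Answer SUPPORTED or UNSUPPORTED and list any unsupported statements."
)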

Confidence Scoring:

Use techniques like log-probabilities, entailment scoring, or cross-verification with another model to estimate certainty.
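
For instance, averaging token log-probabilities gives a rough (imperfect) confidence signal; a sketch assuming the OpenAI Python SDK with logprobs enabled, and an arbitrary threshold:

# Rough confidence estimate from token log-probabilities.
import math
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What is the capital of Australia?"}],
    logprobs=True,
)
logprobs = [t.logprob for t in response.choices[0].logprobs.content]
avg_token_prob = math.exp(sum(logprobs) / len(logprobs))
print(f"mean per-token probability: {avg_token_prob:.2f}")
if avg_token_prob < 0.8:  # arbitrary cutoff; tune or combine with other signals
    print("Low confidence -- route this answer to a verification pass.")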

C. Model and Context Management

Trim irrelevant history:

Don’t keep entire conversation histories; keep only what’s contextually relevant.

This prevents confusion or “blending” of old and new facts.
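
A simple way to do this is to keep the system message plus only the most recent turns; a sketch with an arbitrary window size:

# Keep the system message and the last few turns; drop everything older.
def trim_history(messages: list[dict], keep_turns: int = 6) -> list[dict]:
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep_turns:]

history = [{"role": "system", "content": "Answer from the provided data only."}]
history += [{"role": "user", "content": f"question {i}"} for i in range(20)]
print(len(trim_history(history)))  # 7: the system message plus the last 6 turns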

Chunk and summarize:

Use summarization checkpoints so that the model “remembers” context in concise, verified summaries rather than raw text.
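
A sketch of such a checkpoint: once the history passes a threshold, the older turns are collapsed into one summary message. Here summarize() is a placeholder for a model call (or a hand-written, verified recap):

# Collapse older turns into a single summary message once the history gets long.
def summarize(turns: list[dict]) -> str:
    # placeholder: a real system would summarize with a model and verify the result
    return "Summary of earlier conversation: " + "; ".join(
        t["content"][:40] for t in turns
    )

def checkpoint(messages: list[dict], max_turns: int = 10) -> list[dict]:
    if len(messages) <= max_turns:
        return messages
    old, recent = messages[:-max_turns], messages[-max_turns:]
    return [{"role": "system", "content": summarize(old)}] + recent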

External memory with grounding:

Store facts externally (e.g., database, vector index) instead of relying on the LLM’s internal weights to recall truth.
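
For example, a tiny fact store in SQLite (standard library), looked up at answer time and injected into the prompt; the schema and facts are made up:

# External memory: keep ground-truth facts in a database and retrieve them,
# instead of trusting the model's weights to recall them.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE facts (subject TEXT, predicate TEXT, value TEXT)")
db.execute("INSERT INTO facts VALUES ('X100', 'warranty_months', '24')")

row = db.execute(
    "SELECT value FROM facts WHERE subject = ? AND predicate = ?",
    ("X100", "warranty_months"),
).fetchone()
context = f"X100 warranty_months = {row[0]}" if row else "no stored fact"
print(context)  # inject this into the prompt rather than asking from memory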

D. Fine-tuning / System Prompts

Reinforce factuality in base instructions (system prompt or fine-tuning data).
Example: “Always cite your sources. If none exist, state that the answer cannot be verified.”
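
That instruction can live in the system prompt, or be baked into fine-tuning data. A sketch of one line of chat-style JSONL in the format OpenAI's fine-tuning API accepts (the content is made up):

# One training example reinforcing "cite or say it can't be verified".
import json

FACTUAL_SYSTEM = (
    "Always cite your sources. If none exist, state that the answer "
    "cannot be verified."
)

example = {
    "messages": [
        {"role": "system", "content": FACTUAL_SYSTEM},
        {"role": "user", "content": "Who founded Example Corp?"},
        {"role": "assistant",
         "content": "I cannot verify that; no source is available to me."},
    ]
}
print(json.dumps(example))  # one line of a .jsonl training file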


14 posted on 10/12/2025 9:30:40 AM PDT by dfwgator ("I am Charlie Kirk!")


To: dfwgator

I would also recommend the following approach when an AI states something.

“You have made a claim.”

“Now give me five strong arguments that conflict with your claim.”
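
In code, that is just a follow-up turn appended to the same conversation; a sketch assuming the OpenAI Python SDK, with a placeholder model name and question:

# Ask for the claim, then challenge it with the follow-up prompt above.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user",
             "content": "Is the Great Wall of China visible from space?"}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
claim = first.choices[0].message.content
messages += [
    {"role": "assistant", "content": claim},
    {"role": "user", "content": "You have made a claim. Now give me five "
     "strong arguments that conflict with your claim."},
]
rebuttals = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(rebuttals.choices[0].message.content)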


22 posted on 10/12/2025 9:43:20 AM PDT by cgbg ("The truth is not for all men, but only for those who seek it.")
