AI is for generating first drafts, that’s it.
Navel-gazing AI.
I always assume AI is just telling me whatever.
I always tell it that it is wrong and to try again, regardless of the answer, and it will always say "I'm sorry" and give me another wrong answer.
I’ve learned to skip the AI summaries search engines provide because they are so routinely inaccurate and unhelpful. I wonder what’s to become of humanity if people actually take such garbage output seriously? What is the point of using it to provide even a first draft if you have to spend time checking every line of it and fixing half of it?
Huh, I had the same thought years ago - that AI would recursively become more and more delusional as it fills the internet with inaccurate content that begets more inaccurate content. Like the old "telephone" game where you whisper a message down a line of people and it mutates to become unrecognizable by the end.
GIGO
My take is that AI will descend into sounding and acting like your sociopathic democrat/communist in-law or relative.
I used ChatGPT here to generate tips on how to reduce the chance of hallucinations. Some good advice there, but of course, YMMV.
Minimizing the Chance of Hallucinations
“Hallucination” = the model generates confident but factually incorrect or unsupported statements.
A. Data and Prompt Engineering
Be explicit in instructions:
Example: “If unsure, say you don’t know.”
Reinforces truthfulness over fluency.
Provide structured context:
Use bullet points, JSON, or tables instead of narrative paragraphs.
Models tend to be more accurate when the input format is regular and unambiguous.
Limit the model’s imagination scope:
Add constraints like “Answer only using the provided data” or “Do not make assumptions beyond the context.”
Shorten context to essentials:
The more irrelevant information in the prompt, the more likely the model will anchor to the wrong part of it.
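The "be explicit," "limit scope," and "shorten context" tips above can be combined into a small prompt builder. This is a minimal sketch; the instruction wording mirrors the examples in the list, but the exact phrasing and its effect on any particular model are assumptions, not tested guarantees:

```python
def build_grounded_prompt(context: str, question: str) -> str:
    """Assemble a prompt that constrains the model to the supplied context.

    The constraint sentences follow the tips above ("answer only using the
    provided data", "if unsure, say you don't know"); treat the exact
    wording as illustrative rather than optimal.
    """
    return (
        "Answer only using the provided data. "
        "Do not make assumptions beyond the context. "
        "If unsure, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )


# Example: the context passed in should already be trimmed to essentials,
# per the "shorten context" tip.
prompt = build_grounded_prompt(
    context="Widget X ships with a 2-year warranty.",
    question="How long is the warranty on Widget X?",
)
```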
B. System-Level Controls
Retrieval-Augmented Generation (RAG):
Retrieve relevant documents (from a database or vector store) dynamically before generation.
Ensures grounding in verified data rather than parameter memory.
Post-Generation Verification:
Use a secondary LLM or rule-based validator to check claims (a “fact-checking pass”).
Common in multi-agent or chain-of-thought systems.
Confidence Scoring:
Use techniques like log-probabilities, entailment scoring, or cross-verification with another model to estimate certainty.
C. Model and Context Management
Trim irrelevant history:
Don’t keep entire conversation histories; keep only what’s contextually relevant.
This prevents confusion or “blending” of old and new facts.
Chunk and summarize:
Use summarization checkpoints so that the model “remembers” context in concise, verified summaries rather than raw text.
External memory with grounding:
Store facts externally (e.g., database, vector index) instead of relying on the LLM’s internal weights to recall truth.
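The "trim irrelevant history" and "chunk and summarize" tips can be sketched as a rolling-context helper. The one-line summarizer below is a stub; in a real system that line would be replaced by an LLM summarization call, which is an assumption about your pipeline:

```python
def trim_history(turns: list[str], keep_last: int = 4) -> list[str]:
    """Keep a summary checkpoint plus only the most recent turns.

    Older turns are collapsed into a single summary line so the model
    "remembers" them concisely instead of re-reading raw text. The
    truncation-based summary here is a stand-in for a real summarization
    step (hypothetical).
    """
    if len(turns) <= keep_last:
        return list(turns)
    older, recent = turns[:-keep_last], turns[-keep_last:]
    summary = "SUMMARY: " + " / ".join(t[:30] for t in older)  # stub summarizer
    return [summary] + recent
```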
D. Fine-tuning / System Prompts
Reinforce factuality in base instructions (system prompt or fine-tuning data).
Example: “Always cite your sources. If none exist, state that the answer cannot be verified.”
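A factuality-reinforcing system prompt like the example above can be wired into a chat-style request. The role/content message schema below is the common convention used by several chat APIs; treat it as an assumption about your provider rather than a universal format:

```python
# System prompt reinforcing factuality, per the example above.
FACTUAL_SYSTEM_PROMPT = (
    "Always cite your sources. "
    "If none exist, state that the answer cannot be verified."
)


def make_messages(user_query: str) -> list[dict]:
    """Build a chat-completion style message list with the factuality
    instruction pinned in the system role, where it carries more weight
    than a user-turn instruction in most chat models."""
    return [
        {"role": "system", "content": FACTUAL_SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]
```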
I stopped using Google when it started requiring me to solve puzzles to prove I am not a robot just to perform a simple search. Not worth the hassle.
Genies can never be put back in the bottle. AI is here to stay and will continue to get better and stronger.
AI is a tool, and it's really good for some important things, but not good for everything under all circumstances.
If you're convinced, then it's "intelligent." If you're not convinced, then this AI stuff is just programming, which always has restraints to keep it from going off the rails, as it has multiple times now.
"If we can fool you....." Has the ring of the Clintons and the Obamas.....
Remember "we're the ones we've been waiting for?"
bkmk
Everyone can enjoy laughing at AI for a bit longer. Yeah, I can easily spot AI-written drivel right now.
But I suspect there will come a time not too far in the future when one won’t be able to tell that AI generated the content. AI is only in its infancy right now.
AI can’t determine truth. Quite a serious weakness.