Posted on 10/12/2025 9:14:38 AM PDT by Openurmind
Interest in artificial intelligence continues to surge: Google searches over the past 12 months sit at 92% of their all-time peak. But recent research suggests AI’s success could be its downfall. Amid the growth of AI content online, a group of researchers at Cambridge and Oxford universities set out to see what happens when generative AI tools query content produced by AI. What they found was alarming.

University of Oxford’s Dr. Ilia Shumailov and his team of researchers discovered that when generative AI software relies solely on content produced by genAI, the responses begin to degrade, according to the study published in Nature last month. After the first two prompts, the answers steadily miss the mark; quality drops significantly by the fifth attempt, and the output devolves into nonsensical pablum by the ninth consecutive query. The researchers dubbed this cyclical overdose on generative AI content “model collapse”: a steady decline in the AI’s learned responses, in which each repeated cycle further pollutes the training sets until the output is a worthless distortion of reality.
(Excerpt) Read more at forbes.com ...
AI is for generating first drafts, that’s it.
Navel-gazing AI.
I always assume AI is just telling me whatever.
I always tell it that it is wrong and to try again, regardless of the answer. It will always say “I’m sorry” and then give me another wrong answer.
I’ve learned to skip the AI summaries search engines provide because they are so routinely inaccurate and unhelpful. I wonder what’s to become of humanity if people actually take such garbage output seriously. What is the point of using it to provide even a first draft if you have to spend time checking every line and fixing half of it?
But it also comes down to how the prompts are composed, as they say, “Garbage In, Garbage Out”.
Here is the problem though... as AI records and accumulates AI works across the internet, it uses these AI results as the feedstock for its next works. So this is the exact same condition as local rehashes: every time it sources from other AI, it gets worse.
Huh, I had the same thought years ago - that AI would recursively become more and more delusional as it fills the internet with inaccurate content that begets more inaccurate content. Like the old "telephone" game where you whisper a message down a line of people and it mutates to become unrecognizable by the end.
Depends on the LLM: some models might get corrupted, others won’t. That’s why it’s important to keep track of models and research their effectiveness. Competition is good; bad models will fall by the wayside. But most people don’t know how to do their due diligence.
GIGO
Absolutely.
My take is that AI will descend into sounding and acting like your sociopathic democrat/communist in-law or relative.
I am talking about AI internet tools.
I used ChatGPT here to generate tips on how to reduce the chances of hallucinations. Some good advice there, but of course, YMMV.
Minimizing the Chance of Hallucinations
“Hallucination” = the model generates confident but factually incorrect or unsupported statements.
A. Data and Prompt Engineering
Be explicit in instructions:
Example: “If unsure, say you don’t know.”
Reinforces truthfulness over fluency.
Provide structured context:
Use bullet points, JSON, or tables instead of narrative paragraphs.
Models are more accurate when the input format is deterministic.
Limit the model’s imagination scope:
Add constraints like “Answer only using the provided data” or “Do not make assumptions beyond the context.”
Shorten context to essentials:
The more irrelevant information in the prompt, the more likely the model will anchor to the wrong part of it.
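The section A tips above can be rolled into a single prompt-builder. This is a minimal sketch; the function name, the JSON context shape, and the sample question are illustrative assumptions, not part of any particular API.

```python
# Sketch: assembling a hallucination-resistant prompt per the tips above.
# Instructions are explicit, the context is structured JSON rather than
# narrative prose, and the model's scope is constrained to that context.
import json

def build_grounded_prompt(question: str, context: dict) -> str:
    """Build a prompt that tells the model to stay within the provided
    data and to admit uncertainty rather than guess."""
    return "\n".join([
        "Answer only using the provided data.",
        "Do not make assumptions beyond the context.",
        "If unsure, say you don't know.",
        "",
        "Context (JSON):",
        json.dumps(context, indent=2),
        "",
        f"Question: {question}",
    ])

prompt = build_grounded_prompt(
    "Which journal published the study?",
    {"journal": "Nature", "topic": "model collapse"},
)
```

Keeping the context payload short and deterministic is the point: the model has less irrelevant material to anchor to.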
B. System-Level Controls
Retrieval-Augmented Generation (RAG):
Retrieve relevant documents (from a database or vector store) dynamically before generation.
Ensures grounding in verified data rather than parameter memory.
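The retrieve-then-generate loop behind RAG can be sketched in a few lines. The “vector store” here is a toy word-overlap scorer, and `generate` is a stub standing in for any LLM call; both are illustrative assumptions, not a real library API.

```python
# Sketch of RAG: retrieve relevant documents first, then generate an
# answer grounded in them rather than in the model's parameter memory.

DOCUMENTS = [
    "Model collapse degrades LLMs trained on AI-generated output.",
    "RAG grounds answers in retrieved documents instead of weights.",
    "Nature published the model collapse study.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever;
    a real system would use embeddings in a vector store)."""
    q = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(question: str, grounding: list[str]) -> str:
    """Stub generator: a real system would prompt an LLM with the
    question plus the retrieved grounding text."""
    return f"Based on: {grounding[0]}"

q = "how does RAG ground answers"
answer = generate(q, retrieve(q, DOCUMENTS))
```

Because the grounding text is fetched fresh for each query, updating the document store updates the answers with no retraining.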
Post-Generation Verification:
Use a secondary LLM or rule-based validator to check claims (a “fact-checking pass”).
Common in multi-agent or chain-of-thought systems.
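A rule-based version of that fact-checking pass can be sketched as follows. The fact table and the key/value claim format are assumptions for illustration; a production system might instead use a second LLM as the validator.

```python
# Sketch of a post-generation verification pass: every claim the
# generator emitted is checked against a trusted fact table before the
# answer is shown, and contradicted claims are flagged for retry.

FACTS = {"journal": "Nature", "topic": "model collapse"}

def verify(claims: dict[str, str], facts: dict[str, str]) -> list[str]:
    """Return the keys whose claimed value contradicts the fact table."""
    return [k for k, v in claims.items() if k in facts and facts[k] != v]

# The generator "hallucinated" the journal; verify() catches it.
bad = verify({"journal": "Science", "topic": "model collapse"}, FACTS)
```

An empty result means the answer passes; a non-empty one can trigger a regeneration with the contradiction quoted back to the model.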
Confidence Scoring:
Use techniques like log-probabilities, entailment scoring, or cross-verification with another model to estimate certainty.
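The log-probability approach can be sketched with a few hard-coded numbers. Many LLM APIs can return per-token log-probabilities; exponentiating their mean gives a length-normalized probability that serves as a rough confidence proxy (an assumption of this sketch, not a universal standard).

```python
# Sketch: estimating answer confidence from per-token log-probabilities.
import math

def confidence(token_logprobs: list[float]) -> float:
    """Length-normalized probability: exp(mean log-prob), in (0, 1].
    Values near 1.0 mean the model found each token unsurprising."""
    return math.exp(sum(token_logprobs) / len(token_logprobs))

confident = confidence([-0.05, -0.10, -0.02])  # tokens near certainty
uncertain = confidence([-2.3, -1.9, -3.1])     # tokens near guesswork
```

Answers scoring below a chosen threshold can be routed to a verification pass or replaced with an “I don’t know.”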
C. Model and Context Management
Trim irrelevant history:
Don’t keep entire conversation histories; keep only what’s contextually relevant.
This prevents confusion or “blending” of old and new facts.
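A minimal trimming policy might look like this. The relevance test (sharing a keyword with the current question) is a deliberately crude stand-in for embedding similarity, and the function name is an assumption of this sketch.

```python
# Sketch: keep only the contextually relevant part of a conversation,
# so old, unrelated turns cannot "blend" into the new answer.

def trim_history(history: list[str], question: str,
                 keep_last: int = 2) -> list[str]:
    """Keep the last `keep_last` turns plus any earlier turn that
    shares a keyword with the question; drop everything else."""
    q = set(question.lower().split())
    tail = history[-keep_last:]
    relevant = [t for t in history[:-keep_last]
                if q & set(t.lower().split())]
    return relevant + tail

history = ["we discussed guitars", "then amplifiers",
           "weather is nice", "what about pickups"]
trimmed = trim_history(history, "compare guitars and amplifiers",
                       keep_last=1)
```

The off-topic weather turn is dropped while both on-topic turns and the latest turn survive.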
Chunk and summarize:
Use summarization checkpoints so that the model “remembers” context in concise, verified summaries rather than raw text.
External memory with grounding:
Store facts externally (e.g., database, vector index) instead of relying on the LLM’s internal weights to recall truth.
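The external-memory pattern reduces to lookup-before-generate. A real system would back this with a database or vector index; a dict is enough to show the shape. The class and method names are illustrative assumptions.

```python
# Sketch: an external fact store consulted at answer time, so recall
# comes from verifiable storage rather than the model's weights.

class FactStore:
    def __init__(self) -> None:
        self._facts: dict[str, str] = {}

    def remember(self, key: str, value: str) -> None:
        self._facts[key] = value

    def recall(self, key: str) -> str:
        # Admit ignorance instead of inventing an answer.
        return self._facts.get(key, "unknown -- not in verified storage")

store = FactStore()
store.remember("model collapse venue", "Nature")
known = store.recall("model collapse venue")
missing = store.recall("author birthday")
```

The key design choice is the fallback: a miss returns an explicit “unknown” rather than letting the model free-associate.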
D. Fine-tuning / System Prompts
Reinforce factuality in base instructions (system prompt or fine-tuning data).
Example: “Always cite your sources. If none exist, state that the answer cannot be verified.”
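Baking that instruction into a system prompt means every turn inherits it. The role/content message structure below mirrors the common chat-API shape, but no specific provider is assumed.

```python
# Sketch: reinforcing factuality in the system prompt so the rule
# applies to the whole conversation, not just one user message.

SYSTEM_PROMPT = (
    "Always cite your sources. "
    "If none exist, state that the answer cannot be verified."
)

def make_messages(user_question: str) -> list[dict[str, str]]:
    """Prepend the factuality system prompt to every conversation."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

msgs = make_messages("When was the model collapse study published?")
```

Fine-tuning data can reinforce the same rule more durably, but a system prompt is the cheapest place to start.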
Seems that Way...Yuppers.
nonsensical pablum by the ninth consecutive query.>>> Like Kamala. The phrase “make a difference” is an example. It has no moral direction. Hitler made a difference. Product of Werner Erhard.
I stopped using Google when it started requiring me to solve puzzles to prove I am not a robot to perform a simple search. Not worth the hassle.
Genies can never be put back in the bottle, AI is here to stay and will continue to get better and stronger.
I tried a browser AI to compare guitars. Little did I know that an Ibanez AM93QM semi hollow guitar was actually a well respected amplifier that paired well with an Epiphone ES-335 guitar. That AI seems to be as trustworthy as my ex.
I have yet to open an AI platform for a question, to play or to examine.
Though, my understanding is if I ask a detailed question in a standard browser prompt the question is answered by some sort of AI entity. Aren’t search and AI responses becoming the same thing?