A year old, but becoming relevant.
1 posted on 10/12/2025 9:14:38 AM PDT by Openurmind


To: Openurmind

AI is for generating first drafts, that’s it.


2 posted on 10/12/2025 9:18:48 AM PDT by dfwgator ("I am Charlie Kirk!")

To: Openurmind
...when generative AI software relies solely on content produced by genAI, the responses begin to degrade...

Navel-gazing AI.

3 posted on 10/12/2025 9:20:00 AM PDT by E. Pluribus Unum (Je suis Charlie Kirk.)

To: Openurmind

I always assume AI is just telling me whatever.

I always tell it that it is wrong and to try again, regardless of the answer, and it will always say "I'm sorry" and give me another wrong answer.


4 posted on 10/12/2025 9:20:04 AM PDT by algore

To: Openurmind

I’ve learned to skip the AI summaries search engines provide because they are so routinely inaccurate and unhelpful. I wonder what’s to become of humanity if people actually take such garbage output seriously. What is the point of using it to provide even a first draft if you have to spend time checking every line of it and fixing half of it?


5 posted on 10/12/2025 9:21:26 AM PDT by EnderWiggin1970

To: Openurmind
The researchers dubbed this cyclical overdose on generative AI content model collapse—a steady decline in the learned responses of the AI that continually pollutes the training sets of repeating cycles until the output is a worthless distortion of reality.

Huh, I had the same thought years ago - that AI would recursively become more and more delusional as it fills the internet with inaccurate content that begets more inaccurate content. Like the old "telephone" game where you whisper a message down a line of people and it mutates to become unrecognizable by the end.
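
For a concrete feel of that feedback loop, here is a toy sketch (my own illustration, not the researchers' actual experiment): each "generation" is trained only on samples produced by the previous one, and the fitted distribution slowly drifts and narrows.

import random
import statistics

def train_generation(samples):
    # "Training" here is just estimating a mean and spread from the data.
    return statistics.fmean(samples), statistics.pstdev(samples)

def simulate(generations=10, n_samples=200, seed=1):
    random.seed(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the original, human-made "distribution"
    for gen in range(1, generations + 1):
        # Each generation sees only output sampled from the previous generation.
        samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
        mu, sigma = train_generation(samples)
        print(f"gen {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")

simulate()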

8 posted on 10/12/2025 9:23:59 AM PDT by EnderWiggin1970

To: Openurmind

GIGO


10 posted on 10/12/2025 9:26:09 AM PDT by Flatus I. Maximus (I never left the Democratic Party. It left me, and every time I look it keeps going further left.)

To: Openurmind

My take is that AI will descend into sounding and acting like your sociopathic democrat/communist in-law or relative.


12 posted on 10/12/2025 9:27:59 AM PDT by kvanbrunt2

To: Openurmind

I used ChatGPT here to generate tips on how to reduce the chance of hallucinations. Some good advice there, but of course, YMMV.

Minimizing the Chance of Hallucinations

“Hallucination” = the model generates confident but factually incorrect or unsupported statements.

A. Data and Prompt Engineering

- Be explicit in instructions:
  - Example: "If unsure, say you don't know."
  - Reinforces truthfulness over fluency.
- Provide structured context:
  - Use bullet points, JSON, or tables instead of narrative paragraphs.
  - Models are more accurate when the input format is deterministic.
- Limit the model's imagination scope:
  - Add constraints like "Answer only using the provided data" or "Do not make assumptions beyond the context." (A minimal prompt sketch follows this section.)
- Shorten context to essentials:
  - The more irrelevant information in the prompt, the more likely the model will anchor to the wrong part of it.
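
As a rough sketch of the "structured context plus explicit constraints" advice above, assuming a generic role/content message format rather than any particular vendor's API:

import json

def build_grounded_prompt(question, records):
    # Hypothetical helper: serialize structured records and pin the model to them.
    context = json.dumps(records, indent=2)
    system = ("Answer only using the provided data. Do not make assumptions "
              "beyond the context. If unsure, say you don't know.")
    user = f"Data:\n{context}\n\nQuestion: {question}"
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

# Example with made-up records:
messages = build_grounded_prompt(
    "Which item had the higher value in March?",
    [{"item": "A", "month": "March", "value": 120},
     {"item": "B", "month": "March", "value": 95}])
print(messages[1]["content"])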

B. System-Level Controls

- Retrieval-Augmented Generation (RAG):
  - Retrieve relevant documents (from a database or vector store) dynamically before generation.
  - Ensures grounding in verified data rather than parameter memory. (A rough retrieval sketch follows this section.)
- Post-Generation Verification:
  - Use a secondary LLM or rule-based validator to check claims (a "fact-checking pass").
  - Common in multi-agent or chain-of-thought systems.
- Confidence Scoring:
  - Use techniques like log-probabilities, entailment scoring, or cross-verification with another model to estimate certainty.
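
A rough sketch of the retrieval step in RAG, using a toy in-memory corpus and bag-of-words cosine similarity in place of a real vector store; the documents and scoring here are purely illustrative:

import math
from collections import Counter

# Toy corpus standing in for a document database or vector store.
DOCS = [
    "Model collapse: models trained on AI-generated output can degrade over generations.",
    "Retrieval-augmented generation grounds answers in documents fetched at query time.",
    "Log-probabilities can serve as a rough confidence signal for generated text.",
]

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=2):
    q = vectorize(query)
    return sorted(DOCS, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

# The retrieved passages are pasted into the prompt so the model answers from
# verified text instead of its parametric memory.
question = "What is model collapse?"
context = "\n".join(retrieve(question))
print(f"Answer only from the context below.\n\nContext:\n{context}\n\nQuestion: {question}")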

C. Model and Context Management

- Trim irrelevant history:
  - Don't keep entire conversation histories; keep only what's contextually relevant.
  - This prevents confusion or "blending" of old and new facts.
- Chunk and summarize:
  - Use summarization checkpoints so that the model "remembers" context in concise, verified summaries rather than raw text. (A sketch of this pattern follows this section.)
- External memory with grounding:
  - Store facts externally (e.g., database, vector index) instead of relying on the LLM's internal weights to recall truth.
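
A sketch of the "chunk and summarize" pattern: keep the last few turns verbatim and fold older turns into a running summary. The summarize() function below is a stub where a real summarization call would go.

KEEP_VERBATIM = 4  # how many recent turns to keep word-for-word

def summarize(turns):
    # Stub: in practice this would ask a model for a concise, verified summary.
    return "Summary of earlier discussion: " + " / ".join(t[:40] for t in turns)

class Conversation:
    def __init__(self):
        self.summary = ""
        self.recent = []

    def add_turn(self, text):
        self.recent.append(text)
        if len(self.recent) > KEEP_VERBATIM:
            overflow = self.recent[:-KEEP_VERBATIM]
            self.recent = self.recent[-KEEP_VERBATIM:]
            older = [self.summary] if self.summary else []
            self.summary = summarize(older + overflow)

    def context(self):
        # What actually gets sent to the model: short summary + recent turns.
        return ([self.summary] if self.summary else []) + self.recent

convo = Conversation()
for i in range(8):
    convo.add_turn(f"turn {i}: some user or assistant message")
print(convo.context())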

D. Fine-tuning / System Prompts

- Reinforce factuality in base instructions (system prompt or fine-tuning data).
  - Example: "Always cite your sources. If none exist, state that the answer cannot be verified." (A small system-prompt sketch follows below.)
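
A small sketch of wiring that instruction in as a fixed system prompt prepended to every request; the wording is only an example.

FACTUALITY_SYSTEM_PROMPT = (
    "Always cite your sources for factual claims. "
    "If none exist, state that the answer cannot be verified. "
    "Prefer saying you don't know over guessing."
)

def with_factuality(user_message):
    # Prepend the factuality instruction to every request.
    return [{"role": "system", "content": FACTUALITY_SYSTEM_PROMPT},
            {"role": "user", "content": user_message}]

print(with_factuality("Who invented the telephone?"))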


14 posted on 10/12/2025 9:30:40 AM PDT by dfwgator ("I am Charlie Kirk!")

To: Openurmind

I stopped using Google when it started requiring me to solve puzzles to prove I am not a robot to perform a simple search. Not worth the hassle.


17 posted on 10/12/2025 9:34:24 AM PDT by Dr. Sivana ("Whatsoever he shall say to you, do ye." (John 2:5))

To: Openurmind

The genie can never be put back in the bottle; AI is here to stay and will continue to get better and stronger.


18 posted on 10/12/2025 9:34:58 AM PDT by bigbob (We are all Charlie Kirk now)

To: Openurmind
Who would have thought that "a copy of a copy of a copy" would not be as good as the original? Surprise, surprise!

AI is a tool, and it's really good for some important things, but not good for everything under all circumstances.

21 posted on 10/12/2025 9:41:36 AM PDT by The Duke (Not without inciden)

To: Openurmind
The Turing test and all relevant follow-ons come to the same thing.

If you're convinced, then it's "intelligent." If you're not convinced, then this AI stuff is just programming, which always has restraints to keep it from going off the rails, as it has multiple times now.

"If we can fool you....." Has the ring of the Clintons and the Obamas.....

Remember "we're the ones we've been waiting for?"

Obama "We are the ones we have been waiting for"

28 posted on 10/12/2025 10:01:02 AM PDT by Worldtraveler once upon a time (Degrow government)

To: Openurmind

bkmk


30 posted on 10/12/2025 10:19:33 AM PDT by Raycpa

To: Openurmind

Everyone can enjoy laughing at AI for a bit longer. Yeah, I can easily spot AI-written drivel right now.

But I suspect there will come a time not too far in the future when one won’t be able to tell that AI generated the content. AI is only in its infancy right now.


31 posted on 10/12/2025 10:34:40 AM PDT by plain talk

To: Openurmind

AI can’t determine truth. Quite a great weakness.


37 posted on 10/12/2025 12:24:24 PM PDT by aimhigh (1 John 3:23 "And THIS is His commandment . . . . ")
