Posted on 02/08/2026 8:38:29 AM PST by BenLurkin
According to reports, an Australian tourism firm found itself embroiled in controversy earlier this year when it put out an article about an idyllic destination known as the Weldborough Hot Springs.
...
Keen to visit the hot springs but with no idea how to actually get there, tourists headed to the general area and then asked local businesses for directions. The trouble was, the hot springs didn't actually exist.
It turned out that the company had been using an AI to write some of its articles and the system had 'hallucinated' the entire thing.
The firm's owner explained that they had turned to AI to help compete with larger companies by providing new articles on a regular basis.
(Excerpt) Read more at unexplained-mysteries.com ...
What I like is, if I need to learn a new coding language/technology, I just have Claude Code spin me up a working demo application with clear coding examples, that already have the best practices documented.
Or even convert an existing application to another language/environment.
I think programming languages as we know them are going away, or will at least become the new “assembly language” that you don’t have to know.
The future will be in standardized natural-language specifications that will simply generate the code in whatever language/environment you specify, maybe even directly to machine language.
Well, AI is not intelligent.
“Willoughby. This stop is Willoughby.”
That’s a good episode...until he stepped off the train.
Absolutely! I just retired, but was lucky enough to work for years with a talented young man who had no formal schooling, but is the smartest and most down-to-earth, humble, and commonsensical guy I have ever known.
And that is saying a lot, because I have worked with some stellar people.
He is an autodidact, and he writes a lot of code. He and I have had a lot of conversations about AI, and he is all on board. I am of an older generation, and because I believe many things in this world have a good and evil duality in them, while it’s true I can see the good things that are possible in AI, I am also extremely concerned not only with the unspeakable evil it may be capable of, but with the unintended consequences of it.
He may have concerns, but one day he showed me how he could tell AI just what result he wanted in a given language, and depending on how he submitted the instructions, he could get a framework back in that language that he could tweak with his own elements and turn into a working package in a matter of days instead of a matter of weeks. It was very impressive.
“The firm’s owner explained that they had turned to AI to help compete with larger companies by providing new articles on a regular basis.”
If you cheap out and use free open source AI models, you’re going to get hallucinations.
They cheaped out. Then they were too lazy to fact check their articles. But “AI makes stuff up and inconveniences people” makes for a better headline.
In this case, anything in, garbage out. There was nothing wrong with the raw data; the process is defective. And defective at a rate that will go unchecked because people can’t oversee it as fast as it generates errors. By the time we understand what has been done, I’m afraid it will be nearly impossible to undo.
Perhaps this was done intentionally to get tourists to visit there.
I’ve been trying to point out the flaws in AI. These flaws are not going away, in my opinion.
If you cheap out and use free open source AI models, you’re going to get hallucinations.
Depends. There are open-source models built for specific domains that probably work even better than the frontier models for certain things.
And there are plenty of patterns that can mitigate hallucinations.
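One of the simplest such patterns is grounding: instead of publishing model output directly, each generated claim is checked against a trusted source text, and anything ungrounded gets flagged for a human to review. Here is a minimal, illustrative sketch of the idea; the function names and the crude keyword check are my own assumptions, not from any particular library, and a real system would use something far more robust than substring matching.

```python
# Sketch of a grounding check: split generated text into claims and
# flag any claim whose key words cannot be found in the source material.

def split_into_claims(generated_text: str) -> list[str]:
    """Naively treat each sentence as one claim."""
    return [s.strip() for s in generated_text.split(".") if s.strip()]

def is_grounded(claim: str, source: str) -> bool:
    """Crude check: every key word of the claim appears in the source."""
    keywords = [w for w in claim.lower().split() if len(w) > 3]
    return all(w in source.lower() for w in keywords)

def filter_ungrounded(generated_text: str, source: str) -> tuple[list[str], list[str]]:
    """Separate claims into grounded and flagged-for-review lists."""
    grounded, flagged = [], []
    for claim in split_into_claims(generated_text):
        (grounded if is_grounded(claim, source) else flagged).append(claim)
    return grounded, flagged

# Hypothetical example data, loosely inspired by the article above.
source = "Tasmania has several documented hot springs, including Hastings Cave thermal pool."
draft = "Hastings Cave has a thermal pool. Weldborough has idyllic hot springs."
grounded, flagged = filter_ungrounded(draft, source)
# The invented Weldborough claim lands in the flagged list for human review.
```

Had the tourism firm run even a cheap check like this against its own source material, the invented hot springs would have been caught before publication.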
I expect fairly soon they will find a fix for “content rot”.
We are still in the early stages of AI.
I predict that in 10 years there will be an LLM (or whatever replaces it) on a chip in your laptop that will be more powerful than the most powerful LLM model today.
I guess it could have worked if they were wearing a VR headset...LOL
Old School statistics says it NEVER gives you an answer. It might give you another question.
AI will always give an answer. But is it a correct one?
Capt. Renault: What on Earth brought you to Casablanca?
Rick Blaine: My health. I came to Casablanca for the waters.
Capt. Renault: The waters? What waters? We're in the desert!
Rick Blaine: Blame ChatGPT!
Regards,
Just wait.
The “Conspiracy Theorist” AI algorithms will now propagate the rumor that “they” are hiding the hot springs from the public.
The hot springs will eventually be revealed next to an alien spacecraft and the perfect barbecue sauce recipe.
This means that no one at the tourism firm was reading, editing or fact checking the slop the AI was churning out.
The firm should be liable for damages if anyone sues for the cost of a trip based on its misinformation.
I know lots of programming languages, but I am an operating system and computer systems design (electronics) guy. I can teach an eight-year-old (if they can read) how to program.
The main problem with AI is it cannot determine what is “garbage in” before it makes garbage out.
AI is not intuitive, it is algorithmic.
So AI is good for doing a lot of the boring stuff. In the end it is not a replacement for mature human intuition and instincts, but it sure is a great tool for us to use where appropriate to save us from the boring stuff.
You still need Human-In-The-Loop, in any agentic process.
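The usual shape of that human-in-the-loop gate is an approval queue: low-risk actions run automatically, while anything the agent proposes above a risk threshold waits for a person to sign off. A minimal sketch, with all names and the risk-scoring convention being my own illustrative assumptions rather than any real framework's API:

```python
# Sketch of a human-in-the-loop gate for an agentic pipeline:
# auto-run low-risk actions, park high-risk ones for human review.

from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    description: str
    risk: float  # 0.0 (harmless) .. 1.0 (dangerous), scored upstream

@dataclass
class HumanGate:
    threshold: float = 0.5
    pending: list[ProposedAction] = field(default_factory=list)

    def submit(self, action: ProposedAction) -> str:
        """Auto-run low-risk actions; queue high-risk ones for approval."""
        if action.risk < self.threshold:
            return f"executed: {action.description}"
        self.pending.append(action)
        return f"awaiting human approval: {action.description}"

gate = HumanGate()
gate.submit(ProposedAction("format a draft article", 0.1))       # runs immediately
gate.submit(ProposedAction("publish article to live site", 0.9)) # waits for a human
```

Publishing to a live site is exactly the kind of action that should never clear the gate without a person looking at it first, which is the step the tourism firm skipped.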
But the one thing I use it for religiously is generating technical documentation to explain exactly what the application does. That used to take me hours.
How do we know this article is not falsely AI generated?
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.