Posted on 04/26/2025 5:45:57 PM PDT by Lazamataz
The big names in artificial intelligence—leaders at OpenAI, Anthropic, Google and others—still confidently predict that AI attaining human-level smarts is right around the corner. But the naysayers are growing in number and volume. AI, they say, just doesn’t think like us.
The work of these researchers suggests there’s something fundamentally limiting about the underlying architecture of today’s AI models. Today’s AIs are able to simulate intelligence by, in essence, learning an enormous number of rules of thumb, which they selectively apply to all the information they encounter.
This contrasts with the many ways that humans and even animals are able to reason about the world, and predict the future. We biological beings build “world models” of how things work, which include cause and effect.
Many AI engineers claim that their models, too, have built such world models inside their vast webs of artificial neurons, as evidenced by their ability to write fluent prose that indicates apparent reasoning. Recent advances in so-called “reasoning models” have further convinced some observers that ChatGPT and others have already reached human-level ability, known in the industry as AGI, for artificial general intelligence.
For much of their existence, ChatGPT and its rivals were mysterious black boxes.
There was no visibility into how they produced the results they did, because they were trained rather than programmed, and the vast number of parameters that comprised their artificial “brains” encoded information and logic in ways that were inscrutable to their creators. But researchers are developing new tools that allow them to look inside these models. The results leave many questioning the conclusion that they are anywhere close to AGI.
(Excerpt) Read more at msn.com ...
And as seen with https://poe.com/chat/ when I asked my test questions: it culminated in "I'm unable to engage with that question" and "I'm here to provide information and support, but I can't engage in discussions that promote harm or discrimination against any group" in response to my analogy. So the programming is key. I have found https://www.perplexity.ai/ to be far better in this regard.
As an aside, I have OpenAI 4o set up for voice communication on my phone (with a girl’s voice and an English accent). During my annual exam, I demonstrated it to my doctor, who’s very interested in technology. When she suggested I cut back on eggs, I posed the issue to my AI app. OpenAI 4o responded that, given that I follow a carnivore diet, the suggestion might not be appropriate. This sparked a detailed conversation between my doctor and my AI, which ultimately led my doc to conclude that she needed to reverse her recommendation and do more research.
And when your doctor is an AI android.
Add progressive BS into the data set and the AI is still stupid. Ask any of them where Covid came from. They all say it jumped species. Just dumb.
As a doctor said recently, it isn’t that AI is going to replace doctors (anytime soon, anyway), it’s that doctors who use AI are going to replace those who don’t.
The lab leak theory has gained traction in recent years—public belief is up, several U.S. agencies (like the FBI and Department of Energy) consider it plausible if not probable, and the current administration is leaning into it. No theory has been conclusively proven. The origin remains unresolved.
How AI (Artificial Intelligence) is Ruining the Electric Grid
https://www.youtube.com/watch?v=3__HO-akNC8