Posted on 04/26/2025 5:45:57 PM PDT by Lazamataz
The big names in artificial intelligence—leaders at OpenAI, Anthropic, Google and others—still confidently predict that AI attaining human-level smarts is right around the corner. But the naysayers are growing in number and volume. AI, they say, just doesn’t think like us.
The work of these researchers suggests there’s something fundamentally limiting about the underlying architecture of today’s AI models. Today’s AIs are able to simulate intelligence by, in essence, learning an enormous number of rules of thumb, which they selectively apply to all the information they encounter.
This contrasts with the many ways that humans and even animals are able to reason about the world, and predict the future. We biological beings build “world models” of how things work, which include cause and effect.
Many AI engineers claim that their models, too, have built such world models inside their vast webs of artificial neurons, as evidenced by their ability to write fluent prose that indicates apparent reasoning. Recent advances in so-called “reasoning models” have further convinced some observers that ChatGPT and others have already reached human-level ability, known in the industry as AGI, for artificial general intelligence.
For much of their existence, ChatGPT and its rivals were mysterious black boxes.
There was no visibility into how they produced the results they did, because they were trained rather than programmed, and the vast number of parameters that comprised their artificial “brains” encoded information and logic in ways that were inscrutable to their creators. But researchers are developing new tools that allow them to look inside these models. The results leave many questioning the conclusion that they are anywhere close to AGI.
(Excerpt) Read more at msn.com ...
There was a thread about that exact topic last week.
Those are not fully functional LLMs. When they stand those up, your calls will improve.
AI will accelerate the end of humanity.
What does LLM stand for?
HUMANITY will accelerate the end of humanity.
“Hallucinations of objects or functions...”
Hey that sounds just like a Covid jab adverse reaction.
LLM = Large Language Model.
The problem seems to state simple interest (a 4% interest rate per year), while the AI response appears to solve for continuously compounded interest.
“AI will accelerate the end of humanity.”
“HUMANITY will accelerate the end of humanity.”
++++++++++++++++++++++++
Parse my statement. Humanity will end itself. AI will accelerate the process.
BTW all the sci-fi writers envision a Terminator-style set of combat units.
Why.
A smart AI would simply engineer a 100%-fatal, extremely contagious virus.
About 12 seconds on my HP 12C.
“The problem seems to state simple interest (a 4% interest rate per year), while the AI response appears to solve for continuously compounded interest.”
DAILY compounded interest.
The AI clearly states upfront that it made that assumption based on a daily deposit.
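For readers following the interest dispute: simple, daily compounded, and continuously compounded interest give different answers to the same question. A minimal sketch with hypothetical figures (a $1,000 principal at 4% for one year; none of these numbers come from the original problem):

```python
import math

# Hypothetical figures for illustration: $1,000 principal,
# 4% annual rate, held for one year.
principal = 1_000.00
rate = 0.04
years = 1

# Simple interest: interest is paid on the principal only.
simple = principal * (1 + rate * years)

# Daily compounded interest: each day's interest itself earns interest.
daily = principal * (1 + rate / 365) ** (365 * years)

# Continuously compounded interest: the limit of compounding ever more often.
continuous = principal * math.exp(rate * years)

print(f"simple:     ${simple:.2f}")      # $1040.00
print(f"daily:      ${daily:.2f}")       # $1040.81
print(f"continuous: ${continuous:.2f}")  # $1040.81
```

At this scale, daily and continuous compounding differ by fractions of a cent, but both exceed simple interest, which is why an answer solved under the wrong convention looks slightly off.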
“Very good ... A useful tool indeed ... How long did that take?”
About 2 seconds.
So are search engines. I threw away all my handbooks, encyclopedias, and notebooks.
I can find answers to any questions I have at DuckDuckGo.
AI is nothing more than a search engine on steroids. It needs lots of servers, and those need lots of electric power. Green energy does not produce much reliable power.
… or it could prompt some uncomfortable questions about UI, unartificial intelligence.
What is the probability of a word (really a token, which is akin to a syllable) following another word, and so on and so on.
This isn’t how we think.
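The next-token mechanism described above can be sketched with a toy bigram table (every count below is invented for illustration; a real LLM replaces this lookup table with a neural network over a vocabulary of tens of thousands of tokens):

```python
import random

# Toy bigram "language model": invented counts of which token
# was observed following which token.
bigram_counts = {
    "the": {"cat": 3, "dog": 2, "end": 1},
    "cat": {"sat": 4, "ran": 1},
    "dog": {"ran": 3, "sat": 2},
}

def next_token_probs(token):
    """Turn raw follow-counts into a probability distribution."""
    counts = bigram_counts[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def sample_next(token, rng=random):
    """Sample the next token in proportion to its probability."""
    probs = next_token_probs(token)
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(start, max_len=5, rng=random):
    """Extend a sequence one sampled token at a time."""
    out = [start]
    while len(out) < max_len and out[-1] in bigram_counts:
        out.append(sample_next(out[-1], rng))
    return out

print(next_token_probs("the"))
print(generate("the"))
```

The model never consults a fact or a cause; it only asks "given this token, what usually comes next?" — which is the poster's point about why fluent output is not the same as a world model.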
I just heard that ‘ALIENS’ may communicate first with our AI systems.
Our Dearly Departed Pope said he would
Baptise our otherworldly visitors...
If Asked.
Earthlings must draw our little green friends into our Pure Hearts was also mentioned.
We should be very careful, as it may turn into a Fancy Ouija Board.
Did you feed it one plain English instruction and it did everything else for you?
In the ’80s, as I understand it, they used AI to look at satellite photos from Europe to predict where Russian armor would be hiding in the forests. When Desert Storm rolled around, they tried to use it to see where Iraqi armor was hiding in the desert. It failed miserably.
The AI was counting the leaves in the European satellite photos and deducing where armor was hidden based on the number of leaves. That doesn’t work in the desert.
AI has a lot of potential, but those who believe it can already “think” are mistaken, IMHO. AI is an encyclopedia that can open and read itself. It cannot dream new adventures.
I had a broken tooth extracted last Monday. I still have some residual pain. I inquired at DuckDuckGo and found out it takes about a week to heal, but complete healing could take a month. See, I did not need to call the dentist, who would just ask me to make a follow-up appointment. A few clicks with the mouse is much easier.
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.