Posted on 12/21/2024 6:39:53 AM PST by BenLurkin
Almost all leading large language models, or “chatbots,” show signs of mild cognitive impairment when tested using assessments commonly used to detect early dementia, according to a study published in the Christmas issue of The BMJ.
The study also found that older versions of these chatbots, much like aging human patients, performed worse on the tests. The authors suggest that these findings “challenge the assumption that artificial intelligence will soon replace human doctors.”
The instructions given to the LLMs for each task were the same as those given to human patients. Scoring followed official guidelines and was evaluated by a practicing neurologist.
ChatGPT 4o achieved the highest score on the MoCA test (26 out of 30), followed by ChatGPT 4 and Claude (25 out of 30), with Gemini 1.0 scoring lowest (16 out of 30).
Challenges in Visual and Executive Functions
All chatbots showed poor performance in visuospatial skills and executive tasks, such as the trail-making task (connecting encircled numbers and letters in ascending order) and the clock drawing test (drawing a clock face showing a specific time). Gemini models failed at the delayed recall task (remembering a five-word sequence).
Most other tasks, including naming, attention, language, and abstraction, were performed well by all chatbots.
(Excerpt) Read more at scitechdaily.com ...
“Yup—it is because AI has been programmed by folks who believe in equity—so AI must learn to be as dumb as the dumbest humans.”
We know most were programmed by leftists and democrats. So this stands to reason. 🤔
If an LLM is initially trained on high quality data sets and input - but later programmed to absorb any/all open source data available on the internet (like, say, a Saturday night blog post by “NextRush”), then it’s bound to show severe cognitive decline.
Clapping wildly. Great article and thread.
My suspicion is that the LLMs lack the capability for processing conflicting information long term, particularly as regards set intersections.
Should be interesting. I’m thinking nested sets of premises are where that would go. That way liberills could lock in comfortable stupidity.
However, if there's some way to distill down the context window, that would be helpful (perhaps a "helper AI"; AI is pretty good at summarizing things).
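The "helper AI" idea above can be sketched roughly like this. Everything here is hypothetical illustration, not any real product's API: the `summarize` function is a crude stand-in (it just keeps the first sentence of each turn), where a real system would call a second, cheaper model, and the word-count budget stands in for a real token limit.

```python
# Toy sketch: when the conversation history grows past a budget, fold the
# oldest turns into a short summary so the main model's context stays small.
# Note this is lossy by design -- detail in old turns is discarded.

def summarize(turns: list[str]) -> str:
    """Stand-in summarizer: keep only the first sentence of each turn.
    A real system would call a smaller "helper" model here."""
    return " ".join(t.split(".")[0] + "." for t in turns)

def compact_history(history: list[str], budget: int = 50) -> list[str]:
    """Fold the oldest turns into a summary while the word count exceeds
    the budget, always keeping at least the two most recent entries."""
    while sum(len(t.split()) for t in history) > budget and len(history) > 2:
        # Merge the two oldest turns into one compressed entry.
        history = [summarize(history[:2])] + history[2:]
    return history
```

With a small budget, older turns collapse into a running summary while the newest turn survives verbatim, which is the behavior you'd want from a context-distilling helper.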
Lately I've had some interesting conversations with Claude AI. Topics have included "Could the existence of multiverses replace theories regarding dark matter?", "Could the creation of an AI-centric language result in better AI performance?" and "Could the fusion of traditional object oriented programming with modern-day AI solve the problem of AI dementia?". I found those conversations to be illuminating and worth having.
Also, I've been having Claude critique a book that I have in progress, and have found its suggestions to be most helpful. Due to the size of Claude's context window, it is able to ingest and provide feedback on the entire text of the book (in this case, a dystopian sci-fi novel).
GIGO.
Yes, they are less than worthless; they are a complete waste of time. They are not trusted or authorized to be helpful in any way. Companies who use them are giving the middle finger to their customers. They are actually worse than getting connected to customer support in India or Pakistan.
Not sure how many times I’ve tried to tell people AI is low IQ.
They get confused by the stupid stuff that people ask them to think about.
Hell, I get confused by stupid people every day but my built-in large language model is programmed to laugh at and ridicule the stupid people around me.
This is what Douglas Adams foresaw in The Hitchhiker’s Guide to the Galaxy with Marvin the Paranoid Android. He could not handle all the stupid humans, who make up more than half of all people.
I'm with those who believe that much of "dreaming" is our minds re-arranging and discarding information while checking if any of it is applicable to new patterns that may prove useful in the future.
The real breakthrough in Asimov's "I, Robot" stories was when a positronic brain achieved something approaching that, finally elevating itself beyond a mere artificial simulation.
Good posts...keep it up and Merry Christmas!
I create command stacks for ChatGPT. Each stack always ends with an instruction to review the output, check it against every command given, and keep rechecking until all of them have been followed.
Otherwise, with a long, detailed command stack, some commands will be dropped or missed.
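The check-and-revise loop described above can be sketched as follows. This is a hypothetical illustration, not the poster's actual setup: `ask_model` is a stub standing in for a real chat API call, and its toy behavior (complying with one more command per round) simply makes the example self-contained and terminating.

```python
# Minimal sketch of a self-checking command stack: issue the commands, then
# repeatedly compare the output against the command list and re-prompt the
# model with whatever it missed, until every command is satisfied.

COMMANDS = ["use bullet points", "cite a source", "end with a summary"]

def ask_model(prompt: str, previous: str = "") -> str:
    """Stub for a chat-completion API call. Toy behavior: satisfy one
    more missing command each time it is re-prompted."""
    for cmd in COMMANDS:
        if cmd not in previous:
            return (previous + " " + cmd).strip()
    return previous

def unmet_commands(output: str) -> list[str]:
    """Naive check for which commands the output has not followed."""
    return [c for c in COMMANDS if c not in output]

def run_with_self_check(max_rounds: int = 10) -> str:
    output = ask_model("Follow these commands: " + "; ".join(COMMANDS))
    for _ in range(max_rounds):
        missing = unmet_commands(output)
        if not missing:
            break  # every command followed; stop re-checking
        # Re-prompt with exactly the commands that were dropped.
        output = ask_model("You missed: " + "; ".join(missing), output)
    return output
```

The `max_rounds` cap matters in practice: without it, a model that never satisfies some command would loop forever.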
Interesting
When you put a question in my area of expertise into Google search and the AI answer pops up, it’s wrong (like comically, absurdly wrong) 80% of the time.
Now Google isn’t my go-to in the first place when I need to retrieve info I don’t know, but still…