Posted on 12/21/2024 6:39:53 AM PST by BenLurkin
Almost all leading large language models, or “chatbots,” show signs of mild cognitive impairment when tested using assessments commonly used to detect early dementia, according to a study published in the Christmas issue of The BMJ.
The study also found that older versions of these chatbots, much like aging human patients, performed worse on the tests. The authors suggest that these findings “challenge the assumption that artificial intelligence will soon replace human doctors.”
The instructions given to the LLMs for each task were the same as those given to human patients. Scoring followed official guidelines and was evaluated by a practicing neurologist.
ChatGPT 4o achieved the highest score on the MoCA test (26 out of 30), followed by ChatGPT 4 and Claude (25 out of 30), with Gemini 1.0 scoring lowest (16 out of 30).
Challenges in Visual and Executive Functions
All chatbots showed poor performance in visuospatial skills and executive tasks, such as the trail-making task (connecting encircled numbers and letters in ascending order) and the clock drawing test (drawing a clock face showing a specific time). Gemini models failed at the delayed recall task (remembering a five-word sequence).
Most other tasks, including naming, attention, language, and abstraction, were performed well by all chatbots.
(Excerpt) Read more at scitechdaily.com ...
“Most other tasks, including naming, attention, language, and abstraction, were performed well by all chatbots.”
Problem solving, not so much.
If AI learns from us, & we are aging & declining mentally, it stands to reason that the same would happen to AI. ;-)
Yup—it is because AI has been programmed by folks who believe in equity—so AI must learn to be as dumb as the dumbest humans.
Seems like there’s a difference between learning and discernment... well, duh.
They need to develop AI that can forget useless information.
“They need to develop AI that can forget useless information.”
People may need to develop the ability to forget useless information...but who decides what information is useless and how do they do it?
I use Microsoft’s Copilot often, but only as a super power search engine, to provide me with resources that MIGHT support Copilot’s responses to my questions. Some of the website links that Copilot suggests are totally irrelevant to my questions, but it does a goood job of chcking my spelllling and also my grammmer too...
I don’t think I have EVER connected to a customer service chat line and had the chatbot actually help; normally you have to go through the chatbot just to get to a human who can help you.
There are lots of psychological studies on how the brain both memorizes and forgets information. They just need to incorporate an approximate method to purge the datasets.
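As a purely illustrative aside (nothing like this appears in the article or the comment above): one toy version of a “forgetting” policy is to score each stored item by how recently and how often it has been used, then purge whatever falls below a cutoff. Every name below (ForgettingStore, half_life_seconds, cutoff) is invented for this sketch; it is loosely inspired by memory-decay models, not any real AI system’s method.

import math
import time


# Toy sketch only: a "forgetting" store that purges entries whose retention
# score (recency decay weighted by usage) drops below a cutoff. All names
# here are hypothetical and invented for this example.
class ForgettingStore:
    def __init__(self, half_life_seconds=3600.0, cutoff=0.05):
        self.half_life = half_life_seconds
        self.cutoff = cutoff
        self._items = {}  # key -> (value, last_access_time, access_count)

    def put(self, key, value):
        self._items[key] = (value, time.time(), 1)

    def get(self, key):
        value, _, count = self._items[key]
        self._items[key] = (value, time.time(), count + 1)
        return value

    def _score(self, last_access, count, now):
        # Exponential decay of recency, boosted by how often the item was used.
        age = now - last_access
        decay = math.exp(-math.log(2) * age / self.half_life)
        return decay * math.log1p(count)

    def purge(self):
        # Drop everything whose retention score has fallen below the cutoff.
        now = time.time()
        stale = [key for key, (_, last, count) in self._items.items()
                 if self._score(last, count, now) < self.cutoff]
        for key in stale:
            del self._items[key]
        return stale

With an occasional call to purge(), old and rarely touched entries simply disappear; the hard part, as another comment above notes, is deciding who gets to define what counts as useless.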
“They just need to incorporate an approximate method to purge the datasets.”
“and who is they?”
The AI developers, you know, the people that invented AI. I’m still trying to figure out if your question was snark or just a brain fart.
If they “knew” as many of the mass-time-dimension facts as Albert did, could they ever deduce the concept that light shows the special, inescapable relationship between mass and energy?
“The study also found that older versions of these chatbots, much like aging human patients, performed worse on the tests.”
This is what passes for journalism now. The quote implies that any given chatbot loses ability with time (“much like aging human patients”) instead of simply stating that newer ones are more advanced and score higher.
Maybe the problem is knowledge without intellect, which has always been attributed solely to God, angels, and humans. I never thought that inanimate human creations could truly reason, no matter how many processors or how fast the clock speed.
Interestingly, I watched a documentary on YouTube last week comparing NASA and SpaceX. I figured it must have been AI-generated when it referred to a NASA official receiving an email in 1966.
Programmed by fellows of compassion and vision
We’ll be free when their work is done
We’ll be eternally free, yes, and eternally young
What a beautiful world it will be
What a glorious time to be free
Well. Isn't that interesting.
Once their new god is failing, they admit what I have been saying all along.
It is not AI.
I am shocked.
Yeah, I don’t trust AI either. Like all new technologies, it will first be used to kill and abuse the populations of earth before it gets democratized by widespread use. When it comes to science I’m excited for AI, but when it comes to its impact on society and our civilization I am a skeptic.
Maybe the neurologists giving the tests are the ones suffering from a decline by not discussing that.
People already do.
And we spend a great deal of brain power on filtering out information.