1 posted on
12/21/2024 6:39:53 AM PST by
BenLurkin
To: BenLurkin
Most other tasks, including naming, attention, language, and abstraction, were performed well by all chatbots.
think about that.......................
problem solving, not so much.
2 posted on
12/21/2024 6:45:59 AM PST by
PeterPrinciple
(Thinking Caps are no longer being issued, but there must be a warehouse full of them somewhere)
To: BenLurkin
If AI learns from us, & we are aging & declining mentally, it stands to reason that the same would happen to AI. ;-)
3 posted on
12/21/2024 6:46:10 AM PST by
Twotone
( What's the difference between a politician & a flying pig? The letter "F.")
To: BenLurkin
Seems like there’s a difference between learning and discernment... well, duh.
5 posted on
12/21/2024 6:48:20 AM PST by
HombreSecreto
(The life of a repo man is always intense)
To: BenLurkin
They need to develop AI that can forget useless information.
To: BenLurkin
I use Microsoft’s Copilot often, but only as a super power search engine, to provide me with resources that MIGHT support Copilot’s responses to my questions. Some of the website links that Copilot suggests are totally irrelevant to my questions, but it does a goood job of chcking my spelllling and also my grammmer too...
To: BenLurkin
I don’t think I have EVER connected to a customer service chat line and had the chatbot actually help me with something, but normally you have to go through the chatbot to get to a human who can help you.
9 posted on
12/21/2024 7:03:42 AM PST by
libertylover
(Our biggest problem, by far, is that almost all of big media is AGENDA-DRIVEN, not-truth driven.)
To: BenLurkin
If they “knew” as much about mass, time, and dimension as Albert did, could they ever deduce that light reveals the special, inescapable relationship between mass and energy?
13 posted on
12/21/2024 7:14:59 AM PST by
imardmd1
(To learn is to live; the joy of living: to teach. Fiat Lux!)
To: BenLurkin
“The study also found that older versions of these chatbots, much like aging human patients, performed worse on the tests.”
This is what passes for journalism now. The quote implies that any given chatbot loses ability with time (“much like aging human patients”) instead of simply stating that newer ones are more advanced and score higher.
14 posted on
12/21/2024 7:18:10 AM PST by
coloradan
(They're not the mainstream media, they're the gaslight media. It's what they do. )
To: BenLurkin
Maybe the problem is knowledge without intellect, which has always been attributed solely to God, angels, and humans. I never thought that inanimate human creations could truly reason, no matter how many processors or how fast the clock speed.
Interestingly, I watched a documentary on YouTube last week comparing NASA and SpaceX. I figured it must have been AI generated when it referred to a NASA official receiving an email in 1966.
15 posted on
12/21/2024 7:18:45 AM PST by
I-ambush
(From the brightest star comes the blackest hole. You had so much to offer, why didya offer your soul?)
To: BenLurkin
“Almost all leading large language models, or ‘chatbots’...” Well. Isn’t that interesting.
Once their new god is failing they admit what I have been saying all along.
It is not AI.
I am shocked.
17 posted on
12/21/2024 7:24:13 AM PST by
Harmless Teddy Bear
( Not my circus. Not my monkeys. But I can pick out the clowns at 100 yards.)
To: BenLurkin
Are the older versions performing worse than they did last year? That would indicate cognitive decline. Or are the older models performing worse than the newer models? That would indicate cognitive improvement, like comparing results from a five-year-old versus a six-year-old. The six-year-olds do better, but that is not evidence of the five-year-olds suffering from a decline.
Maybe the neurologists giving the tests are the ones suffering from a decline by not discussing that.
19 posted on
12/21/2024 7:26:23 AM PST by
KarlInOhio
(Now unburdened by the Biden/Harris administration that has been.)
To: BenLurkin
If an LLM is initially trained on high quality data sets and input - but later programmed to absorb any/all open source data available on the internet (like, say, a Saturday night blog post by “NextRush”), then it’s bound to show severe cognitive decline.
22 posted on
12/21/2024 7:29:39 AM PST by
LittleBillyInfidel
(This tagline has been formatted to fit the screen. Some content has been edited.)
To: BenLurkin
Clapping wildly. Great article and thread.
23 posted on
12/21/2024 7:33:18 AM PST by
mairdie
(GreenwichVillage ArmyPoet: https://www.iment.com/maida/family/father/oldsoldiersdrums/frontcover.htm)
To: Lazamataz
My suspicion is that the LLMs lack the capability for processing conflicting information long term, particularly as regards set intersections.
Should be interesting. I’m thinking nested sets of premises is where that would go. That way liberills could lock in comfortable stupidity.
24 posted on
12/21/2024 7:46:02 AM PST by
Carry_Okie
(The tree of liberty needs a rope.)
To: BenLurkin
I have an AI program still in the box that I purchased on 5 1/4" floppies for one of the PC XT/AT compatible clones which I put together for myself and friends and family many years ago. I need to try and find the program and one of the computers. (I have a large airplane hangar stuffed with ancient computer hardware.) The program is likely suffering from severe dementia by now. It came with a fairly thick manual and promotional info that might be amusing these days.
26 posted on
12/21/2024 8:00:47 AM PST by
fireman15
(Irritating people are the grit from which we fashion our pearl. I provide the grit. You're Welcome.)
To: BenLurkin
27 posted on
12/21/2024 8:02:16 AM PST by
mewzilla
(Swing away, Mr. President, swing away!)
To: BenLurkin
Not sure how many times I’ve tried to tell people AI is low IQ.
29 posted on
12/21/2024 8:25:30 AM PST by
reasonisfaith
(What are the personal implications if the Resurrection of Christ is a true event in history?)
To: BenLurkin
They get confused by the stupid stuff that people ask them to think about.
30 posted on
12/21/2024 8:31:44 AM PST by
webheart
(S)
To: BenLurkin
I create commands for ChatGPT. The commands always include instructions to review the output, check it against the commands given to make sure every command has been followed, and keep checking the output against the commands until each one has been satisfied.
Otherwise, if you have a long, detailed command stack, some commands will be dropped or missed.
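That self-checking pattern can be sketched in code. This is a minimal, hypothetical illustration of wrapping a command stack with a verification instruction; the function name, wording, and example commands are my own assumptions, not the poster's actual prompts.

```python
def build_self_checking_prompt(commands):
    """Wrap a list of commands with an instruction telling the model to
    verify its draft output against every command before finishing."""
    # Number each command so the model can reference them during review.
    numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(commands, 1))
    check_step = (
        "Before giving your final answer, review your draft output against "
        "each numbered command above. If any command was dropped or only "
        "partly followed, revise the draft and check again until every "
        "command has been followed."
    )
    return f"Follow every command below:\n{numbered}\n\n{check_step}"

prompt = build_self_checking_prompt([
    "Summarize the article in three bullet points.",
    "Use plain language, no jargon.",
    "End with one open question.",
])
print(prompt)
```

The point of the wrapper is exactly what the post describes: with a long command stack, an explicit review-and-recheck step makes dropped commands less likely.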
34 posted on
12/21/2024 1:11:19 PM PST by
ckilmer
To: BenLurkin
When you put a question in my area of expertise into Google search and the AI answer pops up, it’s wrong (like comically, absurdly wrong) 80% of the time.
Now Google isn’t my go-to in the first place when I need to retrieve info I don’t know, but still…
36 posted on
12/21/2024 2:39:46 PM PST by
Jim Noble
(Assez de mensonges et de phrases)