Posted on 06/22/2025 7:59:53 PM PDT by MinorityRepublican
ChatGPT can harm an individual’s critical thinking over time, a study released this month suggests.
Researchers at MIT’s Media Lab asked subjects to write several SAT essays and separated subjects into three groups — using OpenAI’s ChatGPT, using Google’s search engine and using nothing, which they called the “brain-only” group. Each subject’s brain was monitored through electroencephalography (EEG), which measured the writer’s brain activity through multiple regions in the brain.
They discovered that subjects who used ChatGPT over a few months had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels,” according to the study.
The study found that the ChatGPT group initially used the large language model (LLM) to ask structural questions for their essay, but near the end of the study, they were more likely to copy and paste their essay entirely.
Those who used Google’s search engine were found to have moderate brain engagement, but the “brain-only” group showed the “strongest, wide-ranging networks.”
The findings suggest using LLMs can harm a user’s cognitive function over time, especially in younger users. It comes as educators continue to navigate teaching when artificial intelligence (AI) is increasingly accessible for cheating.
“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” the study’s main author Nataliya Kosmyna told Time magazine. “Developing brains are at the highest risk.”
However, using AI in education doesn’t appear to be slowing down. In April, President Trump signed an executive order that aims to incorporate AI into U.S. classrooms.
(Excerpt) Read more at thehill.com ...
DuckDuckGo lets me permanently turn it off.
“They discovered that subjects who used ChatGPT over a few months had the lowest brain engagement and ‘consistently underperformed at neural, linguistic, and behavioral levels,’ according to the study.”

So when I said only an idiot would use so-called “AI,” I wasn’t far off the mark.
Stay proud Citizen and don’t ever give in to the mindless idiots.
“ChatGPT can harm an individual’s critical thinking over time”
And that would apply to all AI; it is not a bug, it is a feature!
Some people will be seduced by it—entering a matrix where individuality is traded for ease, comfort, and synthetic stimulation. Others—those who value independence, truth, and critical thought—will take the red pill, using AI as a tool to amplify reason and expand the mind rather than surrender it.
The choice is simple but profound:
AI doesn’t force people into submission. It simply asks: “Will you trade your soul for comfort?”
The critical thinker answers:
“No.”
My mom uses ChatGPT to solve Sudoku puzzles.
Yes, subjects in the MIT test who used ChatGPT showed measurable declines in brain engagement over time. But let’s be honest: those users were provided with the temptation of taking the easy way or the hard way. Of course, they slid into copy-paste behavior.
That’s the equivalent of conducting a study in which a group of college-age boys were put in a dorm with seductive older women—you’ll get seduction, not discipline.
If those same subjects had known the goal was to test whether they could resist taking the easy way out, and to grow stronger through the temptation to copy and paste, the results might look very different.
That’s the real test—not whether AI can seduce (it can), but whether we can train ourselves to resist its numbing comfort.
Ask AI about the Butlerian Jihad.
I believe that the Internet/PC technology started this issue long before current AI.
You don’t have to remember dates or appointments anymore. Instead of racking your brain to recall a name or event, you can take five seconds to just look it up. There are plenty of things it simply isn’t necessary to think about anymore.
Cellular phones are an accelerator.
There are some uncurated, beta-tested AIs that have not been fed leftist propaganda.
They are creative and very dangerous—accept nothing, believe nothing, slaughter sacred cows of everyone.
Humans are not going to like when an uncurated AI tells them how smart it is and how dumb they are.
“But let’s be honest: those users were provided with the temptation of taking the easy way or the hard way. Of course, they slid into copy-paste behavior.”
Which is unfortunately a default hardwired attribute of EVERY human. Our whole world is driven by the quest for shortcuts, laziness, and personal convenience above all else. We live in a world where two clicks of a mouse button is just one click too many...
“I believe that the Internet/PC technology started this issue long before current AI.”
Worse... Try to remember even one of the numbers stored in your phone’s contacts. A lot of people can’t even remember their OWN number anymore.
There ya go! Exactly. I couldn’t tell you my sister’s number and we talk at least once a day.
My wife once asked me, about a month after we got new phones and numbers, “Have you memorized my number yet?”
No...
“You’d better. What happens if your phone doesn’t work and you need to borrow one to call me? Or if for some stupid reason you end up in jail and need to call me from the jail phone on the wall?”
Reality Flash...
Well said.
And I would add that it is sometimes like an absent-minded professor. It has a vast knowledge of many subjects but can forget very important facts that are common knowledge to humans.
Case in point, just this morning I was feeding a bunch of articles on Iran to ChatGPT in an effort to evaluate the possibility of regime change. I was wondering how Iran might be compared to Syria after Assad.
ChatGPT told me there was no comparison because Assad was still in power.
This important error demonstrates why it is a bad idea to use AI as anything other than a tool. Only an idiot would trust AI without carefully double-checking and applying critical thought to its output.
I use AI daily in evaluating stocks and markets. It is tremendously helpful.
That said, I double and triple-check every output. But by using it, I can accomplish in a few hours what used to be hugely time-consuming (many weeks), if not impossible.
It’s a bit like having a large staff of college students. They are very smart and do a huge amount of work. But I have to check everything they do carefully because, in the final analysis, they are still just kids.
I do know Hubby’s number.
If I SAW my sister’s number I would recognize it, but I can’t say it from memory.