Posted on 07/07/2025 7:16:13 AM PDT by Twotone
We’ve spent the last few years marveling at how AI tools seem to think with me, for me, and even as a curious cognitive construct that I've struggled to put my finger on.
snip
So, here are the emergent questions. What happens when a machine can predict your professional judgment better than a colleague? What happens when it completes your thoughts more fluently than you can? What happens when an LLM can model your biases, your hesitations, your habits of mind and then adjust accordingly? Read this paragraph again and really think about it—in a way that only a human can.
This isn’t just imitation. My sense is that it's a form of divergence. The model doesn’t replicate how we think, and yet we still try to align AI with the human construct. But here's the essential truth: AI doesn't replicate human thought; it bypasses it.
We’re still asking whether AI is “intelligent,” whether it “understands,” whether it’s getting close to passing as human. But these are the wrong questions. The right one might be to ask: what kind of cognition is this? Because it’s not ours.
(Excerpt) Read more at psychologytoday.com ...
Good one!
I suspect that we do ourselves a favor to believe that AI "thinks" like people in the same way that aircraft "fly" like birds.
I think Ex-Machina is an excellent movie to give it some perspective.
Thinking that machines think like we do can cause the same sort of catastrophe that results from thinking a polar bear thinks like us, so you walk up to pet it.
Only on a global scale.
The old contract between explanation and trust is breaking down. We used to believe that if we couldn’t explain it, we shouldn’t believe it. Now we’re using tools every day that outperform us without offering any narrative of how they do it.
Richard Feynman spoke about physics at a deep level. He said he could run the numbers and get the right answer to a physics problem -- but WHY was it the right answer? WHY did physics work the way it worked? He said he had no idea; it just does what it does. He used equations to predict an outcome that, at a fundamental level, he couldn't truly understand.
Physics is a true "hard science". It's all numbers. Now with AI, we may be reducing human psychology to a similar point. The machines have no narrative, no explanation to offer -- but they run the numbers and predict an outcome that the machine doesn't even try to "understand" because it's not even "thinking". But it outperforms us and basically "knows" us better than we know ourselves.
This takes us into a whole new world.
History shows us that everything created by man is eventually corrupted, from government to "money". AI will be no different.
If AIs are built (programmed) with a feedback circuit based on our responses to their answers, then eventually we will have a bunch of mentally ill AIs.
Or think of the anti-Trump people who seem crazy. Turn it around. You can explain to them that abortion is bad, and guns are good, and they will tell you that YOU are the crazy one.
"Mental illness" is subjective in many ways. "I'm OK -- you're crazy."
AI isn't subjective about anything. AI doesn't think. AI doesn't have a narrative. But AI can predict choices and behaviors that lead to desired outcomes. AI can watch humans do all of our crazy stuff and the AI may be in an excellent position to tell us "Here's the solution you're looking for. This will solve your problem."
> AI isn’t subjective about anything.
Perhaps, but it sure as hell isn’t deterministic. LLMs will often give different answers - some quite possibly true - for the same query.
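The nondeterminism described above typically comes from sampling: the model scores candidate next tokens and draws from the resulting probability distribution, so the same query can come back with different answers. A minimal sketch of that idea, with toy tokens and made-up scores (not any real model's API):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into probabilities. Higher temperature
    flattens the distribution, increasing randomness."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng):
    """Greedy (always the top score) at temperature 0; a weighted
    random draw at any positive temperature."""
    if temperature == 0:
        return tokens[logits.index(max(logits))]
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

tokens = ["yes", "no", "maybe"]
logits = [2.0, 1.8, 0.5]  # hypothetical scores for one query

greedy = [sample_token(tokens, logits, 0, random.Random(i)) for i in range(5)]
sampled = [sample_token(tokens, logits, 1.0, random.Random(i)) for i in range(5)]
print(greedy)   # greedy decoding: the same token every time
print(sampled)  # sampling: can vary from run to run
```

At temperature 0 the highest-scoring token always wins, which is how repeatable output is usually obtained; any positive temperature reintroduces run-to-run variation for the same query.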
It is based on our cognition and uses numerical assignments for the input data. That data is placed in tables and sifted through by programs we created. The result is sent to another group of tables and processed. The output from that is sent on for processing to obtain the best results (here again is user input/control) and then sent to a final bank for proper output.
It sees nothing but numbers -- actually only 0 and 1. It is not our kind of cognition; it is a subset of our cognition. It is how we taught it to try to think like us. The difference is that it's not just one program or even one piece of hardware doing this. It truly is the meaning of "I am Legion".
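The bank-to-bank description above is, in plain terms, a neural network forward pass: numbers in, weighted sums through successive layers of learned tables, numbers out. A minimal sketch with made-up weights (real models learn millions or billions of them):

```python
def layer(inputs, weights, biases):
    """One 'bank of tables': weighted sums followed by a simple
    nonlinearity (ReLU: keep positives, zero out negatives)."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(x * w for x, w in zip(inputs, w_row)) + b
        outputs.append(max(0.0, total))
    return outputs

# Hypothetical numbers throughout; just illustrating the data flow.
inputs = [0.5, -1.0, 2.0]  # input data encoded as numbers
hidden = layer(inputs, [[0.2, 0.8, -0.5], [1.0, 0.0, 0.3]], [0.1, -0.2])
output = layer(hidden, [[0.7, 1.1]], [0.0])
print(output)  # the "final bank for proper output"
```

Each stage is nothing but arithmetic on numbers; any meaning comes from how we encode the inputs and interpret the outputs.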
The key to the success (for us) of AI is dependent on the 'feedback loop'. Not only success, but therein lies the danger.
“History shows us that everything created by man is eventually corrupted, from government to 'money'. AI will be no different.”
Exactly right. The debates about the small stuff of accuracy, efficiency, advantages, disadvantages, profits, convenience, etc. are all absolutely fruitless. We are in a situation that is now philosophical and about the bigger picture of humanity's future. The only way to truly understand it is to get yourself mentally out of the ant farm and view the ant farm from the outside.
Watching it from the outside is the only way to actually see and perceive the true scope of reality. For me it has boiled down to the fact that it is all philosophical and anything else is just a waste of time. Folks are just not going to “get it” until they do. The reality is that just because we can doesn’t mean we should... It is all a huge mistake in the long run...
We are enslaving ourselves with this technology, and we are just too ignorant to see it or understand it from inside the ant farm.
No two people see the same rainbow.
What happens is you become...
“non-essential personnel.”
(With thanks to Dr Daystrom and the M-5.)
That's because every 'solution' is fed back into table banks. At the rate of acceleration of AI access/usage I would be surprised to ever get the same answer twice.
People will intentionally run a good organization into the ground for their own personal benefit.
People will aspire to, and receive, jobs that THEY know they are not able to do.
Humans are bags of emotions walking around.
Humans write the code for AI.
Just the right tool for a Ruth Bader Ginsburg.
So AI operates in a space of 12,000 dimensions. Just when we learn to temper our cognitive bias, we are overwhelmed with AI bias.
Just when we learn to watch for cognitive bias...
Similar to when the press learned about Einstein’s theory of relativity. Suddenly all ethics became relative, postmoderns took over and truth was overshadowed with the lust for power. If only the Bolsheviks had AI!!
OK I read to the end. Aside from trying hard to sell the story with “aliens” it didn’t suggest the question, “Who are the gatekeepers?”