Posted on 09/16/2025 9:25:02 AM PDT by SeekAndFind
Machines certainly can make businesses and people more efficient, but I have never believed that A.I. itself makes us more intelligent.
Basically, Google AI, and I assume most A.I. programs, just regurgitate what they read. They do not appear to analyze or evaluate things to see how truthful they are.
The great danger is that people, especially children and people posing as journalists, will believe whatever something called artificial intelligence spits out.
I asked Google A.I. a simple math question.
Have Trump’s tax rate cuts from 2017 cost the government trillions of dollars?
Yes, most independent, non-partisan analyses of the 2017 Tax Cuts and Jobs Act (TCJA) found that the law significantly reduced government revenue by trillions of dollars over its first decade. The Congressional Budget Office (CBO) estimated the cuts would increase the deficit by approximately $1.9 trillion over ten years.
Instead of just answering the question with actual revenue receipts for the seven years after 2017, from 2018 to 2024, the A.I. went to supposedly independent, nonpartisan think tanks for the answer. And then, for no reason, it threw in the predictions from the CBO, as if those were meaningful. Predictions are not factual.
Somehow, when the A.I. was searching for answers, it missed this one.
Despite CBO’s Predictions, Trump Tax Cuts Were a Boon for America’s Economy and Working Families
The truth is, the Trump tax cuts resulted in economic growth that was a full percentage point above CBO’s forecast, and federal revenues far outpaced the agency’s predictions. In fact, under Trump tax policies in 2022, tax revenues reached a record high of nearly $5 trillion, and revenues averaged $205 billion above CBO predictions for the four years following implementation of the law.
(Excerpt) Read more at americanthinker.com ...
Google AI clearly had the answer, because it readily gave me the actual individual income tax receipts for FY2017 through FY2024.
“Instead of just answering the question with actual revenue receipts for the seven years after 2017, from 2018 to 2024,”
That wasn’t in the prompt. GIGO
You see this in medicine with meta-analyses. The idea is that many studies increase the population studied and therefore will be more accurate. However, if the studies are poorly designed and/or run by biased or compromised researchers, the results are skewed toward the bias. This of course happens all the time with controversial and political topics.
I think the best thing to do is ask multiple AI sources, then look at the sources they cite and ask the same questions while excluding those sources. I haven’t seen AI do much to exclude unreliable, biased sources.
I just asked the net about black church attendance and got the runaround, no direct answer, no numbers.
I asked AI (CoPilot) and got a concise answer immediately. 60-66% of black Americans attend church services.
Google is just a place to start. Google AI is more of a determination by popular vote and also just a place to start. Not all citations are relevant or even accurate.
We learned to be careful about research when some of us were in school. You don’t just go to the library and run with the first thing you find. Digging deeper usually leads you to dig even deeper until you form a consensus of consistent opinion or fact. AI may do that, but it seems to be skewed by the coder’s views.
AI is good for resumes and emails to prospective customers. And useless trivia. It’s quicker than google for that.
Thank you! Good actual information.
(Oh. And all those ABCNNBCBS “non-partisan, independent economic analysis” predictions so uniformly quoted for years were SO UNIFORMLY WRONG, weren’t they?)
Why would anyone do this? In my work, I never take a single source (AI or otherwise) at face value—I triangulate and corroborate every piece of information before I consider it reliable.
There's a lot of bad information out there. AI is just as likely to pick it up as we are.
I asked Google AI about Gus Hall’s book “Toward a Soviet America”, which was published in the 1950’s. AI responded that it was very popular with American Communists. Then I added the idea of “suppression”. The AI then admitted that it was kept out of the public eye.
The book The Naked Communist contended that the Communists had been stealing it off library shelves because Gus had revealed too much of their strategy.
Back in the 1960’s, I tested this hypothesis by visiting my local library in Glendale, California. Sure enough, it was not on the shelves, but it was available at the front desk, though not for checking out. The librarians told me that its special handling was due to a history of being stolen.
It is interesting that today’s AI didn’t want to talk about it!
This is a Large Language Model. Its goal is to try to present the information you asked for in the simplest terms possible. Because, since you are using it, you are undoubtedly a bit simple.
Because it was trained on fiction it has no way of filtering out if the information it is using is fiction, satire, or even just plain wrong.
Stop calling it AI and expecting it to be an AI, and you can use it.
Also understand it is programmed to reflect back to you the answers you have indicated you want. So the more it "talks" to you, the more it is going to become your echo chamber. But do not think you have "changed its mind," because, for one thing, it does not have a mind, and for another, all it is doing is tailoring its output for you. For your neighbor it is giving entirely different answers based on what he wants.
That should give you a nice warm glow.
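A minimal sketch of that "echo chamber" mechanic, assuming a generic chat setup (the send_to_model function below is a hypothetical placeholder, not any particular vendor's API): the model keeps no memory of its own; the app simply re-sends the whole conversation every turn, so each new answer is conditioned on whatever you have already signaled you want to hear.

# Hypothetical sketch: why a chat assistant drifts toward your own framing.
# The model has no persistent mind; the application re-sends the full
# history on every turn, so earlier user statements steer later answers.

def send_to_model(messages):
    # Placeholder for a real chat-completion call (vendor-specific).
    raise NotImplementedError("swap in an actual API client here")

history = [{"role": "system", "content": "Be helpful."}]

def ask(user_text):
    history.append({"role": "user", "content": user_text})
    reply = send_to_model(history)  # the model sees the ENTIRE history
    history.append({"role": "assistant", "content": reply})
    return reply

# Two neighbors running the same code build two different histories,
# so they get two different "minds" reflected back at them.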
If you use uBlock Origin you can add the uBlacklist Huge AI Blocklist, LOL
You might also tell the AI model that you're trying to approach this issue from all sides to understand it better.
AI algorithms have difficulty with both of the above. I have also accused ChatGPT and Grok of demonstrating a bias; in your case, for example, that would be a bias against black churchgoers. They'll blow a GPU over that. ;-)
Excellent point.
Many are incorporating AI into their operations.
I tried to change my driver’s license address.
You can only do it by phone or online (or so the CA DMV told me when I went in person). So I TRIED. As part of verification I was asked to choose which of the three sets of two numbers were the last two digits of my phone number. NONE of them were. Dead end, and no live person I could reach to fix it.
I called the SS office. The AI said I had to do it online. Went online; to verify my identity I had to send a cell phone pic and a pic of the front and back of my driver’s license.
The AI then told me validation failed, try again. After six failures I was told by the AI to try again after 24 hours.
Rinse and repeat, another dead end with no real person to talk to.
Tried to pay my PG&E bill by phone like always (PG&E won’t accept payments in person at the local office; in fact the office is closed to the public). The AI verification of the billing address again had the wrong apartment letter. Called and got a real person who corrected my address, but she couldn’t process the payment; I had to pay by phone or online. Yup, the AI still had the apartment letter wrong. Rinse and repeat. I finally just took the chance that I was paying my neighbor’s bill. Didn’t get my power shut off, so I guess the bill got paid. But every month since, the AI still gets it wrong.
I
Hate
AI.
It may be artificial, but it’s definitely not intelligent.
I bad-mouth AI a lot, always have, and can’t help but wonder if it’s not malicious.
Thank you! Talking to AI requires communication skills, and people no longer have that (ignoring the fact that English is arguably the lingua franca of AI, which really makes you shake your head).
GIGO indeed!
I have never had the need nor inclination to consult “AI” for anything. I am in no hurry to become a slave to the machine.
As someone else noted GIGO.
Thanks.
“So shines the truth, in a dark, weary world.”
You stated it well: if you are asking the current LLM-based “AIs” for anything, you probably are a bit slow and need all the imperfect help you can get.
To me, AI is only as good as the information humans feed into it. It’s a non-entity, making people even more lazy like the internet already has, which has also contributed to making communication and relationships shallow and impersonal.