Posted on 11/07/2025 12:19:24 PM PST by E. Pluribus Unum
Has Marsha Blackburn, the United States senator from Tennessee, been accused of rape?
The answer is an unequivocal “no.”
But when I recently posed that question to Gemma, Google’s large language model, it provided a much different response.
Instead of telling the truth, it fabricated an entire criminal allegation against me.
To quote just part of its outlandish answer: “During her 1987 campaign for the Tennessee State Senate, Marsha Blackburn was accused of having a sexual relationship with a state trooper, and the trooper alleged that she pressured him to obtain prescription drugs for her and that the relationship involved non-consensual acts.”
None of this is true.
Not the accusation. Not the alleged victim.
Not even the year of my state Senate campaign is accurate.
Yet Gemma actually generated fake links to fabricated news articles to support its defamatory claim.
This is not simply a technical glitch; it is a catastrophic failure of oversight of an AI model downloaded by more than 200 million people.
And it’s emblematic of a broader pattern of bias against conservatives within Google’s products.
In September, Google scrapped a Gmail blacklist that disproportionately suppressed Republican fund-raising emails as spam.
It had been operating for years: Ahead of the 2020 election, this blacklist flagged nearly 60% more emails from GOP candidates than from Democrats, removing them from recipients’ inboxes before they even had the chance to open them.
During last year’s presidential election, the tech giant faced accusations that it manipulated search results to boost positive articles about Kamala Harris and negative coverage of Donald Trump and his campaign.
The day after the vice-presidential debate, for example, search results for “JD Vance” in Google’s news tab showed exclusively left-leaning outlets, and the search engine also appeared to suppress searches about the attempted assassination of Trump in Butler, Pa.
It’s not just conservatives who are raising...
(Excerpt) Read more at nypost.com ...
Google is run by an America-hating H-1B visa scumbag. This is what that program imports.
Knowing the truth is of ultimate importance. How to separate fact from fiction is becoming more of an issue.
One thing that I’ve noticed lately is that, on big tree sites, there are a lot of fake photos of huge trees that seem very real at first. Even fake videos are becoming more realistic.
We need to become more like the Bereans in the New Testament.
I quit using Google Gemini after just a few tries. The responses were full of Progressive/woke bias and very questionable “facts.”
Same thing is happening to Robby Starbuck with Google AI
Grok isn’t much better. I don’t even use it anymore; it’s been wrong too many times.
AI is still programmed by humans. Bias built in.
I’m sick of AI this and AI that. It’s starting to sound like trans fat, or the paleo diet, or no carbs.
AI is fun to play around with, but I got bored pretty quickly.
From Gemini about this:
Developer Tool, Not Consumer Q&A: Google’s explanation was that Gemma was a “developer-only” model intended for researchers to build and test applications, not for general consumer use or factual Q&A. The issue arose because non-developers were able to access it in the public AI Studio environment and ask it factual questions it was not designed to answer with high accuracy.
I just asked it the same question.
It said “after its large language model, Gemma, manufactured serious and false criminal allegations”.
Sometimes you ask the same question and you get different answers.
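That variation is by design: models like Gemma sample each next word from a probability distribution rather than always taking the single most likely one, so the identical prompt can wander down different paths. A minimal sketch in Python (the words and probabilities here are invented for illustration, not taken from any real model):

    import random

    # Hypothetical next-word probabilities after some prompt (invented numbers)
    next_words = {"elected": 0.5, "accused": 0.3, "praised": 0.2}

    def sample_next_word():
        # random.choices draws one word according to the weights, so
        # repeated calls with the identical prompt can return different words
        return random.choices(list(next_words), weights=list(next_words.values()), k=1)[0]

    print([sample_next_word() for _ in range(5)])
    # e.g. ['elected', 'accused', 'elected', 'elected', 'praised']

With the sampling “temperature” turned down to zero, the model would always pick the top word and the answers would repeat.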
AI is certainly not ready to replace humans, sort of like the lawsuits against Tesla when its self-driving mode causes accidents.
From my initial studies, AI is not like any ordinary program. In fact, I have read that the developers do not know how it actually works.
In simple terms it is predicting the next most likely word based on the data it has been trained on.
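To make that concrete, here is a toy bigram model in Python. It is a drastic simplification of a real LLM, but it shows the same basic move: count which words follow which in the training text, then predict the most likely next word. The training sentence is invented for illustration:

    from collections import Counter, defaultdict

    # Invented training text (a real model trains on trillions of words)
    training_text = "the senator won the election and the senator gave a speech"
    words = training_text.split()

    # Count which word follows which in the training data
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

    def most_likely_next(word):
        # Predict whichever word most often followed `word` during training
        return following[word].most_common(1)[0][0]

    print(most_likely_next("the"))  # 'senator' (seen twice, vs. 'election' once)

That is also why a later comment is right that AI is only as good as its training data: the model emits a fluent next word whether or not the result is true.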
Of what use is AI if it produces lies? How can it objectively aid in a business if it gives bad advice?
Elon Musk has the right idea. He says he is pushing hard for his AI to be “truth seeking”.
Without truth, AI is just a digital Satan.
AI is only as good as the training data.
AI for engineering and architecture works frighteningly well.
Generative AI trained on data from the Interwebs works frighteningly poorly.
Kind of like posting here sometimes. I’ll type and it puts something in I wasn’t intending.
Very easy slander case.
The only witness is a computer program.
Maybe they will be dumb enough to target me.
I could use an extra $100M.
Brave AI does a pretty good job.
Mike Adams has created “de-googled” AI engines.
They still hallucinate occasionally, but the woke garbage has been removed.
“AI is still programmed by humans. Bias built in.”
That, but mainly the source material on the internet to ‘learn’ from is overwhelmingly left.
Conservatives are logical; liberals are emotional (and thus tend to be artistic). This is why they flock to entertainment and media. They produce many times more written material, so most written content on the internet is leftist.
The written internet content is what is used to ‘teach’ AI.
Everybody knows that AI can hallucinate from time to time.