Posted on 08/07/2025 11:42:55 AM PDT by E. Pluribus Unum
University of Washington researchers recruited self-identifying Democrats and Republicans to make political decisions with help from three versions of ChatGPT: a base model, one with liberal bias and one with conservative bias. Democrats and Republicans were both likelier to lean in the direction of the biased chatbot they were talking with than those who interacted with the base model. Here, a Democrat interacts with the conservative model. Credit: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2025). DOI: 10.18653/v1/2025.acl-long.328
If you've interacted with an artificial intelligence chatbot, you've likely realized that all AI models are biased. They were trained on enormous corpuses of unruly data and refined through human instructions and testing. Bias can seep in anywhere. Yet how a system's biases can affect users is less clear.
So a University of Washington study put it to the test. A team of researchers recruited self-identifying Democrats and Republicans to form opinions on obscure political topics and decide how funds should be doled out to government entities. For help, each participant was randomly assigned one of three versions of ChatGPT: a base model, one with liberal bias and one with conservative bias.
Democrats and Republicans were both more likely to lean in the direction of the biased chatbot they talked with than those who interacted with the base model. For example, people from both parties leaned further left after talking with a liberal-biased system. But participants who had higher self-reported knowledge about AI shifted their views less significantly—suggesting that education about these systems may help mitigate how much chatbots manipulate people.
The team presented its research July 28 at the 63rd Annual Meeting of the Association for Computational Linguistics in Vienna, Austria.
"We know that bias in media or in personal interactions can sway people," said...
(Excerpt) Read more at phys.org ...
If AI were truly intelligent, the response would have been to get the lying, broke, corrupt government out of telling people where they should live. But phys.org is going to tell you that AI should have said that multifamily zoning rules are totally awesome, dude.
And who exactly defines “biased”? Of course I’m going to gravitate to a conservative-modeled AI, because that one is more often correct. Liberalism is rot.
If AI is used as a tool and not treated as the keeper of all wisdom, it can be helpful, even on political topics.
For example, it can quickly collect ten strong arguments in favor of a political position and ten strong arguments against it.
That is a lot easier than hunting around on the web; a rough sketch of that kind of query is below.
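A minimal sketch of what such a query could look like, assuming the OpenAI Python SDK with an API key in the environment; the model name, prompt wording, and example topic are illustrative, not from the original post.

```python
# Minimal sketch: ask a chat model for arguments on both sides of a topic.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set
# in the environment; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()

def arguments_both_sides(topic: str, n: int = 10) -> str:
    """Return n strong arguments for and n against a political position."""
    prompt = (
        f"List {n} strong arguments in favor of the following position and "
        f"{n} strong arguments against it, each in one sentence:\n{topic}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(arguments_both_sides("Cities should expand multifamily zoning."))
```

As the study above suggests, the answer will reflect whatever slant the underlying model has, so the output is a starting point for checking sources, not a verdict.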
Because too many people are spineless and wishy-washy. I noticed this after the 1994 election, when the GOP famously took back the House after 40 years.
Within a week of the new Congress, people who had (supposedly) voted R started criticizing the Contract with America. When I heard this, I pointed out that they were just doing what they had promised. The response was typically something like “Yeah, but they are being too mean about it.” Facepalm.
Garbage in, garbage out, is all. ;-)
Conservatism is strongly biased toward truth. Leftism is strongly biased toward virtue signaling.
Too bad the REAL chatbots swing to the left, probably because most of what they have “learned” comes from “sources” like the NYTimes, WaPo and AP.