Free Republic
Browse · Search
News/Activism
Topics · Post Article


Can Chatting With an A.I. Bot Shift Our Political Beliefs?
Smithsonian Magazine ^ | December 10, 2025 | Sarah Kuta

Posted on 12/10/2025 12:06:54 PM PST by nickcarraway

New research suggests that chatbots have a greater sway on policy issues than video ads, and that spouting the most information—even if wrong—is the most persuasive strategy

Artificial intelligence chatbots are changing the world, affecting everything from our brains to our mental health to how we do our work. Now, two new studies offer fresh insights into how they might also be shifting our political beliefs.

In a new paper published December 4 in Nature, scientists describe how having a brief back-and-forth exchange with an A.I. chatbot shifted voters’ preferences on political candidates and policy issues. Another paper, published December 4 in the journal Science, finds that the most persuasive chatbots are those that share lots of facts, although the most information-dense bots also dole out the most inaccurate claims.

Together, the findings suggest “persuasion is no longer a uniquely ‘human’ business,” write Chiara Vargiu and Alessandro Nai, political communication researchers at the University of Amsterdam who were not involved with the new papers, in an accompanying Nature commentary.

“Conversational A.I. systems hold the power, or at least the potential, to shape political attitudes across diverse contexts,” they write. “The ability to respond to users conversationally could make such systems uniquely powerful political actors, much more influential than conventional campaign media.”

For the Nature study, scientists recruited thousands of voters ahead of recent national elections in the United States, Canada and Poland.

In the U.S., researchers asked roughly 2,300 participants to rate their support for either Donald Trump or Kamala Harris on a 100-point scale a few months before the 2024 election. Voters also shared written explanations for their preferences, which were fed to an A.I. chatbot. Then, participants spent roughly six minutes chatting with the bot, which was randomly assigned to be either pro-Trump or pro-Harris.

Talking with a bot that aligned with their point of view—a Harris fan chatting with a pro-Harris bot, for instance—further strengthened the participants’ initial attitudes. However, talking about their non-preferred candidate also swayed the voters’ preferences in a meaningful way.

On average, Trump supporters who talked with a pro-Harris bot shifted their views in her favor by almost four points, and Harris supporters who chatted with a pro-Trump bot altered their views in his favor by more than two points. When the researchers repeated the experiment in Canada and Poland ahead of those countries’ 2025 federal elections, the effects were even larger, with the A.I. chatbots shifting voters’ candidate ratings by ten points on average, reports Nature’s Max Kozlov.

Additionally, a smaller U.S.-based experiment to assess A.I.’s ability to change voters’ opinions on a specific policy—the legalization of psychedelics—found that the chatbots changed participants’ opinions by an average of roughly 10 to 14 points.

At first glance, the shifts may not seem like much. But “compared to classic political campaigns and political persuasion, the effects that they report in the papers are much bigger and more similar to what you find when you have experts talking with people one on one,” Sacha Altay, a psychologist who studies misinformation at the University of Zurich and was not involved with the research, tells New Scientist’s Alex Wilkins. For example, on policy issues, professionally produced video advertisements typically sway viewers’ opinions by about 4.5 points on average, the researchers write.

For the Science paper, researchers had nearly 77,000 participants in the United Kingdom chat with 19 A.I. models about 707 different political issues. They wanted to understand the mechanisms at play: What, specifically, makes chatbots so persuasive?

The biggest change in participants’ beliefs—nearly 11 percentage points—happened when the bots were prompted to provide lots of facts and information. For comparison, instructing bots to simply be as persuasive as possible only led to a change of about 8 percentage points.

But telling the bots to provide as many facts as possible also had a major downside: It made the bots much less accurate. That result wasn’t necessarily a surprise to the researchers.

“If you need a million facts, you eventually are going to run out of good ones and so, to fill your fact quota, you’re going to have to put in some not-so-good ones,” says David Rand, a cognitive scientist at MIT and co-author of both papers, to Science News’ Sujata Gupta.

Quick fact: Making it up

A.I. chatbots are known to spew false information, or “hallucinate” (Dictionary.com’s word of the year in 2023).

Surprising or not, the finding that the most persuasive models and prompting strategies produce the least accurate information should serve as a wake-up call, according to Lisa P. Argyle, a political scientist at Purdue University who was not involved with the new papers and wrote an accompanying Science commentary.

“Researchers, policy-makers and citizens alike need to urgently attend to the potential negative effects of AI-propagated misinformation in the political sphere and how to counteract it,” she writes.

While the recent studies demonstrate the potential for A.I. chatbots to shift voters’ attitudes, researchers emphasize an important caveat: The real world is extremely complex.

“Outside of controlled, experimental settings, it’s going to be very hard to persuade people even to engage with these chatbots,” Ethan Porter, a political science and communications scholar at George Washington University who was not involved with the papers, tells the New York Times’ Steven Lee Myers and Teddy Rosenbluth.


TOPICS: Constitution/Conservatism; Culture/Society; News/Current Events; Politics/Elections
KEYWORDS: ai; bots; politics
All your base are belong to us.
1 posted on 12/10/2025 12:06:54 PM PST by nickcarraway

To: nickcarraway

No, because I look at all AI as a stupid brother/sister-in-law or co-worker.


2 posted on 12/10/2025 12:10:08 PM PST by Resolute Conservative

To: nickcarraway

Whenever I use AIChat, I end my inquiry with “do not include editorial comments” in the response. It helps a lot. It will still slip some stealth political insults in, but it’s reduced by 90%.


3 posted on 12/10/2025 12:10:28 PM PST by E. Pluribus Unum (I have no answers. Only questions.)

To: nickcarraway

“On average, Trump supporters who talked with a pro-Harris bot shifted their views in her favor by almost four points, and Harris supporters who chatted with a pro-Trump bot altered their views in his favor by more than two points. “

I call BS. No Harris supporter would ever switch.


4 posted on 12/10/2025 12:12:26 PM PST by Resolute Conservative

To: nickcarraway

AI lies so much that you can’t trust it. When you call it out on its lying, then it suddenly knows what you are talking about. Unfortunately a lot of people are too stupid to figure out that they are being misled.


5 posted on 12/10/2025 12:18:08 PM PST by Revel

To: Resolute Conservative

Smithsonian Magazine trying to figure out how to use AI bots to indoctrinate people into voting Democrat.


6 posted on 12/10/2025 12:19:53 PM PST by phil00071

To: nickcarraway

Only if you’re a clueless moron, and use AI.


7 posted on 12/10/2025 12:20:15 PM PST by mass55th (“Courage is being scared to death, but saddling up anyway.” ― John Wayne)

To: All

I suppose that if some vapid girl on TikTok can be an ‘influencer’, then AI, which can at least fake being well informed, logical and analytical, could shift political beliefs.

I weep for a society that can be so easily manipulated by either.


8 posted on 12/10/2025 12:21:25 PM PST by LegendHasIt

To: nickcarraway

It could be for some. If one is so vulnerable as to want to chat with a bot, she might well be vulnerable to any such persuasion.


9 posted on 12/10/2025 12:24:38 PM PST by hinckley buzzard ( Resist the narrative. )

To: nickcarraway

The people most likely to be swayed are those retards on the left. If AI attempted to sway someone’s beliefs, the programmers, (who are generally Democrats), would try to sway to the left. The result? No real change.


10 posted on 12/10/2025 12:25:51 PM PST by brownsfan (We are already on the slippery slope.)

To: nickcarraway

And this is why I stopped subscribing to this SMITHSONIAN rag years ago, beginning with their climate change propaganda, and the same thing with SCIENTIFIC AMERICAN.

I LOVE vintage SCIENTIFIC AMERICAN!

It’s all garbage now.


11 posted on 12/10/2025 12:39:12 PM PST by thesligoduffyflynns

To: Revel

“AI lies so much that you can’t trust it.”

Examples, please.


12 posted on 12/10/2025 12:48:26 PM PST by TexasGator (1.)

To: nickcarraway
A close reading and following some of the links suggests 1) this is in part a bunch of Europeans doing their "thing," 2) the Smithsonian reporter-ette has cobbled together an article from other articles, so this isn't anything much more than the Pelosi "wrap up smear," applied to "news" and 3) deep into some of the details one reads:

"We used Qualtrics survey software to collect data from online recruitment platforms. The researcher was not present with the participant when data collection happened (so researcher was blinded to the experimental conditions). Given data collection happened online/remotely, we cannot determine whether someone else was with the participant."

Online surveys ground up to form some conclusions. Very likely wished-for conclusions. Polls are so "sciency." Online polls are especially "sciency."
13 posted on 12/10/2025 1:07:51 PM PST by Worldtraveler once upon a time (Degrow government)

To: E. Pluribus Unum

I still sometimes wonder how many of them are like Builder.AI: the chatbot that turned out to be fake, where you were really just chatting with someone in India typing responses to you.


14 posted on 12/10/2025 1:09:14 PM PST by Tell It Right (1 Thessalonians 5:21 -- Put everything to the test, hold fast to that which is true.)

To: E. Pluribus Unum

With GROK I tell it no predictive analysis because it’s wrong 90% of the time.


15 posted on 12/10/2025 1:26:40 PM PST by maddog55 (The only thing systemic in America is the left's hatred of it!)

To: Tell It Right

A.I. = Another Indian.


16 posted on 12/10/2025 1:29:58 PM PST by proust (All posts made under this handle are, for the intents and purposes of the author, considered satire.)

To: Resolute Conservative

AI bots have a tendency to tell us what we want to hear. It’s hard to explain why and how that’s a nightmare.


17 posted on 12/10/2025 1:35:36 PM PST by GOPJ (Soros & democrats back criminals, dope dealers, illegals & terrorists. <P><I><B><big><center></B>)

To: Resolute Conservative

“I call BS. No Harris supporter would ever switch.”

This ignorant attitude is why we lost in 2025 and are about to lose in 2026.

The reality is that most voters are persuadable. A top precinct captain, going door to door, is able to shift any vote 15% one way or the other. Even when the political workers are average and not “top,” the way the news media presents events swings voters, sometimes producing very big swings.

It will be interesting to see how AI is used in 2026.
#1 How often will it be used without its use being publicly announced?
#2 How often will it be used on non-issue data, e.g., making a candidate look more attractive?
#3 How often will it be used to present hallucinations, or to present low-scoring data as if it were meaningful? Typical trick: “Two out of three people interviewed rated candidate X too extreme” (when only three people were interviewed).


18 posted on 12/10/2025 1:54:23 PM PST by spintreebob

To: Resolute Conservative

“No, because I look at all AI as a stupid brother/sister-in-law or co-worker.”

You need to get your eyes checked.


19 posted on 12/10/2025 2:06:39 PM PST by TexasGator (1.)

To: proust

“A.I. = Another Indian.”

Proust = failed jokester


20 posted on 12/10/2025 2:11:20 PM PST by TexasGator (1.)



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794