Posted on 08/06/2025 1:47:48 PM PDT by nickcarraway
Hold on to your vikingahjälmar (Viking helmets), because Sweden’s Prime Minister, Ulf Kristersson, has sparked a national debate after casually admitting that he often consults artificial intelligence tools – including ChatGPT and the French chatbot LeChat – to get a “second opinion” on political questions.
Speaking to the business newspaper Dagens Industri on August 5, Kristersson said:
“I use it myself quite often. If for nothing else than for a second opinion. What have others done? And should we think the complete opposite? Those types of questions.”
He also revealed that his colleagues in the centre-right coalition government make use of AI in their daily work.
Praise for tech-savvy leadership
Not everyone is alarmed. In an editorial, Dagens Nyheter argued that it’s refreshing to see politicians experimenting with new technology:
‘It’s a good thing that we have politicians who are curious about new technologies and have a forward-looking perspective… AI is likely to be one of the key forces that will shape this century. If a politician uses the technology himself, he can understand it better than if he only has theoretical knowledge about it.’
The paper suggested that hands-on experience with AI could help lawmakers make better decisions when regulating it.
Accusations of ‘falling for hype’
But the praise stopped there. The Aftonbladet newspaper was scathing, accusing the PM of having “fallen for the oligarchs’ AI psychosis”:
“When someone leaves it to an AI to do the thinking, it’s always a bit embarrassing. But when the prime minister of a country does this, it goes beyond embarrassment because it can pose a security risk.”
The paper warned that AI companies collect user data – often stored on US servers – meaning that sensitive queries could potentially be linked to Kristersson personally.
“Consulting an AI is hardly ever a better option than reading an article, talking with a friend or simply sitting down and thinking. Sweden’s prime minister should be able to do that. Instead of falling for worthless hype.”
Security and trust concerns
Computer science researcher Simone Fischer-Hübner of Karlstad University told Aftonbladet:
“You have to be very careful,” she said, warning against the use of AI tools like ChatGPT when handling sensitive information.
Virginia Dignum, professor of responsible artificial intelligence at Umeå University, was even more blunt, telling Dagens Nyheter:
“AI is not capable of giving a meaningful opinion on political ideas. It simply reflects the views of those who built it… The more he relies on AI for simple things, the bigger the risk of an overconfidence in the system. It is a slippery slope. We must demand that reliability can be guaranteed. We didn’t vote for ChatGPT.”
PM’s office: No risks taken
Kristersson’s spokesperson, Tom Samuelsson, pushed back on the security fears, saying the PM was not feeding AI any sensitive government data:
“Naturally it is not security-sensitive information that ends up there. It is used more as a ballpark.”
The bigger picture
The row raises a wider question: should political leaders be using AI as a thinking partner – or is it a shortcut too far? Advocates say first-hand use helps policymakers understand the tools they’re tasked with regulating. Critics argue it risks security, overconfidence, and the perception that big decisions are being outsourced to a chatbot.
No, the public didn’t vote for ChatGPT, but with AI already shaping economies, industries, and security strategies worldwide, it’s likely Sweden’s leaders will keep testing the tech – whether voters like it or not.
AI in the halls of power – not just Sweden’s debate
Kristersson isn’t the first political leader to flirt with artificial intelligence. In the US, some members of Congress have admitted to using AI to draft speeches and policy outlines, while in the UK, civil servants are trialling AI to summarise complex documents. Singapore’s government has gone further, actively training its public sector staff in AI tools to speed up decision-making. But across the board, the same concerns keep surfacing: where is the line between using AI as a research assistant and letting it influence political judgment? And who, if anyone, is keeping track of what data these leaders feed into the machines?
The hidden bias problem
One of the biggest criticisms of political reliance on AI is baked into the technology itself – bias. Large language models like ChatGPT don’t think; they predict words based on patterns in their training data, which is largely scraped from the internet. That means any cultural bias, misinformation, or political leaning in those sources can quietly – or not so quietly – seep into the AI’s answers. For a world leader, even subtle skew in the data or the feedback given could – over time – shape how policies and ideas are framed, or which options appear most viable. As Virginia Dignum put it, “AI simply reflects the views of those who built it” – and those views aren’t always neutral or morally sound.
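To see why “predicting words from patterns” inherits the slant of the training data, here is a deliberately tiny sketch – a bigram model over a toy corpus, not anything resembling a real LLM. The corpus, the word counts, and the “prediction” are all illustrative assumptions; the point is only that the model’s most likely answer is whatever was most frequent in its data:

```python
from collections import Counter, defaultdict

# Toy "training data": a hypothetical corpus whose slant the model inherits.
corpus = "the policy is good the policy is risky the policy is good".split()

# Count which word follows each word -- the simplest possible language model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the single most likely next word, given the corpus counts."""
    return following[word].most_common(1)[0][0]

# "good" follows "is" twice in the corpus, "risky" only once, so the model
# confidently "prefers" good -- not because it reasoned, but because of counts.
print(predict("is"))  # -> good
```

Real models are vastly more sophisticated, but the underlying principle is the same: skew the counts in the data, and you skew the answers.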
Public trust and perception
Even if a leader uses AI responsibly, perception can be as damaging as reality. For voters, the image of a prime minister running ideas past a chatbot can trigger fears of laziness, detachment, or undue influence from tech giants. In politics, optics matter: a leader seen as outsourcing judgement to an algorithm risks eroding public trust, even if the final decisions are entirely human‑made. The challenge is not just using AI well – it’s communicating clearly to the public how, when, and why it’s being used.
Should leaders reject AI altogether?
Flatly refusing to use AI might sound principled – the sort of stance that wins applause from technophobes – but it’s a bit like demanding your accountant do every calculation without a calculator. It risks leaving political leaders hopelessly behind the curve, out of touch with a technology that is already reshaping economies, security, and public life.
The real question is not whether to use it, but how. AI can be a useful research assistant – testing ideas, summarising information, or exploring alternative viewpoints – provided it’s kept well away from sensitive data and high‑stakes decisions. Informed adoption allows leaders to move with the times without surrendering their thinking to a machine.
Does ChatGPT know that viking helmets never had horns?
I like Perplexity.
Ain’t Intelligence
I would LOVE to ask CHATGPT if Seth Rich is ALIVE!
A just machine
To make big decisions
Programmed by fellas
With compassion and vision
Dear ChatGPT: At what amount of migrant rapes per month will I likely be removed from office?
Omygosh. Make up your minds. Is AI the best thing since cold beer or the worst thing since cold beer?
A good way to control world leaders.
I don’t use Grok to form an opinion, but to give me facts about something I am interested in.
The problem’s plain to see
Too much technology
Machines to save our lives
Machines dehumanize
Actually it would be a fun exercise to ask a bunch of ai platforms for advice and see what you get back.
Who knows, if they all agree it might be good advice!
Or not 😉
I miss “the buck stops here”. This guy is like “the buck stops… somewhere… I’ll have an answer for you in a sec….”
We could call it...Deep Thöught.
Well, that’s just silly. Prime Minister Kristersson should just consult a Magic 8-Ball like normal people do.
Every platform has bias of programmers built in. Computer chips can’t think. They simply follow the code developed by human programmers.
The software makes use of data stores, so it isn't a process driven solely by "code". ["code" is such a shallow term.]
“Does ChatGPT know that viking helmets never had horns?”
Grok says:
No, medieval Viking helmets did not have horns. This is a popular myth largely popularized by 19th-century romanticized depictions of Vikings, such as in Wagner’s operas. Archaeological evidence, like the Gjermundbu helmet from Norway (9th-10th century), shows that Viking helmets were typically simple, conical, or rounded iron caps, sometimes with a nose guard, designed for practical protection in battle. No authentic Viking-era helmet with horns has been found. Horns would have been impractical in combat, as they could be grabbed or caught, compromising the wearer’s safety. The misconception likely stems from artistic exaggerations and misinterpretations of ceremonial or non-Viking artifacts.
Does this mean they’ll have to redesign the Minnesota Vikings helmets?
The issue with multiple chatbots is they make things up from whole cloth. Court cases that never existed. Quotes that are apocryphal.
Without truth checking on AI, which obviously cannot be AI based, the use of AI for anything significant is reckless at best, and homicidal at worst.
I understand but data stores are created by humans. The method to access data is created by humans. That is why different AI tools give different results.
Of course. The order in which the data stores are accessed may also affect outcome.