Sweden’s Prime Minister Ulf Kristersson has come under intense scrutiny after admitting to regularly using artificial intelligence tools — including ChatGPT and France’s Le Chat — to support his work leading the country.
Kristersson, who heads a centre-right coalition under the Moderate Party, told Swedish business daily Dagens industri that he often uses generative AI to gain a “second opinion,” particularly when weighing policy decisions or exploring whether the government should pursue ideas counter to conventional thinking. “What have others done? And should we think the complete opposite?” he said, describing his process.
The revelation has sparked criticism from academics and political commentators alike, igniting debate over the appropriateness of using commercial AI tools in governance. Sweden’s tabloid newspaper Aftonbladet was blunt in its editorial, accusing Kristersson of succumbing to what it termed “the oligarchs’ AI psychosis.”
Computer science expert Simone Fischer-Hübner of Karlstad University warned that AI tools like ChatGPT are not designed to handle politically sensitive material or make judgments of national importance. “You have to be very careful,” she said, particularly about the risk of security breaches or uncritical reliance on systems not built for policymaking.
Kristersson’s spokesperson, Tom Samuelsson, attempted to downplay the controversy, clarifying that no sensitive or classified data was ever shared with the AI systems. “Naturally it is not security-sensitive information that ends up there,” he said. “It is used more as a ballpark.”
Still, concerns persisted. Virginia Dignum, a professor of responsible AI at Umeå University, argued that AI tools are inherently limited in their ability to evaluate political or moral dilemmas. “AI cannot offer meaningful opinions on political ideas — it simply reflects the views and biases of those who created and trained it,” she told Dagens Nyheter. She warned of a potential “slippery slope” if political leaders increasingly outsource thinking to automated systems.
“The more he relies on AI for simple things, the bigger the risk of overconfidence in the system,” Dignum said. “We must demand reliability. We didn’t vote for ChatGPT.”
The episode has raised broader questions about transparency, trust, and the role of emerging technologies in political decision-making — particularly as governments around the world begin integrating AI into their operations.