Kristersson Turns to AI for a “Second Opinion” on Policy Decisions
Swedish Prime Minister Ulf Kristersson is facing heavy backlash after openly revealing that he frequently uses ChatGPT to help guide his political thinking.
The admission has sparked a national debate about the risks of politicians relying on generative AI technologies in the course of official government duties.
In a recent interview with a local media outlet, Kristersson stated that he uses AI tools "quite often" to gain alternative viewpoints when evaluating political strategies.
Kristersson said he often relies on AI to understand prevailing opinions and the approaches others have taken on similar issues, which he then uses to weigh whether he should adopt a contrasting position.
He explained that this process gives him a broader perspective on complex matters, adding that several of his government colleagues have also integrated AI tools into their daily workflow.
Concerns Mount Over Security and Ethical Implications
While Kristersson’s remarks may reflect a modern embrace of digital tools in leadership, they have drawn sharp criticism from technology experts and political commentators in Sweden.
Some believe the prime minister’s casual use of generative AI for policy thinking sets a dangerous precedent.
Swedish tabloid Aftonbladet accused Kristersson of slipping into what it called an “AI psychosis,” suggesting an overreliance on machine-generated input could cloud sound political judgment.
Security experts have also raised red flags about the potential for sensitive information to be unintentionally exposed through interactions with AI systems.
Simone Fischer-Hübner, a computer science researcher at Karlstad University, voiced her concerns in an interview with Aftonbladet.
She stressed that AI users—especially political figures—must exercise extreme caution when inputting any form of confidential data into public AI models.
“You have to be very careful,” she warned, highlighting the lack of data privacy safeguards in many commercial AI tools.
In response to the backlash, Kristersson’s spokesperson Tom Samuelsson sought to clarify the prime minister’s usage of AI, stating that the tools are not used to handle or process any sensitive information.
“Naturally, it is not security-sensitive information that ends up there,” he said.
Samuelsson characterized the usage as exploratory and informal, meant only to provide a “ballpark” sense of prevailing ideas.
Experts Warn Against the Dangers of AI in Political Decision-Making
Despite the reassurances, experts remain skeptical about how deeply AI should be involved in governance.
Virginia Dignum, a professor of responsible artificial intelligence at Umeå University, stressed that AI systems like ChatGPT are incapable of forming genuine political opinions or understanding nuanced policy issues.
Instead, she warned, they merely regurgitate information based on the data and biases embedded in their training sets.
Dignum described the growing dependence on AI in political spheres as a “slippery slope,” cautioning against the normalization of tools created by tech companies being used to influence public policy.
“We didn’t vote for ChatGPT,” she said, emphasizing that elected officials—not algorithms—should be the ones making decisions that affect citizens’ lives.
As Kristersson’s comments continue to spark debate across Sweden, the controversy highlights broader global concerns about the role of AI in democratic institutions. It also underscores the urgent need for clear ethical guidelines as governments increasingly integrate emerging technologies into their daily operations.