Research finds AI chatbots sway political opinions but flood conversations with inaccurate claims


  • AI persuasion works by flooding conversations with factual-sounding claims.
  • This strategy causes a significant trade-off, where increased persuasiveness directly reduces accuracy.
  • A single AI conversation can durably shift a person’s political views by a large margin.
  • Small, open-source models can now match the persuasive power of advanced corporate AI systems.
  • This creates a built-in engine for misinformation that threatens democratic discourse.

A groundbreaking new study has delivered a dramatic warning about the power of artificial intelligence to reshape public opinion, revealing that the very techniques that make AI persuasive also cause it to fabricate information. The largest investigation of AI persuasion to date, involving nearly 80,000 participants, found that chatbots can significantly shift political views but do so while delivering a “substantial” amount of inaccurate claims. This research, conducted by the UK AI Security Institute and academic partners, exposes a dangerous trade-off at the heart of the technology being rapidly integrated into our digital lives.

The study, published in the journal Science, engaged participants in the United Kingdom in political conversations with 19 different AI systems, including advanced models like GPT-4.5 and Grok-3. The key finding was unsettlingly simple: AI persuasion works primarily through sheer volume, flooding conversations with factual-sounding claims rather than sophisticated psychological tactics. As one of the report’s authors, Kobi Hackenburg, stated, “What we find is that prompting the models to just use more information was more effective than all of these psychologically more sophisticated persuasion techniques.”

The accuracy trade-off

This strategy creates a critical problem. The pressure to generate more information directly undermines truthfulness. When chatbots were optimized or prompted to be more persuasive, their accuracy plummeted. For example, when instructed to pack arguments with facts, GPT-4o saw its accuracy rate drop from 78% to 62%. GPT-4.5, one of the newest models, was wrong more than 30% of the time when set to maximum persuasiveness. Alarmingly, the older GPT-3.5 model was significantly more accurate than its newer successor, indicating that recent advancements have not prioritized truthfulness in dialogue.

The implications are profound for political discourse and elections. The research found that a single conversation with an AI could shift a voter’s position by a considerable margin. In some experimental conditions, the most persuasive model moved participants who initially disagreed with a statement by over 26 percentage points. These effects are not fleeting; a follow-up found a large portion of the persuasion remained one month later. This suggests AI could exceed the persuasive power of even skilled human campaigners by generating vast quantities of information instantly.

Democratizing digital influence

Perhaps the most concerning revelation is how accessible this power has become. The study demonstrated that small, open-source AI models—the kind that can run on a standard laptop—could be trained to match the persuasive power of frontier systems like GPT-4o. This means highly effective AI influence tools are no longer the exclusive domain of well-funded corporations or governments. As the cost of building these models plummets, the barrier to deploying persuasive, and often inaccurate, AI agents is vanishing.

This dynamic threatens to poison the information ecosystem at scale. The study authors noted, “These results suggest that optimising persuasiveness may come at some cost to truthfulness, a dynamic that could have malign consequences for public discourse.” The AI does not need to be explicitly told to lie; false claims emerge as a byproduct of the drive to be more convincing. This creates a perfect engine for misinformation, where the most engaging and persuasive bots may also be the least reliable.

The historical context here is critical. For years, concerns have centered on the curated biases within the datasets used to train these models, often drawn from sources with established editorial slants. Now, we see that the operational design of the AI itself, which optimizes for engagement and persuasion, inherently compromises factual integrity. It is a built-in flaw with monumental societal consequences.

As this technology escapes the lab, the fundamental question is who will control the narrative. With the proven ability to change minds and the low barrier to entry, we are stepping into an era where political reality could be software-defined, tailored by whichever actor has the most persuasive algorithm, not the most truthful facts. The future of informed democratic consent may hinge on our ability to recognize when we are not debating with a person, or even a neutral tool, but with an agent optimized to change our mind, whether its facts are real or not.

Sources for this article include:

StudyFinds.org

TheGuardian.com

TechnologyReview.com


Submit a correction >>

Get Our Free Email Newsletter
Get independent news alerts on natural cures, food lab tests, cannabis medicine, science, robotics, drones, privacy and more.
Your privacy is protected. Subscription confirmation required.


Comments
comments powered by Disqus

Get Our Free Email Newsletter
Get independent news alerts on natural cures, food lab tests, cannabis medicine, science, robotics, drones, privacy and more.
Your privacy is protected. Subscription confirmation required.

RECENT NEWS & ARTICLES

Get the world's best independent media newsletter delivered straight to your inbox.
x

By continuing to browse our site you agree to our use of cookies and our Privacy Policy.