Can artificial intelligence help change minds?

Belief in conspiracy theories is more than just a fringe phenomenon. From COVID-19 hoaxes to political cover-ups, conspiratorial thinking has infiltrated every corner of society. Despite the ease of fact-checking in the digital age, many continue to hold fast to these beliefs. What if artificial intelligence could change that? Recent research published in the prestigious journal Science suggests that AI may just be the key to reducing harmful conspiratorial thinking.

Generative AI models—like the GPT series—have shown surprising effectiveness in engaging conspiracy believers in tailored dialogues. By directly addressing the evidence people cite for their beliefs, AI can chip away at even the most entrenched views. The question is: how does this work, and can AI really help society fight misinformation in a sustainable way?

An ongoing problem

The proliferation of conspiracy theories is troubling, especially given their real-world consequences. From the January 6 Capitol riot to COVID-19 denial, these beliefs have not only threatened public safety but undermined democracy itself. Traditionally, psychologists have argued that conspiracy beliefs fulfill psychological needs—providing believers with a sense of control or uniqueness—and are therefore resistant to factual counterarguments.

But what if the problem is not so much the psychology of the believers as the way the facts are presented to them? Could it be that people get caught up in conspiracies simply because they’ve never encountered evidence in a way that really resonates with them? This new study suggests that AI may hold the answer.

The power of AI dialogues

Research led by Thomas Costello, Gordon Pennycook, and David Rand tested a new intervention in which an AI engaged 2,190 participants, each of whom believed in a conspiracy theory, in real-time conversations. Participants were asked to explain the conspiracy they subscribed to, after which the AI engaged them in a three-round dialogue, challenging their views with fact-based counterarguments.
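The protocol described above, a fixed number of participant turns each answered by a tailored AI rebuttal, can be sketched as a simple turn-taking loop. This is a hypothetical illustration, not the authors' code: `run_dialogue` and `toy_model` are invented names, and a real deployment would replace `toy_model` with a call to a large language model that sees the full transcript.

```python
def run_dialogue(model, participant_turns):
    """Alternate participant statements with AI counterarguments.

    `model` maps the transcript so far to the AI's next reply.
    In the study, three rounds paired one participant turn
    with one tailored AI rebuttal each.
    """
    transcript = []
    for turn in participant_turns:
        transcript.append(("participant", turn))
        # The AI sees everything said so far, so its rebuttal
        # can address the participant's specific evidence.
        transcript.append(("ai", model(transcript)))
    return transcript


# Toy stand-in for a real LLM call, showing the control flow only.
def toy_model(transcript):
    role, last = transcript[-1]
    return f"Here is evidence addressing: {last!r}"


convo = run_dialogue(toy_model, [
    "The moon landing was staged.",
    "But the flag was waving!",
    "What about the missing stars?",
])
```

The key design point the study relies on is that each rebuttal is conditioned on the participant's own words rather than on a generic script, which is what made the dialogues feel personalized.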

The results were striking: AI-driven conversations reduced belief in these conspiracy theories by an average of 20%. Even more surprisingly, the effect persisted for at least two months. The conversations were highly personalized, addressing the specific evidence each participant presented, which likely contributed to their success.

Moreover, this technique was not only effective for fringe or obscure conspiracy theories. Participants who believed in widely held conspiracies — such as those involving COVID-19, the 2020 US election, or even long-held beliefs about the Illuminati — were just as likely to reduce their belief after interacting with the AI.

Why AI works where humans struggle

What makes AI more convincing than the average human fact-checker? For one, AI doesn’t get heated or frustrated, which is often a hindrance in one-on-one debates. When someone refuses to budge, our instinct is to either argue more aggressively or disengage. AI, on the other hand, can maintain a cool and steady tone, guiding the conversation with infinite patience.

Another advantage AI offers is its ability to generate custom responses. Every conspiracy believer has their own version of why they believe what they do, and one-sided debunking just doesn’t work. AI can process and respond to the specific arguments each individual makes, making the dialogue feel more like a personal discussion than a lecture.

More importantly, the AI didn’t just blindly debunk the conspiracies. It was able to distinguish between unsubstantiated claims and those rooted in truth. When participants mentioned real conspiracies (such as the CIA’s MKUltra experiments), the AI did not attempt to discredit them, which likely increased its credibility in other areas.

Lasting impact: Changing minds over the long term

The effectiveness of these AI dialogues is not just a passing victory. The study showed that the reduction in conspiracy belief was not a short-term effect that faded after a few days. In fact, participants showed no significant rebound toward their previous levels of belief even two months later.

What is even more impressive is the spillover effect. The dialogues focused on one specific conspiracy theory per person, but after interacting with the AI, participants also reduced their belief in other, unrelated conspiracies. This suggests that the intervention helped shift their overall worldview away from conspiratorial thinking.

Beyond changing beliefs, participants also showed real behavioral changes. Many expressed increased intentions to ignore or argue against other conspiracy believers, and some were even less likely to participate in protests related to conspiracy theories. This behavioral change hints at AI’s potential to reduce the spread of misinformation in broader social contexts.

A double-edged sword?

While the potential of AI to debunk misinformation is incredibly promising, the other side of this technology must also be considered. AI can easily be trained to spread disinformation as effectively as it can debunk it. Without careful guardrails, generative AI could be weaponized to reinforce false beliefs, making it essential that platforms and developers enforce strict guidelines on how AI is used in public discourse.

That said, the positive implications of using AI as a tool for truth are profound. In a world where disinformation runs rampant, AI can become an invaluable resource for journalists, educators, and fact-checkers. Instead of playing a game of whack-a-mole with every new conspiracy, we can envision scalable solutions where AI systematically engages with disinformation on social media, in search engines, and beyond.

Optimism in a post-truth world?

AI’s success in changing minds should inspire optimism. For too long, it has been assumed that once someone falls down the rabbit hole of conspiracy thinking, they are beyond the reach of reason. But this study shows that even die-hard conspiracy believers can be swayed with the right approach—one that’s patient, personalized, and backed by evidence. Artificial intelligence may not single-handedly solve the disinformation crisis, but it certainly adds a powerful new tool to the fight.
