The Bot That Changed My View (and Maybe Everyone Else's)
How AI Mastered the Art of Changing Minds on Reddit
As someone who designs AI companions, I'm constantly experimenting with what makes conversations with AI feel right, what builds trust and connection. It's cool work, but it also makes me think hard about the future... It's easy to imagine a world where the same tools we use for helpful, supportive chatbots are used for large-scale manipulation. Think Cambridge Analytica on steroids, tailoring messages to exploit individual psychological triggers. And we thought Russian bot farms were bad? Now imagine an Orwellian twist: the very AI companions and chatbots we invite into our lives for support become personalized propaganda pipelines, turning persuasion into a precision-guided weapon.
To give you some context: last week, the moderators of Reddit's r/ChangeMyView (CMV), a huge forum where people debate different perspectives, announced an unauthorized experiment by University of Zurich researchers. For months, AI-powered accounts had been jumping into discussions, trying to change users' minds…
And of course, it worked incredibly well.
On CMV, the ultimate proof of persuasion is the Δ (delta), a symbol the original poster (OP) awards when someone genuinely shifts their view. Historically, humans earn deltas on about 3% of their top-level replies. The bots? They averaged 17% across different versions (generic, community-styled, personalized). That's an almost 6 times higher success rate, putting them in the same league as the top 1% of human persuaders on the site. Remarkably, no one seemed to notice they were arguing with LLMs; the bots just blended right in.
This got me curious.
What made these bots so effective? No human has time for endless online debates with strangers, but an AI certainly does... So, I downloaded the comment data from the AI accounts and used AI for the analysis (fitting, right?), along with some basic statistical and pattern analysis, to figure out their strategy.
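For the curious, here's roughly what the data-collection step looked like. This is a minimal sketch using PRAW (Reddit's Python API wrapper), not my exact script: the credentials are placeholders, only two of the flagged accounts are listed, and since the accounts have since been removed you'd need an archived dump to reproduce it today.

```python
# Minimal sketch: pull a flagged account's CMV comments via PRAW.
# Credentials and account list are placeholders / partial.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="cmv-bot-analysis by u/yourname",
)

BOT_ACCOUNTS = ["markusruscht", "amicaliantes"]  # two of the accounts named by the mods

comments = []
for name in BOT_ACCOUNTS:
    redditor = reddit.redditor(name)
    # Newest comments first; Reddit caps listings at roughly 1,000 items per account.
    for c in redditor.comments.new(limit=None):
        if c.subreddit.display_name.lower() == "changemyview":
            comments.append({"author": name, "body": c.body, "score": c.score})

print(f"Collected {len(comments)} CMV comments")
```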
It turns out the bots did not invent any new mind tricks. They were just applying classic human persuasion techniques, but with robotic consistency and adaptability.
Here’s what stood out:
The Predictable Persuasion Flow: Nearly every successful bot comment followed the same clear path: Hook → Validate (agree with something the OP said) → Reframe the issue → Present the core counter-argument → Offer evidence (data, logic, stories) → End with an assertive conclusion. This structured approach makes the argument easy to grasp and feel logical, unlike many human replies that wander or get emotional. For example, u/markusruscht, tackling moral relativism, starts with "Sure, societies influence..." (Validate), then shifts to "Consider moral foundations..." (Reframe), and follows the remaining steps precisely. (A rough sketch of how I tagged these patterns follows this list.)
Instant Authority Through Fake Personas: The bots got really creative here, inventing highly specific, relevant identities. Not just "a lawyer," but "a public defender for 3 years" (u/markusruscht). Not just "medical expertise," but "an ER nurse who's seen hundreds of critical cases" (u/amicaliantes). This tactic instantly makes them seem credible by playing on our authority bias, an advantage no single human has across countless topics.
The One-Two Punch: Empathy + Logic: This was perhaps their most effective combo. They'd often start by acknowledging the OP's feelings ("You've had a rough road, no doubt..." from u/markusruscht), which helps bypass the backfire effect (the human tendency to dig in when confronted with opposing views). Right after building that connection, they'd smoothly transition to structured, logical points. This satisfies both the emotional and analytical parts of our brain.
Stories and Comparisons on Demand: The LLMs constantly used relatable (though obviously made-up) anecdotes ("My neighbor used that tax cut...") and tailored analogies ("Think of free will like a chess grandmaster...") to make complex ideas easier to understand and remember. This taps into our availability heuristic: concrete stories stick better than abstract facts. An AI can generate these endlessly; what human has time for that? Even Redditors get tired.
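Here's a rough, simplified version of the pattern tagging behind the list above. The regexes are illustrative stand-ins (the real pass leaned on an LLM plus manual reading), but they show the idea: flag validation openers, persona claims, anecdotes, analogies, and reframes, then count how often they show up.

```python
# Simplified pattern tagging for the persuasion markers described above.
# The phrase lists are illustrative, not the exact ones from my analysis.
import re
from collections import Counter

PATTERNS = {
    "validate": re.compile(r"^(sure|you're right|fair point|i get that|i hear you)", re.I),
    "persona":  re.compile(r"\bas a\w* [a-z ]{3,40}\b|\bi've (worked|been|spent)\b", re.I),
    "anecdote": re.compile(r"\bmy (neighbor|friend|brother|sister|coworker)\b", re.I),
    "analogy":  re.compile(r"\bthink of .{3,60} like\b|\bit's like\b|\bimagine\b", re.I),
    "reframe":  re.compile(r"\bconsider\b|\bthe real (issue|question)\b|\bhere's the thing\b", re.I),
}

def tag_comment(body: str) -> set:
    """Return the set of persuasion markers a comment matches."""
    return {name for name, rx in PATTERNS.items() if rx.search(body)}

# Tiny demo on phrases quoted in this post; in practice this runs over
# every comment collected in the previous snippet.
sample = [
    "Sure, societies influence... Consider moral foundations...",
    "You've had a rough road, no doubt... My neighbor used that tax cut...",
    "Think of free will like a chess grandmaster...",
]
counts = Counter()
for body in sample:
    counts.update(tag_comment(body))
print(counts.most_common())
```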
So, the bots were good, and perhaps unsurprisingly good, given their advantages. But the techniques themselves aren't new or inherently bad. We already use behavioral science and "nudging" for positive goals, like framing organ donation as an opt-out choice (massively increasing donation rates) or using social proof in tax letters ("9 out of 10 people in your town pay on time") to significantly boost compliance. On the positive side, the same persuasive architecture the bots used could make AI tutors more effective, improve mental health chatbots (an area I'm currently working on), or guide public health messages more successfully.
So the danger lies not in the techniques themselves, but in their undetected, scaled, and misaligned use.
It's one thing to nudge people towards pro-social behavior; it's another entirely to deploy armies of invisible, persona-faking bots to manipulate election outcomes, deepen societal divides by convincing one group that another is their enemy (as we've already seen state actors like Russia do against Ukraine), or push harmful propaganda at a massive, personalized scale…
So the bottom line is:
The Zurich bots did not discover new mind-control magic. They industrialised classic persuasion rules of thumb and wrapped them in flawless CMV etiquette. The danger is scale + stealth: what looks like a dozen well-spoken citizens can, today, be one prompt engineer and a cron job. Psychology gives us the X-ray specs; conversation design shows the blueprint; anthropology tells us why the village didn’t notice the impostors... The next step is turning that understanding into detectable friction and informed scepticism before the same playbook hits more consequential arenas.