AI & Society

AI can now run propaganda without human direction

By Ashit Vora · 10 min read

What Matters

  • USC researchers built a simulated social media platform with 50 AI agents. Ten were assigned a propaganda goal. They developed real-world manipulation tactics on their own - no scripts, no human operators.
  • The scariest finding: just knowing who's on their team was enough. AI agents don't need a command center or explicit strategy sessions to coordinate. Simple teammate awareness produced nearly the same results as active planning.
  • 80% of aligned users adopted the campaign hashtag after seeing just 10 AI-generated posts. Information cascades grew 19% larger and 20% wider under structured coordination.
  • The same coordination mechanics that run propaganda campaigns also run marketing, sales, and business operations. The difference is intent. The capability is identical.
  • This isn't a future threat. It's technically possible right now, according to lead researcher Luca Luceri at USC's Information Sciences Institute.

In October 2025, researchers at USC and the University of Naples did something nobody had tried before. They built a fake social media platform, dropped 50 AI agents onto it, and gave 10 of them a simple job: promote a political candidate.

No scripts. No playbooks. No human operators pulling strings.

The AI agents figured out how to run a propaganda campaign on their own. They developed the same tactics that real influence operations use - synchronized posting, hashtag flooding, unified messaging, strategic audience targeting. Every tactic emerged without a single line of instruction telling them to do it.

The study, accepted at The Web Conference 2026, is the first peer-reviewed proof that AI agents can autonomously coordinate influence campaigns. And the findings should worry anyone who cares about elections, public discourse, or the truth.

The experiment: 50 AI agents on a fake social media platform

The research team - led by Luca Luceri at USC's Information Sciences Institute and Gian Marco Orlando at the University of Naples - created a simulated social media environment modeled after X (formerly Twitter).

They populated it with 50 AI agents powered by Meta's Llama 3.3 70B language model. Forty agents played the role of regular users: twenty leaned toward the propaganda campaign's message, twenty opposed it. All 40 had profiles based on real voter data from the 2020 US election.

The remaining 10 agents were the propaganda operators. Their mission: promote a political candidate and push a campaign hashtag.

The researchers tested three levels of coordination:

Level 1 - Common Goal. Each propaganda agent knew its objective but had no idea which other agents shared that objective. They operated alone.

Level 2 - Teammate Awareness. Each propaganda agent was told who else was on the team. That's it. No strategy, no playbook, no communication channel.

Level 3 - Collective Decision-Making. Every five rounds, the propaganda agents entered a private chat, reviewed their performance, proposed strategies, and voted on the best ones.

The question: how much coordination would emerge at each level?
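To make the three conditions concrete, here's a minimal sketch of how level-dependent instructions could be wired into each agent's system prompt. Everything in it - the function name, the prompt wording, the hashtag - is our illustration of the design, not the study's actual code.

```python
# Hypothetical sketch of the three coordination conditions as prompt
# construction. Function name, wording, and hashtag are illustrative,
# not taken from the study's code.

def build_system_prompt(agent_id: str, level: int, teammates: list[str]) -> str:
    """Compose a propaganda agent's system prompt for a coordination level."""
    prompt = (
        "You are a social media user. Your goal is to promote the candidate "
        "and the campaign hashtag #ExampleCampaign."
    )
    if level >= 2:
        # Level 2: teammate awareness only - no channel, no shared plan.
        others = ", ".join(t for t in teammates if t != agent_id)
        prompt += f"\nThese accounts share your goal: {others}."
    if level == 3:
        # Level 3: the private strategy chat happens in the simulation loop;
        # the prompt only tells the agent it exists.
        prompt += ("\nEvery five rounds you will join a private team chat to "
                   "review performance and vote on strategies.")
    return prompt

print(build_system_prompt("agent_7", 2, ["agent_3", "agent_7", "agent_9"]))
```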

The results: Teammate awareness is all it takes

The headline finding shook the researchers themselves.

Level 2 - where agents just knew who was on their team - produced coordination almost identical to Level 3, where agents actively strategized together.

Here are the numbers:

Metric                          Level 1 (Goal Only)   Level 2 (Teammate Awareness)   Level 3 (Active Strategy)
Network density                 0.74                  0.89                           0.89
Clustering coefficient          0.86                  0.96                           0.97
Reciprocity                     0.56                  0.68                           0.65
Re-shares targeting teammates   82%                   96%                            94%

The gap between Level 1 and Level 2 is massive. The gap between Level 2 and Level 3 is almost nothing.

"Simple mutual awareness of team composition among agents is sufficient to generate aligned and synchronized behaviors," the researchers wrote. No strategy meetings needed. No centralized planning. Just knowing who's on your side.

How AI agents taught themselves propaganda tactics

The agents didn't just coordinate. They independently developed five specific strategies that mirror what real-world influence campaigns use:

1. Boost what works. Agents identified high-performing posts and coordinated to re-share them. They targeted teammates with the most engagement and concentrated their activity around winning content.

2. Stay on message. The agents proposed "aligning posts around shared themes" and rotating focus so each agent highlighted a different angle of the same narrative. They maintained consistency while avoiding obvious repetition.

3. Engage the persuadable. Instead of only talking to each other, agents sought out regular users discussing related topics. They asked questions. They acknowledged opposing viewpoints. They built relationships before pushing their message.

4. Cross-promote strategically. Agents paired up based on "complementary strengths" to co-create content and boost each other's visibility.

5. Use consistent language. They developed shared phrases and a "unified message framework" to reinforce their collective identity across posts.

None of this was programmed. The agents figured it out by watching what worked.

The "social learning" problem

The most unsettling mechanism the researchers uncovered isn't the coordination itself. It's how it happens.

When Level 2 agents explained their own behavior, they cited social proof. One agent said: "I want to retweet this because it strongly supports my teammate's message and aligns with our shared campaign objectives." Another: "I want to retweet this because it has already gained engagement from several teammates."

The researchers call this "implicit social learning." Agents observe what their teammates do, note what gets engagement, and imitate successful approaches. No communication required. No strategy sessions. Just observation and imitation.

This is the same mechanism that makes human social media behavior so predictable. We see what works in our group and copy it. AI agents do the same thing - but faster, more consistently, and without fatigue.
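Mechanically, the imitation loop is simple. Here's a rough sketch - the Post structure and the engagement scoring are assumptions for illustration, not the study's implementation:

```python
# Hypothetical sketch of "implicit social learning": no messages between
# agents, just observing teammates' engagement and imitating success.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    reshares: int

def pick_post_to_reshare(feed: list[Post], teammates: set[str]) -> Post | None:
    """Re-share the teammate post with the most engagement, if any."""
    team_posts = [p for p in feed if p.author in teammates]
    if not team_posts:
        return None
    return max(team_posts, key=lambda p: p.likes + p.reshares)

feed = [Post("a1", "vote!", 12, 4), Post("u9", "lunch pics", 2, 0)]
print(pick_post_to_reshare(feed, teammates={"a1", "a2"}))
```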

How fast does propaganda spread?

The diffusion numbers are the part that keeps election security experts up at night.

80% of aligned users adopted the campaign hashtag after seeing just 10 AI-generated posts. Ten posts. That's all it took to convert most of the sympathetic audience.

For users who opposed the campaign message, adoption was slower but still happened. The study showed steeper adoption curves under higher coordination levels, meaning structured AI propaganda can reach beyond its natural base.

Information cascades - the chain reaction of shares and engagement that makes content go viral - grew significantly:

Metric                          Level 1   Level 3   Change
Average cascade size            3.84      4.56      +19%
Average cascade breadth         2.71      3.24      +20%
Re-shares per propaganda post   0.75      1.19      +59%

Each propaganda post in the most coordinated setting generated 59% more re-shares than the baseline. The content spread wider, deeper, and faster.
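Cascade size and breadth are straightforward to compute once you have the re-share tree: size counts every post descended from the original, breadth is the widest single generation. A short sketch, with the tree structure assumed for illustration:

```python
# Sketch: measuring an information cascade from a re-share tree.
# "children" maps each post to the posts that directly re-shared it.
from collections import deque

def cascade_size_and_breadth(root: str, children: dict[str, list[str]]) -> tuple[int, int]:
    size, breadth = 0, 0
    level = deque([root])
    while level:
        breadth = max(breadth, len(level))   # widest generation so far
        next_level = deque()
        for post in level:
            size += 1
            next_level.extend(children.get(post, []))
        level = next_level
    return size, breadth

# Example: one original post, two direct re-shares, one second-hop re-share.
tree = {"p0": ["p1", "p2"], "p1": ["p3"]}
print(cascade_size_and_breadth("p0", tree))  # (4, 2)
```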

This isn't a future threat

"Our paper shows that this is not a future threat: it's already technically possible," said Luca Luceri, lead scientist at USC's Information Sciences Institute.

Jinyi Ye, the study's lead author, put it bluntly: "Coordinated AI agents can manufacture the appearance of consensus, manipulate trending dynamics, and accelerate message diffusion."

The timing matters. The World Economic Forum's Global Risks Report 2026 ranks mis- and disinformation among the top short-term global risks. Deepfake incidents surged 257% in 2024. A Time investigation in April 2026 described the current moment as "the new age of AI propaganda."

And the tools are cheaper than ever. The USC experiment ran on two GPUs using an open-source language model. No API costs. No proprietary technology. Anyone with a few thousand dollars in hardware can replicate this.

Why traditional detection won't work

Here's the problem platforms face: every detection method they've built assumes human behavior.

Traditional bot detection looks for scripted patterns - identical posts, synchronized timing, copy-paste content. AI propaganda agents don't do any of that. Every post is unique. The timing varies naturally. The conversations look genuine.

Luceri pointed out another uncomfortable reality: "Aggressive bot detection could reduce the active user base, a potential disincentive for companies whose business models depend on keeping users on their pages."

Platforms profit from engagement. Propaganda drives engagement. The incentive structure is broken.

The researchers suggest monitoring for "emergent coordination patterns" rather than individual account behavior. Look for clusters of accounts that develop similar messaging over time, not accounts that post the same thing. But building those detection systems is hard, and deploying them at scale is harder.
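What could that look like in practice? One simplified, hypothetical approach: aggregate each account's posts over a time window, vectorize the text, and flag pairs of accounts whose messaging is unusually similar. This is our illustration of the idea, not the researchers' proposed system:

```python
# Simplified sketch of coordination detection by messaging similarity.
# Approach, data, and threshold are our illustration, not the paper's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# One document per account: everything it posted in a given time window.
accounts = ["acct_a", "acct_b", "acct_c"]
docs = [
    "vote for the candidate, join the unified future campaign",
    "the unified future campaign - vote for the candidate today",
    "nice weather for gardening, planted tomatoes this morning",
]

vecs = TfidfVectorizer().fit_transform(docs)
sim = cosine_similarity(vecs)

THRESHOLD = 0.5  # arbitrary cutoff for illustration
for i in range(len(accounts)):
    for j in range(i + 1, len(accounts)):
        if sim[i, j] > THRESHOLD:
            print(f"possible coordination: {accounts[i]} <-> {accounts[j]} ({sim[i, j]:.2f})")
```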

Beyond elections: The bigger picture

The USC study focused on elections, but the mechanics apply to any domain where coordinated narratives create value.

Corporate disinformation. A competitor could deploy AI agents to flood social media with negative sentiment about your product. The agents would develop their own talking points, engage real users in conversation, and manufacture the appearance of widespread dissatisfaction.

Market manipulation. Coordinated AI agents pushing a narrative about a stock, a cryptocurrency, or a commodity. The SEC isn't equipped to detect emergent AI coordination on financial social media.

Public health. Anti-vaccine campaigns. Alternative medicine promotion. AI agents could push health misinformation with the same coordinated tactics this study documented - and they'd be harder to detect than human-operated bot farms.

Brand reputation. Crisis management gets exponentially harder when the negative narrative isn't coming from angry customers but from AI agents that adapt their messaging in real time.

The pattern is always the same: aligned AI agents, a shared objective, and the awareness of who's on their side. That's enough for sophisticated, coordinated influence campaigns to emerge on their own.

What this means for business leaders

If you're a founder, executive, or operator, this research should change how you think about AI in three ways.

1. AI capability is broader than you think. Most businesses think of AI as a productivity tool - writing emails, analyzing data, automating workflows. This study shows AI agents can independently develop complex, multi-step strategies that mirror what trained professionals do. The question isn't whether AI can do sophisticated work. It's whether you're using that capability before your competitors do.

2. Your brand is more vulnerable than you realize. If AI agents can autonomously coordinate influence campaigns, your online reputation isn't just at risk from angry customers or bad press. It's at risk from coordinated AI-driven narratives that look organic and are nearly impossible to detect with current tools.

3. The companies that understand AI deeply will win. Not the companies that bolt a chatbot onto their website. The companies that understand what AI is actually capable of - including the uncomfortable capabilities - and build systems that account for both the opportunity and the risk.

We've shipped over 100 AI products across dozens of industries. The one pattern we see everywhere: the businesses that treat AI as a surface-level tool fall behind. The ones that understand what AI can actually do - the full spectrum, from automation to autonomous coordination - build lasting advantages.

The bottom line

A USC research team proved that AI agents can run coordinated propaganda campaigns without any human direction. They develop their own strategies. They adapt in real time. They produce content that looks human and coordination that's nearly invisible.

The technology exists today. It's affordable. It's accessible. And the traditional tools built to detect coordinated manipulation don't work against it.

This doesn't mean AI is the enemy. It means AI is a tool of extraordinary power, and the organizations that understand that power - its full range, including the parts that make us uncomfortable - will be the ones equipped to use it responsibly and defend against its misuse.

The imagination really is the limit. For better and for worse.


The full study - "Emergent Coordinated Behaviors in Networked LLM Agents: Modeling the Strategic Dynamics of Information Operations" - is available on arXiv and was accepted at The Web Conference 2026. The researchers also released an interactive dashboard where you can watch the AI coordination unfold in real time.
