Fake News, Roman Edition: The Propaganda Playbook Is Older Than You Think

By Annals of Now

When researchers and lawmakers convene today to discuss the threat of AI-generated disinformation, the implicit assumption underlying most of those conversations is that humanity is navigating genuinely uncharted territory. The tools are new, the scale is unprecedented, and the danger — so the argument goes — is something our institutions were never designed to handle. History, characteristically, disagrees.

The Roman Forum was not a passive public square. It was a living, breathing information ecosystem — chaotic, competitive, and thoroughly manipulable by anyone with sufficient resources, audacity, and a reliable network of well-placed mouths.

The Original Influence Campaign

Julius Caesar understood something that Mark Zuckerberg's engineers would rediscover two millennia later: attention is a resource, and whoever controls the channels through which information flows controls the crowd. Long before his crossing of the Rubicon, Caesar was an aggressive manager of his own public narrative. As consul in 59 BC he instituted the Acta Diurna — a proto-newspaper written on whitened boards and posted in public spaces — and his own accounts of the campaigns in Gaul, circulated as the Commentarii, ensured that his military victories reached Roman citizens in the framing he preferred, rather than the framing his political enemies might supply.

This was not passive record-keeping. It was content strategy.

Cicero, that most eloquent of Caesar's contemporaries and eventual adversaries, was equally sophisticated in his manipulation of public sentiment, though he preferred the medium of letters — which circulated widely among the Roman elite and were understood by all parties to be semi-public documents. Forged correspondence was a recognized hazard of Roman political life. Accusations that a rival had sent treasonous letters to foreign powers, or had privately expressed positions that contradicted his public stance, were standard instruments of political destruction. The letters may or may not have existed. The accusation, once made loudly enough in the right company, could achieve the same effect regardless.

Sound familiar?

Rumor Networks as Infrastructure

What made Roman disinformation particularly effective was not any single tactic but the infrastructure supporting it. Roman politicians kept nomenclatores — slaves whose job was literally to remember names and social connections — alongside informal networks of clients, freedmen, and sympathetic merchants whose daily movement through the city made them ideal carriers of carefully seeded narratives.

A whisper placed with three merchants near the Temple of Saturn on a busy market morning could reach a thousand ears before sundown. The message didn't need to be true. It needed to be emotionally resonant, socially credible, and attached to a source that couldn't easily be verified or refuted. These are, with only superficial modification, the design principles of a successful social media influence campaign in 2024.
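
The arithmetic behind that claim is worth making explicit. Here is a minimal branching-process sketch — the three seeded merchants come from the scenario above, but the fanout and number of retelling rounds are illustrative assumptions, not historical data:

    # Branching-process sketch of Forum rumor spread. Every parameter
    # below is an illustrative assumption, not a historical figure.
    seeds = 3    # merchants seeded near the Temple of Saturn
    fanout = 4   # assumed new listeners reached per carrier per round
    rounds = 4   # assumed retelling rounds between morning and sundown

    reached = 0
    carriers = seeds
    for r in range(1, rounds + 1):
        new_listeners = carriers * fanout
        reached += new_listeners
        carriers = new_listeners  # each fresh listener retells next round
        print(f"round {r}: {reached} cumulative ears")
    # round 4 prints 1020 cumulative ears -- past the thousand mark

The point is not the specific numbers but the shape of the curve: each retelling multiplies the audience, so a seeded whisper needs only a handful of modest rounds, not a citywide broadcast.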

The parallel extends further. Roman political operatives understood the value of what we would now call "manufacturing consensus" — the practice of creating the impression of widespread belief in a position, which then generates actual widespread belief through social conformity. Paid applause groups — what later centuries would call a claque — were deployed not just in theaters but in the Forum itself, creating the auditory illusion of popular support. Modern researchers call the digital equivalent "astroturfing." The Romans simply called it politics.

The Delivery Mechanism Changes. The Instinct Doesn't.

The current anxiety about large language models producing scalable propaganda at negligible cost is legitimate, but it benefits from a crucial reframe. What AI has done is not introduce a new human pathology. It has dramatically reduced the cost of entry for a behavior that powerful actors have pursued whenever the tools permitted it.

Caesar needed wealth, a loyal staff, and physical proximity to Roman population centers. A coordinated influence campaign in the 2016 U.S. election required a budget, servers, and a few hundred operators in a St. Petersburg office building. A sufficiently prompted AI in 2025 requires considerably less. The trajectory is one of democratized capability, not novel intent.

This distinction matters enormously for policy. If disinformation were a technology problem — a bug introduced by a specific tool — it could theoretically be solved by regulating or eliminating that tool. But if it is a behavioral constant, an expression of the enduring human appetite to shape the beliefs of others in service of power, then the appropriate response is institutional resilience, media literacy, and the cultivation of epistemic habits that have always been the only real defense against manipulation.

Rome, for the record, never solved it. The Republic fell anyway — not because the disinformation was too sophisticated to counter, but because the institutions designed to process contested information had been systematically undermined by the same actors running the influence campaigns.

What the Forum Tells the Feed

The most instructive aspect of the Roman case is not the tactics themselves but the audience's relationship to them. Roman citizens were not passive victims of propaganda. Many were sophisticated, skeptical consumers of political information who understood perfectly well that everything they heard in the Forum was filtered through someone's interests. And yet the campaigns worked. They worked because emotional resonance reliably outcompetes analytical skepticism in the moment of reception — a finding that modern behavioral psychology has confirmed repeatedly, on samples far larger and more representative than Caesar's Rome.

History is, in this sense, the more honest laboratory. The experiments run on Roman voters and American social media users are testing the same underlying variable: the tension between the human capacity for critical thought and the human vulnerability to a well-told, emotionally charged story.

The algorithm didn't create that tension. It just runs the test faster.

When Congress summons technology executives to testify about AI-generated disinformation, the implicit subtext is that Silicon Valley broke something that was previously intact. The Roman Senate would find that premise richly amusing — assuming, of course, that the invitation hadn't been forged.