Image (AI-generated with DALL-E): Abusing AI for fake news.

War in the age of AI slop

Published Sunday, April 5, 2026 - 13:37

We wake each morning, reach for our phones, and unleash a stream of fabricated videos pulling us into a parallel world where we satisfy our desires, indulge our impulses, and remake justice and injustice in our own image.

Videos depicting Iranian missiles leveling Tel Aviv. Others showing US soldiers captured by the Iranian army. Clips of destruction that never happened in Lebanon, and others portraying towers collapsing in the UAE. All of it produced through advanced, low-cost, widely accessible AI applications.

These videos surged alongside the Israeli-American war on Iran at the end of last month, appearing next to real footage documented by people on their mobile phones in the streets. That authentic footage is a crucial counterweight, given the heavy censorship imposed on media in Israel and Iran, and Donald Trump’s threats to punish outlets that deviate from his shifting official line.

Yet reports show hundreds of millions of views flowing to hyper-realistic fake videos made to reflect what people want to see, to shape opinion, or to generate profit for their creators.

So, who is producing this content, why, and how? Why are people drawn to absurdities that stage total devastation in Israel or elsewhere as spectacle?

Why do users circulate satellite images that are “real” only in the sense that a satellite captured them, stripped of any factual grounding? What damage does this flood of falsehoods inflict, and can it be contained?

A flood of falsehoods

Lying is as old as war itself, and deception has always been one of its tools. But the technologies capable of producing videos and images that appear strikingly real have advanced at a staggering pace in recent years, so much so that false content now mimics truth with unnerving precision and can be generated in seconds.

In just the past few months, updated and more sophisticated versions of video-generation software have emerged. They allow users to animate real people or public figures, scripting them to say and do whatever one desires. They can reconstruct real streets, only to manipulate them into scenes of total ruin, or just as easily, into polished, embellished illusions.

Google has introduced its Veo application. OpenAI offers Sora. A Chinese platform, Seedance, has entered the field. Meanwhile, X provides similar capabilities through its Grok application—one that has recently drawn backlash after enabling users to turn real images of women and girls into explicit sexual videos.

The cheapest of these tools costs as little as $8 per month. Others can be accessed through intermediaries for less than $1.50 per minute, without any subscription at all.

At the same time, social media has evolved just as rapidly. Someone living near the Amazon rainforest—in Brazil, now a major hub for ad-driven content production—can produce Arabic-language videos claiming to document events in Dubai and reach millions across the Middle East.

Follow the money

Social media platforms such as Meta, X, TikTok and YouTube rely primarily on advertising, alongside other revenue streams. Advertisers and agencies place their products next to the content and creators that command the widest reach and influence.

The appeal of content, to both advertisers and platforms, is typically measured through metrics such as views, shares and engagement. What matters most, therefore, is visibility, interaction and attention.

Accordingly, these platforms pay content creators modest sums—small compared with their vast advertising revenues—through various monetization programs. Eligibility and payouts are tied to performance benchmarks such as view counts, watch time and engagement within specific timeframes. 

Social media controls the flow and spread of news

Recommendation algorithms amplify this dynamic further. They prioritize and recirculate content they predict will generate higher engagement, granting a structural advantage to the most attention-grabbing material. As the cost of producing synthetic content continues to fall, the result is a surge of cheap, low-quality, repetitive material flooding users’ feeds: what has come to be known as AI slop.
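To make that incentive concrete, here is a toy sketch in Python of engagement-weighted ranking. It is not any platform’s actual system; the fields and weights are invented purely to illustrate why material engineered for shock outranks sober reporting.

```python
# Toy illustration of engagement-weighted ranking. This is NOT any
# platform's real algorithm; the Post fields and the weights below are
# invented solely to show why attention-grabbing material gains a
# structural edge over sober reporting.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_views: float       # model's estimate of views
    predicted_shares: float      # model's estimate of shares
    predicted_watch_secs: float  # estimated seconds watched

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares count far more than passive views,
    # so shocking, emotionally charged content is rewarded.
    return (0.2 * post.predicted_views
            + 3.0 * post.predicted_shares
            + 0.5 * post.predicted_watch_secs)

feed = [
    Post("Calm, verified report", 1_000, 20, 45),
    Post("Fabricated missile-strike video", 1_000, 400, 90),
]

# The feed is ordered purely by predicted engagement; whether the
# content is true never enters the calculation.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):8.1f}  {post.title}")
```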

Earnings vary depending on the platform and the geographic origin of views, but on average, Facebook pays between $4 and $6 per 1,000 views. The content creation economy is vast; its total revenue was estimated at no less than $184.9 billion last year.
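A back-of-the-envelope calculation, using the per-thousand-view rates cited above, shows how quickly a single viral fake can pay off; the 16-million-view figure is illustrative.

```python
# Back-of-the-envelope earnings estimate using the $4-$6 per 1,000
# views range cited above. The 16-million-view figure is illustrative.
RATE_LOW, RATE_HIGH = 4.0, 6.0  # USD per 1,000 views

def earnings_range(views: int) -> tuple[float, float]:
    return views / 1_000 * RATE_LOW, views / 1_000 * RATE_HIGH

low, high = earnings_range(16_000_000)
print(f"One viral video: ${low:,.0f} to ${high:,.0f}")
# Output: One viral video: $64,000 to $96,000
# Replicated across hundreds of coordinated accounts, numbers like
# these explain the tactics described below.
```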

India, where content creators are believed to have earned between $20 billion and $25 billion last year, has become a major supplier of such videos. Some creators there generate monthly incomes ranging from hundreds to thousands of dollars, depending on platform, audience size and market reach.

This unfolds in a country where the average monthly income for full-time workers ranges between 16,538 Indian rupees (about $180) and 21,103 rupees (about $228), according to 2025 data. It is little surprise, then, that India has become one of the fastest-growing markets for content creation.

One common tactic among producers of fake content is to create hundreds of accounts and pages that publish the same video from different geographic locations—while in reality they are all controlled by a single individual or coordinated group, monetizing the content at scale.

Beyond profit-seekers, fabricated videos are also produced by organized groups and companies working on behalf of institutions or states. These actors often operate as coordinated teams, generating thousands of accounts and pushing a continuous stream of content to serve the interests of those who fund or direct them.

Alongside the profit-driven creator and the state seeking political influence—deploying propaganda to rally supporters and disorient opponents—stand thousands, sometimes millions, who believe and circulate this content for a range of reasons. They like, share and amplify it, allowing its spread to resemble the replication of a virus within the human body: rapid, invasive and often lethal.

For example, in November 2023, an account affiliated with Israel’s Ministry of Foreign Affairs published a video of a woman presented as a Palestinian nurse, visibly frightened, claiming that Hamas controlled Al-Shifa Hospital—the same hospital later destroyed by Israel during its genocidal war. The video, which was later proven fabricated, amassed more than 16 million views before it was removed.

The harm

Synthetic noise flooding our screens creates a parallel life detached from reality, blurring judgment, fragmenting decisions, and disorienting us. We may feel brief clarity or righteousness, even as we sink deeper into illusion.

This manufactured content not only deceives; it incites hostility and violence against Muslims, Arabs, migrants, and minorities worldwide, legitimizing harm—even killing—and the destruction of hospitals and infrastructure, as seen in Gaza.

This distortion intensifies in war, as in Gaza and Syria, where manipulated or decontextualized images and videos spread hate, incite violence, and deepen divisions. Many users accept and circulate this content as truth.

Last year, for example, an audio recording attributed to a Druze cleric surfaced online, allegedly insulting the Prophet Muhammad. It spread virally across social media despite the cleric’s denial and subsequent confirmation by Syria’s Ministry of Interior that the recording was fabricated. Sectarian violence soon followed; within days, more than 130 people—most of them Druze—were killed south of Damascus.

As misinformation proliferates, suspicion expands alongside it. Even truthful content becomes suspect. Reality itself is recast as fabrication, while fabricated narratives are embraced as truth—a condition that can only be described as collective disorientation.

Donald Trump offers a stark illustration of this logic: dismissing anything he opposes as “fake,” while engaging in fabrication with equal bluntness. The White House itself, at one point, used clips from video games in a promotional video touting the so-called success of US strikes on Iran.

What is the solution?

There is no ark to carry us through this flood. Disconnecting offers no salvation; it only isolates us. The only path is confrontation—by governments, platforms, civil society, and users—through parallel, reinforcing efforts.

Governments must not become the internet’s primary arbiters; their role should be limited to compelling tech companies to detect or restrict such content. Many governments—including the US, Russia, Iran, and Israel—spread these falsehoods, and in our authoritarian region, this responsibility cannot be entrusted to the state.

Still, there are examples of constructive regulatory frameworks, such as the Digital Services Act and the AI Act implemented in the European Union.

These laws push technology companies—including developers of AI tools and the platforms that distribute content—to watermark synthetic media so users can identify it, to deprive its creators of advertising revenue tied to engagement, and to limit or remove such content, particularly when it risks causing real-world harm. Violations can carry penalties of up to 35 million euros (about $38 million) under the AI Act, and up to 6% of global annual revenue under the Digital Services Act.

Platforms bear immense responsibility but face a structural challenge: billions of users and endless uploads make manual review impossible. AI has thus become the first line of defense, detecting synthetic media and assessing its harm.

Companies must invest more in detection and watermarking, limit the spread of such content, and avoid promoting or monetizing misleading material, especially in contexts like war, elections, or social tensions.
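One building block of such detection is perceptual hashing, which can flag the same fabricated frame re-uploaded across coordinated accounts. The sketch below is a minimal illustration using the third-party Pillow and imagehash Python libraries; the file names and distance threshold are hypothetical.

```python
# Minimal sketch of near-duplicate detection with perceptual hashing,
# one building block platforms can use to catch the same fabricated
# frame re-uploaded across coordinated accounts. Requires the
# third-party Pillow and imagehash packages; file names and the
# distance threshold are hypothetical.
import imagehash
from PIL import Image

def phash(path: str) -> imagehash.ImageHash:
    # A perceptual hash stays nearly identical under re-encoding,
    # resizing and light cropping, unlike a cryptographic hash.
    return imagehash.phash(Image.open(path))

known_fake = phash("flagged_missile_strike_frame.png")

for path in ["account_a_upload.png", "account_b_upload.png"]:
    # Subtracting two hashes yields their Hamming distance; a small
    # distance means the images are visually the same content.
    distance = phash(path) - known_fake
    if distance <= 8:
        print(f"{path}: likely re-upload of flagged content (d={distance})")
```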

None of this will happen without legal pressure, public pressure that threatens profits, or a measure of ethical commitment—often itself shaped by the need to maintain a certain standing in the market. One example is Anthropic, the company behind Claude, which refused to collaborate with the US Department of Defense on autonomous weapons systems or on surveillance of American citizens.

Your responsibility as a user

Ultimately, no solution exists without digital literacy—users able to distinguish real from fabricated content, regardless of purpose or bias.

Users must be equipped to identify misleading material by assessing sources, coherence, and signs of manipulation in images and videos, enabling them to filter the stream and discard falsehoods.

There are a few practical guidelines worth keeping in mind:

Do not trust any source on the internet simply because it has a website or a page, or because it publishes videos, images or reports—not even your friends who share such content. Many of us circulate these materials because, at some level, we want them to be true. The recent wave of viral stories about Benjamin Netanyahu offers a telling example.

It is not difficult to obtain news from professional, credible sources. Instead of endlessly scrolling through an avalanche of images and videos from dubious outlets—believing, perhaps, that some obscure “network of farmers in Kafr Al-Hanadwa” holds the definitive footage of events in Tehran or Tel Aviv—seek out verified reporting.

When in doubt, consult others. There are dozens of accounts dedicated to debunking fabricated content, including regional Arab initiatives such as the Arab Fact-Checkers Network and the Arabi Facts Hub, as well as country-specific platforms like Fatabyyano in Jordan, Verify-Sy in Syria and Matsadaqsh in Egypt.

Do not fragment the truth by sharing partial or misleading images and videos, and do not rush—it is better to be late with the truth than first with falsehood.

Without this, we risk living in a hall of mirrors, seeing only what we wish, while our views are shaped by fabricated content—some designed to mislead, some driven by profit or by systems and actors that benefit from confusion.