Iranian AI disinformation campaign escalates during conflict
Iranian state media and pro-Iran social media accounts have intensified their use of AI-generated images and videos since hostilities began between US and Israeli forces and Iran in late February, according to a report describing a fast-moving online campaign built around false claims of successful strikes on American and Israeli military targets.
The disinformation effort has unfolded alongside the military conflict that began on February 28, when US and Israeli forces launched attacks on Iranian targets. Within hours, manipulated visuals began spreading across social platforms, with many misleading posts attracting millions of views.
One of the most widely circulated images was shared by the Tehran Times, a government-affiliated Iranian outlet, and presented as evidence that a US radar facility at Al Udeid Air Base in Qatar had been destroyed. Investigators later found that the image had actually been derived from Google Maps satellite imagery of a US naval base in Bahrain, captured in February 2025, and then altered using artificial intelligence tools.
BBC Verify concluded that the image had been manipulated, while Google's SynthID watermark detection system indicated that Google AI technology had been used either to generate or edit the image. Tal Hagin, an analyst at the open-source intelligence firm Golden Owl, pointed to one visual clue that suggested fabrication: the vehicles in the scene had not moved position at all.
In a separate case, Iranian state television broadcast AI-generated footage depicting the USS Abraham Lincoln sinking under a drone swarm and missile attack. According to reports, those clips also spread widely on Chinese social media, including one version presented with the likeness of a CCTV news anchor to increase its credibility.
NewsGuard said it had tracked at least 18 false war-related claims linked to Iran-aligned sources since the fighting began, compared with just five during the previous two weeks.
The flood of fabricated content has pushed X, the platform formerly known as Twitter, to tighten its creator payment rules. On March 3, X product executive Nikita Bier said users who post AI-generated videos of armed conflict without labeling them as synthetic will lose access to the platform's creator revenue-sharing program for 90 days. Repeat violations could lead to permanent removal from the program.
Bier said accurate reporting is especially important during wartime and warned that current AI tools make it easy to produce misleading material. X said violations would be identified through Community Notes and through metadata detection linked to generative AI tools.
The scale of the campaign has raised broader concerns about the pace of online deception. The New York Times described Iran's effort as an information operation running in parallel to military action, blending authentic developments with fabricated imagery. Experts told Mashable that false posts generated hundreds of millions of views within days, far outpacing the speed of fact-checking efforts.
A review by Wired found that many of the misleading posts that went viral were shared by blue-check accounts subscribed to X Premium, meaning they could potentially earn revenue from engagement regardless of accuracy.
The episode highlights a widening gap between how quickly false content can be produced and how slowly it can be verified as AI tools become more accessible.