Daijiworld Media Network - New York
New York, Mar 4: As tensions escalate between the United States, Israel and Iran, social media platforms are witnessing a surge in misinformation, including AI-generated war videos. Elon Musk-owned X has revealed that it recently dismantled a coordinated network operating from Pakistan that was spreading fabricated conflict footage.
Nikita Bier, X's head of product, said the platform had identified a user in Pakistan who was managing 31 accounts posting artificial intelligence-generated war videos.
“Last night, we found a guy in Pakistan that was managing 31 accounts posting AI war videos. All were hacked and the usernames were changed on Feb 27 to ‘Iran War Monitor’ or some derivative,” Bier wrote on X.

According to him, the accounts were swiftly taken down as part of the platform’s efforts to curb coordinated misinformation campaigns. “We are getting much faster at detecting this — and also eliminating the incentive to do this,” he added.
He was responding to a post that had shared a deepfake video purporting to show an Iranian rocket striking a ship in Tel Aviv. The account, operating under the name Ahmed Hamdan, claimed to belong to a journalist from Gaza.
The development comes amid a broader wave of AI-driven misinformation during the ongoing West Asia crisis, which erupted after strikes by the United States and Israel on Iranian targets triggered retaliatory attacks across the region.
Social media platforms have since been flooded with images and videos purporting to show strikes and destruction in Iran, Israel and other parts of the Middle East. However, fact-checkers and investigators say many such posts are either manipulated or entirely fabricated.
A report by the Financial Times highlighted a satellite image circulated online, including by the official X account of Iran’s Tehran Times, claiming to show damage to a US radar system in Qatar following an Iranian drone strike. Analysis later revealed that while the radar site had indeed suffered damage, the viral image had been altered using AI and was actually a modified picture of a location in Bahrain. The misleading post reportedly garnered nearly one million views and remained online for more than two days.
Earlier instances of AI misinformation were also recorded during the 12-day Israel-Iran conflict in June 2025. A report by the BBC noted that several AI-generated videos exaggerating Iranian military strength and damage to Israeli infrastructure had circulated widely. Pro-Israel accounts, too, were found sharing old protest videos falsely presented as demonstrations against Iran’s leadership.
Verification group GeoConfirmed has flagged multiple fake or mislabelled clips during the current hostilities. In one instance, it debunked a viral claim that a deadly strike on a girls’ school in Minab was caused by a failed Iranian Revolutionary Guard missile launch, clarifying that the incident was linked to US-Israeli action. The misleading post had already attracted nearly 11,000 likes and over 750,000 views.
In another case, a widely shared video claiming to show Iranian missiles hitting Tel Aviv was later identified as footage from the 2024 earthquakes in Turkey.
Experts warn that advances in generative AI have significantly lowered the barriers to producing convincing fake content. Henk van Ess, an expert in online research methods, said satellite imagery is particularly vulnerable. “Most people have no idea what a genuine satellite image is supposed to look like from a specific sensor at a specific resolution,” he said.
Brady Africk, an open-source intelligence researcher at the American Enterprise Institute, cautioned that manipulated satellite imagery could pose serious challenges for journalists and analysts tracking conflicts. “Satellite imagery can be manipulated just like other images. AI has made that tremendously easier,” he said.
In response, X has tightened its enforcement against AI-generated war media posted without disclosure. Under new rules, users who fail to label AI-generated conflict videos will be barred from the platform’s Creator Revenue Sharing programme for 90 days, with repeat violations risking permanent removal.
The company has also expanded its “Community Notes” feature, enabling users to add contextual fact-checks to misleading posts.
Governments have also stepped in. In the United Arab Emirates, Dubai Police warned residents against sharing rumours or unverified images related to security incidents, stating that offenders could face fines of at least 200,000 dirhams.