TLDR
- X suspends monetization for 90 days for creators posting undisclosed AI-generated war videos
- AI-generated conflict footage must be labeled, or creators lose access to ad revenue
- The policy targets payouts, not post removal, and applies only to armed-conflict content
X has introduced a 90-day monetization suspension for creators who post undisclosed AI-generated war videos. The policy links revenue access to clear disclosure standards during armed conflicts. The move reflects X’s effort to limit misleading synthetic media while preserving platform authenticity.
X ties monetization to AI disclosure rules
X updated its Creator Revenue Sharing policies to address AI-generated conflict footage. Creators who fail to label synthetic war videos will be suspended from revenue sharing for 90 days, tying disclosure requirements directly to monetization eligibility.
The policy applies only to videos that depict armed conflicts; it does not ban AI-generated content elsewhere on the platform. Instead, X requires transparency when creators publish sensitive wartime material.
Nikita Bier, head of product at X, announced the change on Wednesday. He stated, “During times of war, people must have access to authentic information on the ground.” He added that modern AI tools make deceptive content easy to produce.
Enforcement relies on Community Notes and metadata signals
X will trigger enforcement through several detection methods. Posts flagged by Community Notes as AI-generated may prompt review. Metadata and technical signals from generative tools may support enforcement decisions.
The company will suspend accounts from revenue sharing for 90 days after a violation. Repeat offenders may face permanent removal from the monetization program. X will continue refining detection systems to strengthen policy enforcement.
Unlike traditional moderation, X will not automatically remove or label every violating post. Instead, it targets the financial incentives behind viral content, aiming to discourage misleading posts without broadly restricting speech.
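The enforcement flow described above can be sketched as a simple decision function. This is a minimal, hypothetical model based only on the policy as reported: the trigger conditions (conflict footage, AI-generated, undisclosed), the 90-day suspension, and escalation for repeat offenders. The `Creator` record, the function names, and the assumption that a second violation triggers permanent removal are all illustrative, not X's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Creator:
    """Hypothetical creator record; X's real data model is not public."""
    violations: int = 0
    monetized: bool = True

def enforce(creator: Creator, depicts_conflict: bool,
            flagged_ai: bool, disclosed: bool) -> str:
    """Sketch of the reported policy: only undisclosed AI-generated
    war footage triggers action, and the penalty targets monetization,
    not the post itself."""
    # No action unless all three trigger conditions hold.
    if not (depicts_conflict and flagged_ai and not disclosed):
        return "no_action"
    creator.violations += 1
    # Assumption: a repeat violation means permanent removal from
    # the revenue-sharing program (the article says "may face").
    if creator.violations > 1:
        creator.monetized = False
        return "permanent_removal"
    return "suspend_90_days"
```

A disclosed AI war clip or a non-conflict video passes through untouched; a first undisclosed violation yields a 90-day suspension, and a second one ends monetization access entirely.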
Middle East tensions intensify misinformation risks
The policy update follows heightened tensions in the Middle East. On February 28, the United States and Israel conducted joint airstrikes on Iran. The strikes triggered sharp reactions across global markets and social platforms.
Bitcoin briefly fell to around $63,000 after the airstrikes before recovering to trade near $70,000, according to market data. Online discussions about the conflict surged across X and other platforms.
AI tools have also entered modern military operations. On March 1, the United States military used Anthropic’s Claude model for intelligence analysis linked to the strikes. As a result, concerns about synthetic media and real-time misinformation have increased.
X stated that the updated rules aim to maintain authenticity on user timelines. The company emphasized that wartime environments demand reliable and verifiable information. X chose to connect financial rewards to responsible content practices.
The platform faces broader industry pressure to manage manipulated media. Governments and civil groups have urged stronger action against deepfakes during geopolitical crises. Through this targeted policy, X seeks to balance open expression with accountability.
X now reinforces its stance that transparency must accompany synthetic war footage. Creators who follow disclosure rules can retain monetization access. However, those who ignore the policy risk losing revenue privileges for at least 90 days.