TL;DR
- Indonesia drafts stricter AI regulations, targeting deepfakes as Nezar Patria urges platforms to provide free detection tools.
- Deepfake content has surged 550% in five years, raising alarms over misinformation and digital safety.
- Jakarta’s policies align with global efforts, following China’s watermarking rules and EU’s proposed AI transparency laws.
- The detection battle remains tough, as deepfake creation tools advance faster than available verification technology.
Indonesia is intensifying its push for tighter artificial intelligence (AI) regulation as concerns over deepfakes continue to mount.
At an event in Jakarta on September 10, Nezar Patria, the country’s deputy minister for communications and digital, called on major technology companies to provide free tools that help users identify AI-generated content.
Patria pointed to research from Sensity AI showing a 550% rise in deepfake content over the past five years. He warned that the actual scale could be far larger, given how accessible generative AI tools have become. While the technology behind deepfakes advances quickly, he noted, ordinary users lack the means to verify what they see online.
Tech Giants Called to Step Up
The Indonesian government believes major platforms such as Google, Meta, and others already have the algorithms and computational capacity to deploy large-scale detection systems. What is missing, Patria argued, is public access to these tools.
“Detection capabilities shouldn’t be locked away behind private walls,” Patria emphasized, suggesting that transparency tools must be integrated into the platforms millions rely on daily.
By offering detection features for free, companies could help users spot hoaxes, misinformation, and manipulated videos before they spread widely.
Indonesia Aligns With Global AI Regulation
Indonesia’s move mirrors broader international efforts to confront the deepfake challenge. China already requires watermarks on AI-generated content, while the European Union has proposed rules mandating clear labeling and transparency for synthetic media.
Data shows that more than 69 countries have introduced over 1,000 AI-related policy proposals, many aimed at reducing risks associated with misinformation and harmful synthetic content. Jakarta’s approach signals that Southeast Asia’s largest economy intends to play an active role in shaping ethical AI use, not just within its borders but as part of a global movement.
Indonesia already enforces digital safety measures through the ITE Law and the PDP Law. The government is now drafting a new set of rules specifically focused on ethical and responsible AI deployment, positioning itself among nations prioritizing both innovation and public protection.
A Race Between Creation and Detection
Despite the urgency, experts note that detection technologies face an uphill battle. Generative adversarial networks (GANs) have made producing realistic deepfakes faster and cheaper than ever, while detection systems must constantly evolve to keep pace with each new manipulation technique.
Even institutions like the U.S. Defense Advanced Research Projects Agency (DARPA) are investing heavily in deepfake detection, underscoring the scale of the technical challenge. Indonesia’s demand for free tools is therefore not only about user empowerment but also about bridging a critical accessibility gap.
As the world witnesses more governments demanding transparency in AI, Indonesia’s regulatory push adds weight to the argument that AI innovation must be balanced with safeguards against misuse. For now, the success of these measures will depend on how tech giants respond, and whether they are willing to place public safety ahead of commercial advantage.