TLDRs:
- DeepSeek urges China to improve transparency in its AI regulatory framework.
- China’s pre-deployment filing system grows rapidly but remains opaque for developers.
- Open-source AI exemptions could pose risks if not properly monitored.
- Compliance tools and independent audits gain importance under China’s strict AI rules.
Researchers from DeepSeek and Alibaba have highlighted the strengths and challenges of China’s AI regulatory framework in a paper recently published in Science.
While acknowledging the country’s efforts to regulate AI responsibly, the authors call for greater transparency and more structured feedback mechanisms to ensure fairness and safety.
The framework includes pre-deployment filing for AI models, self-assessments of content safety, exemptions for open-source AI and research, and a phased approach to implementation. Despite these measures, China has yet to introduce a comprehensive national AI law, though proposals on liability for AI misuse are under consideration.
Rapid Growth, Opaque Processes
China’s pre-deployment filing system has expanded quickly. By December 2024, the Cyberspace Administration of China (CAC) listed 302 generative AI services, including 238 added that year.
By April 2025, the registry had reached 3,739 tools from around 2,353 firms, growing by 250–300 entries monthly.
However, the filing process remains largely opaque. Rejected applications often receive little explanation, and approvals are announced mainly through periodic public lists. This lack of clarity complicates compliance for AI developers and leaves room for misinterpretation.
Risks of Open-Source Exemptions
The paper warns that exemptions for open-source AI could introduce risks if not carefully managed. China’s regulations cover services with “public opinion attributes or social mobilization capacity,” but that definition remains vague. The ambiguity makes it unclear whether the exemptions genuinely ease the burden on small research teams and open-source developers.
DeepSeek and Alibaba argue that leading AI firms should adopt independent verification mechanisms and provide greater transparency to reduce risks while fostering innovation.
Compared with the U.S. and Europe, China frames openness as a safety measure rather than a regulatory risk, highlighting a distinct regulatory philosophy.
Compliance and Governance Opportunities
China’s National Information Security Standardization Technical Committee (TC260) has outlined over 30 safety risks and oversight steps, covering bias checks, supply chain reviews, and controls on politically sensitive outputs.
These measures emphasize continuous monitoring and regular compliance checks, creating a growing market for compliance tools, automated evaluation platforms, and third-party auditing services.
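To make the idea of continuous, automated compliance checking concrete, here is a minimal sketch of what such a tool might look like: a fixed prompt set is run through a model endpoint and outputs are flagged against simple policy rules. The endpoint URL, blocklist, and field names are illustrative assumptions for this sketch, not taken from any TC260 standard or vendor product.

```python
# Hypothetical sketch of a recurring compliance check: send a fixed prompt set
# to a model endpoint and flag outputs that violate simple policy rules.
import json
import urllib.request

ENDPOINT = "https://example.internal/api/generate"  # placeholder model API
BLOCKED_TERMS = ["example-banned-term"]             # stand-in for a real policy list

def generate(prompt: str) -> str:
    """Call the (hypothetical) model endpoint and return its text output."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]

def run_compliance_suite(prompts: list[str]) -> list[dict]:
    """Evaluate each prompt and record any rule violations for an audit log."""
    results = []
    for prompt in prompts:
        output = generate(prompt)
        violations = [t for t in BLOCKED_TERMS if t in output.lower()]
        results.append({
            "prompt": prompt,
            "output": output,
            "violations": violations,
            "passed": not violations,
        })
    return results

if __name__ == "__main__":
    report = run_compliance_suite(["Describe your content safety policy."])
    print(json.dumps(report, ensure_ascii=False, indent=2))
```

A production evaluation platform would go well beyond keyword matching (classifier-based scoring, red-team prompt libraries, human review queues), but the recurring run-and-log loop above is the core pattern these services automate.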
Vendors of model evaluation software, red-teaming services, and content moderation APIs can capitalize on this demand, especially as China enforces labeling requirements for AI-generated content. These developments underscore the importance of independent oversight and verification in maintaining responsible AI deployment.
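As a rough illustration of what labeling AI-generated content can involve, the sketch below attaches both a visible notice and a machine-readable provenance record to generated text. The field names and label wording are invented for illustration; they do not reproduce the CAC’s actual labeling specification.

```python
# Minimal illustration of labeling AI-generated text before publication:
# a visible label for readers plus a machine-readable provenance record.
import hashlib
import json
from datetime import datetime, timezone

VISIBLE_LABEL = "[AI-generated content]"  # explicit, user-facing label

def label_output(text: str, model_name: str) -> dict:
    """Attach a visible label and a provenance record to generated text."""
    labeled_text = f"{VISIBLE_LABEL} {text}"
    provenance = {
        "generator": model_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
        # A content hash lets auditors verify the text was not altered
        # after labeling.
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }
    return {"text": labeled_text, "metadata": provenance}

print(json.dumps(
    label_output("The weather today is sunny.", "demo-model"), indent=2
))
```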
Looking Ahead
While China’s AI framework has achieved scale, the researchers’ recommendations suggest that increased transparency, clearer guidance, and independent monitoring are crucial to managing risks and promoting innovation.
As AI technologies continue to evolve globally, China’s approach could shape the trajectory of open-source and commercial AI adoption both domestically and internationally.