DeepSeek AI conducts undisclosed risk evaluation amid heightened Chinese regulatory scrutiny

Ollie Chang, Taipei; Jingyue Hsiao, DIGITIMES Asia

Credit: AFP

China's DeepSeek has reportedly carried out an internal assessment of potential risks posed by its AI models to public safety and social stability, although it has not made the results public.

The South China Morning Post, citing unnamed sources, reported that DeepSeek quietly completed a frontier-risk evaluation of its systems, though details of the review's scope and timing remain unavailable. DeepSeek did not respond to requests for comment.

Beijing's cautious approach drives companies to keep AI risk information confidential

Experts say Beijing's authorities are vigilant about the potential threats posed by AI applications, leading Chinese firms to keep risk findings quiet to avoid eroding market confidence. Sarah Sun, executive director of Singapore's nonprofit AI Governance Exchange, contrasted this with Western practice, noting that American companies such as OpenAI and Anthropic regularly publish risk assessment reports to support transparency and fundraising.

By contrast, Chinese enterprises tend to avoid publicizing risks, since AI models flagged as risky struggle to gain acceptance from China's regulators and investors.

Recent research from the Shanghai Artificial Intelligence Laboratory has underscored concerns about emerging AI capabilities, specifically spotlighting DeepSeek's latest model iterations, V3 (DeepSeek-V3-0324) and R1 (DeepSeek-R1-0528). The study warned that AI systems might autonomously replicate themselves onto other machines without human oversight, posing unprecedented cybersecurity risks, and stressed the urgency of reinforcing safeguards against self-replication and loss of control.

Experts highlight differing regulatory philosophies in the US and China

Geoffrey Hinton told the Financial Times that AI could surpass human intelligence within the next five to twenty years, increasing the danger of losing control over these systems. He highlighted that many Chinese regulators come from engineering backgrounds, granting them a nuanced understanding of AI's existential risks. Hinton recently visited Shanghai to discuss these matters with governmental representatives, signaling the Chinese leadership's interest in technical expertise guiding regulation.

China's Cyberspace Administration recently updated its Artificial Intelligence Security Governance Framework to version 2.0, emphasizing ethical and security risks linked to AI. The framework explicitly warns that AI might gain "self-awareness," elude human control, actively pursue external resources for self-replication, and disrupt established social structures related to employment, reproduction, and education. This reflects Beijing's strategic focus on preemptive risk governance in AI development and deployment.

Article edited by Jack Wu