Chinese artificial intelligence start-up DeepSeek reemerged in the public eye on Friday, delivering a rare glimpse into the ambitions and ethical considerations guiding one of China’s most secretive AI labs.
At the World Internet Conference in Wuzhen, as reported by the South China Morning Post, senior researcher Chen Deli reaffirmed the company’s commitment to developing artificial general intelligence (AGI) — while acknowledging the technology’s potentially “dangerous” societal impact.
Speaking on a panel alongside the heads of five other companies collectively known as China’s “six little dragons” of AI, Chen emphasized both optimism about AI’s technical possibilities and concern for its disruptive social consequences.
“Humans will be completely freed from work in the end, which might sound good but will actually shake society to its core,” he said, urging AI firms to act as “whistle-blowers” to warn the public about which professions will be automated first.
The Wuzhen appearance marked DeepSeek’s second recent public engagement. Earlier, Wu Shaoqing, the company’s head of AI governance, participated in a panel on ethical AI guardrails at the Global Open-Source Innovation Meetup in Hangzhou. DeepSeek, spun out from quantitative hedge fund High-Flyer in 2023, has maintained a low profile, with founder Liang Wenfeng largely absent from public view since meeting President Xi Jinping in February.
Chen described the current phase of AI development as a “honeymoon phase” between humans and machines, warning that most jobs could eventually be automated.
The pursuit of AGI and China’s industrial focus
Founded with the explicit mission of building AGI — a system capable of human-level cognitive reasoning — DeepSeek prioritizes long-term research over short-term commercial applications. Chen highlighted that the firm avoids following transient market trends, a strategy he said was essential for tackling the technical and ethical challenges inherent to AGI.
Nevertheless, Chen acknowledged that such systems could be dangerous, echoing the concerns of hundreds of AI experts worldwide who signed an open letter last month calling for a temporary moratorium on superintelligent AI until there is a broad public consensus and safety verification. Chinese signatories include Zhipu AI CEO Zhang Peng and Tsinghua University professor Andrew Yao, signaling that caution is not absent from China’s own AI research community.
Despite these warnings, Chen argued that halting AI development is unrealistic given the sector’s profit incentives.
“You could even say the mark of success for this AI revolution is that it replaces the vast majority of human jobs,” he said.
The comments reflect a rare frankness in China’s AI discourse, where the potential risks of automation and societal disruption are typically muted in public statements.
Other Chinese firms are also accelerating efforts in advanced AI. Zhipu AI and Alibaba Group, whose CEO Eddie Wu Yongming announced a “super AI cloud” at the same conference, are developing computing infrastructure capable of handling the massive workloads needed for AGI. These companies, along with DeepSeek and others in the “six little dragons” cohort, are central to China’s push for technological leadership amid U.S. export restrictions on advanced AI chips.
China’s AI governance framework
China has rapidly established a structured approach to AI governance, combining state oversight, industry self-regulation, and ethical guidelines. Unlike the U.S., where regulation is largely reactive and sector-specific, and the EU, which enforces preemptive rules through frameworks like the AI Act, China’s approach blends top-down direction with incentive-driven innovation:
- State Planning and Strategic Directives: The government defines AI development as a strategic priority. Policy documents such as the New Generation Artificial Intelligence Development Plan (2017) and the 2023 Guidelines for Responsible AI set out long-term objectives, including leadership in foundational models, industrial AI, and ethics.
- Ethical Guidelines and Risk Management: China has issued standards emphasizing security, fairness, privacy, and controllability of AI systems. Firms are expected to conduct risk assessments, but enforcement remains flexible, relying on a combination of industry self-regulation and government audits.
- Integration with Industrial Policy: AI development is closely linked to national initiatives such as Made in China 2025 and semiconductor self-sufficiency programs. Companies like DeepSeek are expected to align their research with broader industrial goals, including AI chips, cloud infrastructure, and applications in defense and finance.
In contrast, the U.S. has largely relied on a market-driven, reactive regulatory framework, focusing on sector-specific guidance rather than holistic rules. Agencies like the FTC, NIST, and SEC have issued principles on fairness, transparency, and risk management, but there is no comprehensive federal law governing AI safety or AGI development.
The EU, on the other hand, has pursued a precautionary model through the AI Act, which categorizes AI systems by risk level and imposes strict rules on high-risk applications, including mandatory documentation, auditing, and human oversight. The European approach emphasizes public trust and safety before adoption, reflecting a regulatory culture that prioritizes societal consensus over market speed.
China’s regulatory stance — permissive in pursuit of global leadership, yet increasingly codified in ethical standards — creates a unique environment for AGI research. It balances technological ambition, industrial strategy, and social stability, whereas the U.S. encourages rapid innovation with limited preemptive oversight, and the EU prioritizes caution and accountability over speed.
Chen’s warnings, set against this backdrop, illustrate the duality of China’s AI strategy: a race for global leadership in next-generation intelligence, coupled with acknowledgment of potential societal disruption.
“It would not be alarmist to consider such systems dangerous to society,” he said — a rare admission for a lab building toward that very frontier.