
China’s top leader Xi Jinping last week presided over a Politburo study session on artificial intelligence. It was the second study session focused on AI, the previous one dating back to 2018, when Xi had just started his second five-year term and China’s government had recently published the New Generation AI Development Plan, setting ambitious targets for AI development and governance as far out as 2030. As in 2018, the most extensive public account of the session and Xi’s remarks comes in the form of a Xinhua report, which has been translated by Ben Murphy of the Center for Security and Emerging Technology (CSET) at Georgetown University.
Xi’s reported remarks echoed some of the party-state’s longstanding thinking on AI, while also reflecting today’s context: the Chinese startup DeepSeek’s impressive releases in December and January, more than two years of US and allied semiconductor controls designed to slow China’s advances, and intense economic pressures both at home and abroad. How should we understand the importance of this event and the messages Xi and the Politburo are sending? What does the carefully crafted language of the official media release suggest about their priorities and preoccupations?
DigiChina invited a group of specialists to share their insights on these questions. This Forum may be updated as additional contributions come in.
Note: Some contributors below adopt their own translations for quoted terms and passages, differing from CSET. I’ve attempted to flag substantive differences in footnotes.
–Graham Webster, Editor-in-Chief, DigiChina
KRISTY LOKE
Independent researcher
There are three key takeaways from China’s recent Politburo study session on “Strengthening AI Development and Governance”:
First, regarding China’s AI strategy, the study session confirms that Xi remains highly committed to the “new whole-of-nation” approach.[1] The approach balances (a) strong organizational and resource support from the government with (b) the acknowledgement that it is the market, not the government, that can effectively deliver core science and technology (S&T) breakthroughs. This shows that DeepSeek’s success has not led to a strategic adjustment and that we can expect the government to play a pivotal but measured role in supporting AI development.
Second, regarding China’s core AI goals, the study session calls attention to AI adoption and applications. This is highly consistent with the Four Orientations (四个面向) concept introduced by Xi in 2020, which proposed that economic transformation and value creation should guide China’s S&T priorities. In other words, China views its competition with the United States as one that can be won via AI adoption, instead of a race toward the elusive artificial general intelligence (AGI). This emphasis on tech diffusion helps explain why, following the release of ChatGPT, the Chinese government chose to focus on market and S&T reform, and on improving the accessibility of compute and data nationwide, instead of placing key resources behind handpicked AGI champions.
Third, regarding AI governance, the study session sets the expectation that AI development should be “beneficial, safe, and fair”, with Xi reportedly emphasizing that “artificial intelligence can be a global public good that benefits mankind.” This reflects China’s growing commitment to supporting the Global South via tech capacity building and sharing governance-related know-how. It also echoes China’s long-term emphasis on applying a human-centric approach to AI development and the Politburo’s post-ChatGPT call on society to balance AI development with governance.
PAUL TRIOLO
Honorary Senior Fellow, Center for China Analysis, Asia Society Policy Institute
Politburo study sessions are important opportunities for experts to brief senior leaders on a particularly critical sector or topic, and for Chinese leaders, in particular Party General Secretary Xi Jinping, to signal to an industry their views on the way forward and to signal their preferences to the bureaucracy.
At the 9th study session in 2018, in a much different world for both AI and US–China relations, Xi called for the healthy development of AI. AI and semiconductors have featured in a number of study sessions over the past decade, highlighting how closely the senior leadership follows these issues.
The timing of this particular session is important, coming just a few months after the DeepSeek effect swept China’s AI sector. Xi and other Chinese leaders are paying close attention to AI development and are increasingly concerned about US technology controls, domestic capabilities to overcome those controls, and the growing sense that the US intends to race toward AGI with the goal of gaining, and using, a decisive strategic advantage over China.
Xi’s call to “concentrate our efforts on mastering core technologies such as high-end microchips and foundational software and construct an independent/indigenous and controllable (自主可控)[2] AI foundational software and hardware system with collaborative operations” is a direct response to what the Chinese leadership likely sees as an existential threat from falling behind on AI development, and the challenge posed by US policies such as the upcoming AI Diffusion Framework, which will attempt to isolate China from global AI development and give the US government a veto over where advanced AI capabilities are developed and deployed. For Xi and the Politburo, game on for AI supremacy.
HELEN TONER
Director of Strategy and Foundational Research Grants, Center for Security and Emerging Technology, Georgetown University
What I find most notable about this readout is how consistent it is with past CCP messaging on AI, despite the many changes in the underlying technology. While it does mention the "rapid progress" that has characterized the field of AI recently, there do not seem to be many updates on how the Politburo is thinking about AI's challenges and opportunities. There is no mention of major developments in recent years (such as generative AI, large language models, reasoning models, agents, general-purpose AI), nor of potential future developments that are becoming common features of Silicon Valley discourse (such as AGI or superintelligence). The choice of expert speaker reinforces that these developments do not seem to be of special interest to the Politburo: As noted by Concordia AI, Prof. Zheng Nanning is an old-school machine learning researcher focused on computer vision and pattern recognition, not on the areas that are currently booming.
This is only one brief glimpse into the CCP's thinking on AI, and we know there has been a flurry of activity on creating standards and regulations for generative AI. Clearly they are not blind to recent developments. But it appears that at least to an external audience—and perhaps in their own thinking as well—they don’t want to center these newer issues, even while reiterating what a high priority AI is for the CCP's top leadership.
The other element that struck me was the prominent billing given to international engagement and Global South capacity-building, with the whole final paragraph dedicated to these issues. It's a safe bet that with the United States retreating from its international leadership role, this theme will continue to pervade China's AI policy.
JOHANNA COSTIGAN
Associate Director of Research, Paulson Institute
“We must use AI to lead a paradigm shift in scientific research and speed up S&T innovation and breakthroughs in every field,” Xi advised. He wants Chinese researchers to unleash the next “paradigm shift”—a term that has long been used to describe AI capability leaps among foreign AI researchers and appears to be gaining traction within the CCP.
For example, it was featured in multiple distinct contexts in a People’s Daily article published in mid-March. The first comports with the highly sci-fi way the AI industry talks about paradigm shifts, as redefining “the boundaries of human wisdom.” But a subsequent mention applies an AI "paradigm shift" to upgrading traditional industries—a top policy priority in itself that aligns with the readout’s emphasis on AI’s real-world deployment. A third reference shifts into the modalities of AI systems themselves.
Beyond industrial applications, which occupy a special place in policymakers’ minds, it appears paradigm-shift thinking has broad resonance in the post-DeepSeek era. Recently, for example, it was used to describe changes in academic research and the forthcoming transformation in administrative law from “fuzzy” judgments to “scientific and intelligent decision-making.”
Beijing’s conception of an AI paradigm shift, then, appears to encompass the version touted by figures like Dario Amodei and Sam Altman, who reference new techniques behind “reasoning” models. But China’s usage also emphasizes applications in scientific breakthroughs, harnessing AI for economic development, and creating relevant laws and regulations.
Contrary to assumptions that China won’t develop AGI because the CCP wouldn’t want or allow an uncontrollable superhuman intelligence, policymakers appear tacitly supportive, as long as whatever ends up being called AGI has economic utility and basic safety mechanisms. Rather than quashing AGI out of a fear of controllability, Beijing is more likely to both promote the A(G)I “paradigm shift” and seek out innovative ways to govern it.
SCOTT SINGER
Visiting Scholar, Carnegie Endowment for International Peace
Xi Jinping's recent Politburo study session on AI reinforces China's post-DeepSeek moment strategy: accelerating domestic AI adoption while pursuing technological self-sufficiency across the entire AI stack. The session underscores China's view of AI as a strategic technology and critical engine for economic growth amid domestic and international pressures.
What stands out against this backdrop of technological acceleration is Xi's especially strong statement on AI risks. His description of "unprecedented risks and challenges" marks his most direct acknowledgment of AI safety concerns to date. The speech outlines specific mitigation measures, calling for "technical monitoring, risk warning, and emergency response systems" alongside accelerated development of "relevant laws, regulations, policy systems, application specifications, and ethical guidelines." These concrete directives signal imminent regulatory action and suggest that while Chinese leadership views AI as an economic opportunity, it also recognizes the need to address substantial risks from powerful AI models. This apparent shift lends credibility to the China AI Safety and Development Association, indicating that China's self-proclaimed AI Safety Institute–equivalent is gaining meaningful influence.
Xi's adoption of the "global public good" framing—terminology directly borrowed from international statements like the 2024 Venice Statement at the International Dialogue on AI Safety—demonstrates China's engagement with global AI safety discourse. This language suggests that internationally developed concepts can be integrated into China's domestic context, potentially creating narrow but viable pathways for developing shared norms and standards in AI governance, even as technological competition intensifies. Collectively, these developments suggest that China's AI safety movement, though still a minority position in Chinese tech policy circles, is gaining meaningful momentum and influence domestically.
GABRIEL WAGNER
Affiliate, Concordia AI; Yenching Scholar
AND JASON ZHOU
Senior Research Manager, Concordia AI
In his speech, Xi Jinping warns that while AI brings “unprecedented development opportunities,” it also poses “unprecedented risks and challenges.” This is the first time the Chinese leadership has described AI risks as “unprecedented,” suggesting elevated concern. Yet mentions of AI safety are not new. As early as the 2018 Politburo study session, leaders called for ensuring AI is “safe, reliable, and controllable”—a phrase Xi reiterated in 2025.[3] The 2024 Third Plenum similarly emphasized the need to “institute oversight systems to ensure AI safety.”
What sets the 2025 session apart is both the language and the level of detail. The readout includes an entire paragraph focused on safety—unusual for such high-level documents, where safety is often addressed in a sentence or two. It makes a concrete call for “technology monitoring, early risk warning, and emergency response” systems. While this specific phrase is new, the 2021 New Generation AI Ethics Guidelines (zh, en) and the February 2025 revision of the National Emergency Response Plan reference similar mechanisms.
Xi also urged faster development of “laws and regulations, policies and systems, application norms and ethical guidelines.” This may suggest a push to fast-track domestic AI safety rules. Still, given that standard-setting agencies already have detailed workplans, it remains unclear whether this signaling will materially speed up implementation.
Xi’s speech leaves open what specific AI risks are at issue. He names no particular technologies—such as LLMs—and offers no concrete risk scenarios. While current regulations focus mainly on political content, privacy, and bias, Chinese standard-setting bodies and national security experts increasingly reference advanced risks, including existential threats. It is thus plausible—though unconfirmed—that frontier risks were part of this session’s agenda as well.
Overall, Xi’s speech amplifies rather than redirects China’s AI safety posture, reiterating existing priorities under a stronger spotlight.
KEVIN NEVILLE
Writer and Researcher
China’s AI development is often framed as an exceptional or uniquely concerning phenomenon. This framing risks overlooking the broader forces shaping these policy decisions.
China’s investments in AI, data infrastructure, and automation are best understood as part of a global pattern: Governments and industries worldwide are responding to intensifying economic volatility, shifting geopolitical alignments, environmental stresses, and evolving technology landscapes. These challenges are not unique to China; they are features of a rapidly changing international environment that all major actors must navigate.
The Politburo’s second AI-focused study session reflects this dynamic. Echoing goals set out in the 2017 New Generation AI Development Plan—innovation leadership, ethical governance, and international cooperation—Xi Jinping’s recent remarks signal sharpened urgency. References to "grasping the initiative," "controllability," and "self-reliance" reflect pressures mounting across the system: US-led semiconductor controls, the need for breakthroughs in foundational software and hardware, and growing awareness (if not expectation) of techno-fragmentation.
China’s framing of AI as an "international public good" fits broader diplomatic efforts, but the intensified focus on internal capacity building reveals deeper structural concerns. This is not merely about competition; it is about managing exposure to systemic risks—geopolitical, technological, and environmental—that no major actor can fully escape.
In this light, China's AI strategy mirrors broader international efforts to secure resilience in an increasingly unstable world. More durable insights may arise from understanding these dynamics beyond a narrow lens of anomalous rivalry.
EDITOR'S NOTES
[1] 新型举国体制 – Translated by CSET as "new structure for leveraging national capabilities" and elsewhere as "new whole state system." See e.g. Zhang, L., & Lan, T. (2022). The new whole state system: Reinventing the Chinese state to promote innovation. Environment and Planning A, 55(1), 201-221. https://doi.org/10.1177/0308518X221088294
[2] DigiChina generally translates the 自主 in 自主可控 as independent or indigenous. The clear meaning is independent of problematic foreign dependencies or interference.
[3] 安全、可靠、可控。– The word “safe” 安全 here can also be translated as “secure”. In 2018, DigiChina translated the phrase as “secure, reliable, and controllable.” The appropriate English translation of the term is challenging, rarely perfect, and varies by context.