What do we know about China’s new AI safety institute?

Left out of the international AISI network at its outset, China positions CNAISDA to take a seat at the table

Published

February 6, 2025



As the global AI safety community and world leaders prepare to convene at the AI Action Summit in Paris on February 10–11, the question of who will represent China has resurfaced. Over the past 15 months, a growing number of national AI Safety Institutes (AISIs) from around the world have worked together to promote technical research and evaluations of advanced machine learning models. Yet as these efforts morphed into a formalized network of national AISIs, China’s involvement in related convenings and joint testing exercises remained uncertain. 

Ahead of the Paris AI Action Summit, the Chinese government appears to have finally put forth its equivalent of an AISI. The China AI Safety and Development Association (中国人工智能发展与安全研究网络, CNAISDA), which is online at cnaisi.cn, describes itself as “representing China in dialogue and collaboration with AI security research institutions around the world.”1 On an official list of Paris summit side events and an event registration page, it is labeled as “the Chinese equivalent of the AI Safety Institute.”

Details are scarce for now: The group’s website is only in English and contains limited information. We could find only a few references to the organization on Chinese social media, where its Chinese-language name translates to the “China AI Development and Safety Research Network.” (See below for more on the name and the translation of “safety.”) It is unclear who, if anyone, leads the group. But CNAISDA appears to be China’s answer to the AISI.

More details are likely to surface at its inaugural event on February 11, but here’s what we know so far about this unusual institution.

Why does it matter if China has an AISI?

For almost a decade, the Chinese government has consistently signaled its ambition to play a leading role in global AI governance. As early as 2017, the State Council’s New Generation Artificial Intelligence Development Plan declared that “China will actively participate in global governance of AI.” More recently, in 2023, the Ministry of Foreign Affairs published China’s “Global AI Governance Initiative” just days before the UK government hosted the first AI Safety Summit. Interest in questions often grouped under the banner of “AI safety,” including catastrophic risks stemming from AI systems, had meanwhile been slowly building among scholars close to the Chinese government. How this orientation would mesh with global efforts, however, remained an open question.

Chinese representatives have been at the table for every global AI safety summit so far, beginning with the UK-hosted meeting at Bletchley Park in November 2023, which also marked the launch of the first two AISIs by the UK and U.S. governments. Despite some fierce opposition to Chinese participation in the summit, China—represented by a delegation of academics led by a vice minister of science and technology—joined the meetings and signed the resulting Bletchley Declaration. That outcome document said, among other things, that its signatories “resolve to support an internationally inclusive network of scientific research on frontier AI safety.”

At the next international AI safety meeting, hosted by South Korea in May 2024, Chinese representatives were once again in attendance, with Chinese company Zhipu AI signing the resulting voluntary Frontier AI Safety Commitments. Yet China was not among the countries that, as part of the Seoul Statement, committed to developing a global network of AISIs to collaborate on technical AI safety research. When the U.S. Commerce Department held the inaugural convening of the International Network of AISIs (INAISI) in San Francisco in November 2024, China was conspicuously absent from the list of 10 initial members: Australia, Canada, the EU, France, Japan, Kenya, South Korea, Singapore, the UK, and the United States. Although a growing number of Chinese institutions conduct AI safety work that is analogous to the activities of an AISI, and though individual Chinese experts from these institutions joined some of the San Francisco meetings, China lacked formal membership at the INAISI convening.

Observers privately speculated about why China was absent. Had it not moved quickly to establish an AISI because of bureaucratic friction or a lack of precedent for forming this type of institution? Did international organizers seek to exclude China and set up the INAISI with “like-minded” partners? Had Chinese officials feared that announcing an AISI would lead to embarrassment if China was not invited to participate? The backstory remains unclear. Yet it seems that the absence of a Chinese AISI at least in part precluded China from formally joining important technical AI safety discussions and collaborations.

How is China’s AISI equivalent different from its international counterparts?

Unlike many of its counterparts, CNAISDA is effectively a consortium of several existing research institutions. There is no clear indication at this time that it is a formal government office, though some of the participating institutions sit under government ministries, and all are subject to state supervision. It is not yet clear whether it will have a secretariat, though Tsinghua University is listed as the main contact point.

Its nascent website, which includes numerous non-functional links and appears only in English, lists the constituent “institutes” as:

  • Beijing Academy of Artificial Intelligence (BAAI)
  • China Academy of Information and Communications Technology (CAICT)
  • China Center for Information Industry Development (CCID)
  • Institute of Automation of the Chinese Academy of Sciences (CASIA)
  • Peking University
  • Shanghai Artificial Intelligence Laboratory (SAIL)
  • Shanghai Qi Zhi Institute
  • Tsinghua University

Several of these institutions have already been engaged in AI safety-related work that aligns closely with the core functions of AISIs as described in the INAISI mission statement, including technical research and evaluations, standards development, and international cooperation. The list notably excludes the Beijing Institute for AI Safety and Governance (Beijing-AISI), launched in September 2024, and a similarly named Shanghai group launched in July 2024. It is not yet clear how the arrival of CNAISDA might affect the work of member institutions or others.

It is not yet clear from publicly available information who will lead CNAISDA or how it will operate as a multi-institution entity. No director has been announced, though the website names ten “experts”—many of them major AI luminaries with leadership positions at the constituent institutes—without specifying their roles. The only public hints of who may play a leading role come from the speaker lineup of CNAISDA’s launch event alongside the Paris AI Action Summit, which includes Andrew Yao (a Turing Award winner and dean of the College of AI at Tsinghua), Xue Lan (a renowned public policy expert and dean of Tsinghua’s Institute for AI International Governance), and Wei Liang (a vice president of CAICT, an influential state-linked technical think tank). Even so, their exact roles remain a matter of speculation at this stage.

CNAISDA is also unique in its prominent focus not just on AI safety but also on AI development. While we have referred throughout this piece to AI safety, it is important to note that the relevant Mandarin word for “safety” (安全) can also be translated as “security,” as in the compounds for “national security” and “cybersecurity” (国家安全 and 网络安全). In the context of the AI safety summits and AISIs, the intended meaning is clearly “AI safety,” yet within China the term carries both meanings. It is also a loaded term in China’s official discourse, since striking a balance between security and development is one of Chinese leader Xi Jinping’s core messages. It is therefore unsurprising that a Chinese institution would be charged with both goals.

To understand CNAISDA’s likely ambit, it is also important to consider that significant Chinese policy thinking has gone into concepts of AI security that are different from the general international meaning of AI safety today (even though the same word is used in Mandarin). For instance, one of CNAISDA’s member groups, the government think tank CAICT, published a framework for AI security in 2018—a time when the now well-established AI safety community had yet to coalesce. In any language, people and institutions bring differing ideas to the concept of “AI safety,” but in the Chinese context this comes with translation inconsistencies and a history of national-level policy thinking.

Looking ahead

We can only speculate about the implications of this new Chinese AISI-like structure at this point, and next week’s Paris launch events are likely to reveal more.

While the creation of CNAISDA may pave the way for China to be officially accepted as an attending country at the Paris summit and future AI safety convenings, as well as a member of the INAISI, opposition to Chinese participation is likely to persist—especially amid a heightened sense of competition following DeepSeek’s recently released V3 and R1 models. Even so, the establishment of a Chinese AISI equivalent reflects a continued Chinese desire to have a seat at the international AI safety table.

Footnote

1 Confusingly, another website, first captured by the Internet Archive in September 2024, uses the same Chinese-language name but a different English translation, the “China AI Development and Safety Network.” In late October and early November 2024, multiple Chinese and international AI policy experts in Beijing observed that China’s AISI equivalent had not yet been announced. The earlier site does not name any individuals or organizations associated with it.

This article reflects the analysis of the authors and does not represent any institutional view.