Forum: Analyzing an Expert Proposal for China’s Artificial Intelligence Law

A few months after the introduction of OpenAI's ChatGPT captured imaginations around the world, China's State Council quietly announced that it would work toward drafting an Artificial Intelligence Law. The government had already acted relatively quickly, drafting, significantly revising, and finally implementing, on August 15, rules on generative AI that build on existing laws. Still, broader questions about AI's role in society remain, and the May announcement signaled that more holistic legislative thinking was on the horizon.

A team from the Chinese Academy of Social Sciences (CASS) this month released a scholars' draft of an AI Law for China. When the Chinese government announces that it will draft a law, the future of the effort is uncertain. In some cases, one or more groups of scholars draft proposals. These sometimes feed directly into legislative work, and their influence can be seen in an official National People's Congress draft for public comment. Sometimes the process falters, as it did with an early 2000s effort toward a Personal Information Protection Law (PIPL). In that case, CASS scholar Zhou Hanhua described in a DigiChina interview how a team was asked by a government office to work on the issue, producing a draft in 2005, before the PIPL drafting effort stalled for years. By the late 2010s, when the effort was picked up again, technology and law had changed so much that the 2005 draft was no longer fit for purpose, and little of its content is visible in the law that took effect in 2021.

In the case of this scholars' draft of an AI Law, the accompanying explanation notes that it is to serve as a reference for legislative work and is expected to be revised in a 2.0 version. Although the connection between this text and any eventual Chinese AI Law is uncertain, its publication by a team led by Zhou Hui, deputy director of the CASS Cyber and Information Law Research Office and chair of a research project on AI ethics and regulation, makes it an early indication of how some influential policy thinkers are approaching the State Council-announced AI Law effort.

We invited DigiChina community members to share their analysis of the scholars' draft, a translation of which was led by Concordia AI and is published here. Their responses are below. –Graham Webster


JASON ZHOU
Senior Research Manager, Concordia AI

This draft model law is the first step of a likely lengthy and complicated process for China to develop an Artificial Intelligence Law. Given the early stage (and non-governmental nature) of this draft, it is perhaps most useful to focus on the law’s structure and the key risks it tackles. Overall, the draft model law attempts a flexible and pragmatic approach, while also including provisions to mitigate frontier risks from increasingly advanced AI models. 

The key regulatory tool in the draft model law is an AI research, development, and provision “negative list.” Items on the list would require a government-approved permit before being developed or released. As the drafters note in their explanation, this mechanism seeks to balance safety and development while keeping safety as the bottom line. To that end, the drafters adopt a differentiated approach based on risk, which bears some resemblance to the approach taken in the EU AI Act. Whether the model law strikes the right balance between fostering innovation and ensuring safety will depend on the exact composition of the negative list, a challenging task for regulators.

Which AI capabilities, then, are considered higher risk? The draft model law is hazy on this point, declining to specify how the negative list would be developed and its relationship to the Ministry of Science and Technology’s S&T ethics reviews. From the principles listed in Chapter 1 of the model law, it is clear that adhering to socialist core values, privacy, anti-discrimination, transparency, data security, and human control are all major concerns. Meanwhile, other provisions focus on frontier AI capabilities. Article 43 sets out more stringent oversight of “foundation models,” which, though not defined in the draft, are understood to be models trained on large amounts of data that can be used for many different applications and are the subject of a United Kingdom expert task force. Foundation models would require a yearly “social responsibility report” by an independent body of primarily external members. Articles 25 and 50 also call for ensuring human control of AI through both human oversight and technical means, including when AI operates autonomously. As my colleagues at Concordia AI are documenting in forthcoming work [now available online here –Ed.], several influential Chinese experts have already expressed concerns about frontier AI risks, and this model law’s provisions on foundation models and human control offer further evidence of a degree of convergence between Chinese and international expert views. However, technical and policy implementation of such safeguards remains to be seen.

China’s existing system of an algorithm registry and security reviews for certain generative AI products already possesses strong policy levers, which the negative list system proposed by the model law could augment if effectively implemented. However, no matter how complete the legal framework, the rapid pace of AI development demands agility, and China will likely adjust regulations and their implementation as different concerns are prioritized. Thus, understanding Chinese perceptions of AI risks is a critical complement to analyzing laws and regulations, and foundation models deserve particular attention given their outsized role in AI development.


MINGLI SHI

The draft model law suggests the formation of a new agency, the National AI Office, to coordinate and supervise the administration of AI technology. The aim is to prevent the potential regulatory chaos, often likened to the mythical notion of “九龙治水” (jiǔlóngzhìshuǐ, nine dragons governing water), where multiple regulatory bodies assert administrative authority over the same subject. A similar concern was taken into consideration during the development of China’s privacy legislation, with respected scholars like Professor Zhou Hanhua advocating for a new agency focused on personal data protection.

Despite this awareness and later endeavors, and despite the fact that the Personal Information Protection Law authorizes the Cyberspace Administration of China to coordinate relevant regulatory work, the supervision of China’s privacy landscape remains fragmented across various government bodies, including the State Administration for Market Regulation, the Ministry of Industry and Information Technology, the Ministry of Public Security, and other sector-specific regulators. Therefore, to achieve the drafters’ purpose, the new authority, the National AI Office, would need to be explicitly granted concrete powers of investigation, regulation, and enforcement. At the same time, it is also crucial to limit the authority of existing government bodies that might have skin in the AI game. Without these measures, the nine-dragons-governing-water problem would persist, and the new agency might struggle to effectively facilitate inter-agency collaboration and coordination.

Additionally, the draft model law introduces the idea of a “negative list” approach to prevent the regulations from stifling the progress of AI technology. Activities on the list would require specific administrative approval, while those not included would only need to undergo regulatory filing. The success of this approach clearly hinges on what the list would look like, which could tip the balance in either direction. If the list becomes extensive, broad, or vague, causing confusion or uncertainty for affected entities, it might still hold back technological progress. For now, the draft model law lacks details or criteria on this question.


HUNTER DORWART
Associate, Bird & Bird

This expert draft AI Law provides an important window into the approach Chinese policymakers may take with respect to future AI rules and the extent to which those rules converge with or diverge from other global regulatory approaches. While much can be said about the scope, application, and requirements of the draft law itself, one important question is how the separate legal framework for AI will relate to existing regulations (including those that cover similar digital technologies) both within and outside of China.

On the one hand, the expert draft law defers in many places to technical standards and regulations already in force—indicating that organizations will be expected to implement these where appropriate. These instruments generally apply to a narrower group of services or technical components. For instance, Chinese authorities have recently issued guidance on machine learning, algorithm ethics, and labeling and annotating training data, while promulgating regulations on recommender systems, deep synthesis, and generative AI. With the draft AI law, this sector-specific approach would be complemented by a risk-based system for AI products. Although there is no list of prohibited applications, some services will need to obtain a license and submit to heightened screening before deploying to market.

On the other hand, the expert draft law introduces provisions that overlap with Chinese data protection law, which could raise new challenges for organizations. Some of these obligations can be folded into pre-existing compliance programs. Disclosing transparency information in privacy policies; conducting multi-purpose impact assessments; maintaining processing records and training logs; appointing personnel responsible for data protection and AI ethics; implementing security measures; and providing information to regulatory authorities all come to mind. Additionally, the principles-based approach set forth in the expert draft law resonates with existing standards. Covered entities—including AI researchers, developers, providers, and users (a typology also found in the EU AI Act)—must generally abide by varying requirements, such as explaining how the AI products work and ensuring fairness in input data for algorithm training.

However, there are other areas where the provisions will be challenging for organizations to reconcile with their existing compliance obligations. Mapping the allocation of responsibilities to entrusted processors under the PIPL onto the different roles in the expert draft law may raise novel questions for apportioning liability in commercial agreements, while the potential overlap of different filing requirements and security assessments will make global compliance programs more difficult to operationalize. As an example, one of the stipulated goals of AI governance in China is to put safeguards on the use of automated decision-making (ADM) that negatively impacts individuals. The PIPL already contains provisions to address this, but to my knowledge they have not been enforced.

Overall, it remains to be seen how Chinese policymakers will fashion AI rules that balance the goals of technological development, national security, and the protection of the rights and interests of individuals in a way that complements China’s larger framework for data governance. It will likewise be critical to gauge how enforcement oversight will be divided and co-managed by different bodies, including the newly formed National Data Bureau and the Cyberspace Administration of China (CAC). We can expect greater clarity in the near future.


JOHANNA COSTIGAN
Writer and editor

Like China’s actual AI regulations, the “Model Law” from a team at the Chinese Academy of Social Sciences (CASS) shows that Chinese AI researchers hold concerns that, unsurprisingly, parallel those of their international counterparts. Everyone wants to innovate (and generate profit) while maintaining safety (or appearing to).

The CASS authors propose walking this line by creating an adjustable “negative list” of AI activities and services that are off-limits, thereby clarifying that everything else is not. Wang Jun, one of the authors, noted that the AI industry’s scalability is currently limited by consumers’ distrust of it. She made no secret of linking transparent regulation with industrial development: “Clear and specified obligations can significantly improve the business environment,” she said.

That sentiment stands in stark contrast to the “pause” and “risk mitigation” letters that Western AI business leaders, among others, have endorsed. Signed by Elon Musk and Sam Altman, respectively, these letters treat AI like a faraway meteor heading for Earth rather than a product that companies have actively developed and deployed.

As researchers like Emily Bender have pointed out, the problem with these letters lies in their support for “longtermist” approaches to controlling AI. That logic enables an erasure of the known, current problems and quandaries introduced by artificial intelligence in service of humoring sometimes boyish, other times valid visions of worst-case scenarios. Even as it tries to both stimulate and regulate AI, the CASS Model Law usefully lacks comparable hypocrisy, probably because it is a fully conceived draft law written by researchers rather than a one-line doomsday prediction.

The Model Law proposes that China establish a National Artificial Intelligence Office (a worldwide first), stipulates separate requirements for developers and providers, and creates a negative list. These changes would make AI development more transparent—if not to the people, then at least to the party-state. 

The amount of good that could do is debatable. But it is surely better than the United States’ zero-regulation approach, which has helped dangerous narratives with conspicuous industry support gain traction. These range from Altman’s warning that regulation might “slow down American industry in such a way that China or somebody else makes faster progress” to invocations of a possible apocalypse that distract from reality. It will be worth watching how many of the CASS proposals end up in the official draft AI law once it is introduced to the NPC. Hopefully, the final version maintains this draft’s emphasis on transparency, and on immediacy.

Either way, though it is by no means unfiltered, the CASS draft shows that at least parts of China’s AI community are turning over the right stones. That’s also the case in the U.S. The difference is that the Americans making good points do not yet have lawmakers’ full attention. Maybe China’s eventual AI law will change that by inspiring a regulatory layer to the U.S.-China AI competition. For political-logistical reasons, any lessons learned from China’s regulatory experiments would have to remain unsaid. 


QIHENG CHEN

The draft AI model law offers a peek into the thinking of influential scholars at the Chinese Academy of Social Sciences. The draft fleshes out a clearer allocation of responsibilities along the AI value chain than we see in existing regulations. While the bulk of responsibilities would fall on downstream providers who market AI products and services, certain requirements would hinge on inputs from upstream. The model law asks upstream model developers to provide necessary assistance, for example, with algorithm explainability and model risk evaluation. This would prevent model developers from evading responsibilities through contract language or under the pretext of protecting business secrets.

This draft suggests China’s planned AI Law will be a general law governing single-purpose models as well as general-purpose foundation models that can be fine-tuned to perform a wide range of tasks. The model law recognizes the “gate-keeper” role of foundation model providers. Special obligations are imposed on them, including having an independent body chiefly composed of outsiders to oversee model development. The draft also leaves room for harmonizing the AI Law with regulations specific to foundation models.

All of the above marks major progress beyond the current measures on generative AI, which group upstream and downstream players together as service providers. However, the model law still lacks differentiated treatment of open-source models. These models could be contributions from researchers who do not have the resources to meet ongoing compliance requirements. Open-source models will be vital to the diffusion of knowledge and will exert competitive constraints on dominant firms that typically keep state-of-the-art models proprietary. The debate will continue in China regarding the balance between ensuring compliance and maintaining vibrancy in open-source ecosystems.

Additionally, the model law proposes creating a dedicated AI regulator to avoid the kind of bureaucratic turf battles seen in cybersecurity and data protection. This will not be an easy task, since existing regulatory bodies already have authority over some of the functional issues AI raises (e.g., competition or data privacy), as well as the industries in which it operates. Clearly delineating the boundaries of a dedicated AI regulator’s power requires an answer to the question: What are the novel harms and risks created by AI that cannot be addressed by applying existing laws and regulations? This question is being hotly debated across the world, and China could make its contribution in the run-up to formulating its AI Law.