DigiChina Editor's Note: This is a guest translation organized by Concordia AI. It was edited by Kwan Yee Ng and Jason Zhou, with contributions from Ben Murphy, Rogier Creemers, and Hunter Dorwart. This translation has not been edited by DigiChina for accuracy or house style.
For context and analysis on this unofficial scholars' draft, please see the DigiChina Forum on this document.
Artificial Intelligence Law Model Law Version 1.0 (Expert Suggestion Draft)
Chinese Academy of Social Sciences Major National Condition Research Project
Research Group for “Investigation on the Status of the Construction of China’s Artificial Intelligence Ethics Review and Regulatory System”
Chapter 1: General Principles
Article 1: (Legal Basis) In order to regulate the research and development, provision, and use of artificial intelligence (AI), safeguard national sovereignty, development, and security, promote the safe development of AI, and protect the legitimate rights and interests of individuals and organizations, this Law is formulated in accordance with the Constitution of the People's Republic of China (PRC).
Article 2: (Scope of Application) This law applies to the research and development, provision, and use of AI, as well as to the regulation of AI, within the borders of the PRC.
Activities related to the research and development, provision, and use of AI conducted outside the territory of the PRC that affect or may affect the national security, public interests, or the legitimate rights and interests of individuals or organizations of the PRC, are subject to this law.
Article 3: (Governing Principles) The State should coordinate development and security, adhere to the combination of promoting innovation and governance according to law, and implement inclusive and prudent regulation.
Article 4: (People-Centered Principle) Activities related to the research and development, provision, or use of AI should be people-centered and direct intelligence toward good, ensuring that humans can continuously supervise and control AI, with the ultimate goal of always promoting the welfare of humanity.
Article 5: (Safety/Security Principle) Those researching and developing, providing, or using AI should adopt the necessary measures to ensure the safety/security of the research and development, provision, and use of the AI and its related network data.
Article 6: (Principles of Openness, Transparency, and Explainability) Those researching and developing, providing, or using AI should adhere to the principles of openness, transparency, and explainability, and adopt the necessary measures to ensure that the purpose, principles, and effects of AI research and development and AI use conform to these principles.
Article 7: (Principle of Accountability) Those engaged in the research and development, provision, and use of AI should each be responsible for their respective research and development, provision, and use activities.
Article 8: (Principles of Fairness and Equality) Those engaged in the research and development, provision, and use of AI should adhere to the principle of fairness and must not subject individuals or organizations to unreasonable differential treatment.
Those engaged in the research and development, provision, and use of AI should fully consider the needs of special groups such as minors, the elderly, and people with disabilities.
Article 9: (Green Principle) The State encourages the application of energy-saving technologies in the research and development, provision, and use of AI, as well as the efficient use of resources and the protection of the ecological environment.
Article 10: (Principle of Promoting Development and Innovation) The State supports the construction of AI infrastructure, and promotes the open sharing of public computing power, public data, and other related public resources. It encourages individuals and organizations to legally share computational power, data, and other related resources.
The State encourages AI research and development, as well as applications, protects AI intellectual property (IP) rights in accordance with the law, and innovatively explores IP systems that are adapted to the development of AI.
Article 11: (International Cooperation) The State actively conducts international, multi-disciplinary, and multi-cultural exchange and cooperation in the domain of AI. It actively participates in the formulation and implementation of international rules and standards related to AI. It promotes the mutual recognition of AI-related rules and standards between other countries, regions, and international organizations. It promotes formulating a widely accepted international governance framework, standards, and norms for AI.
Article 12: (Department Performing these Duties) The National AI Office is the governing body responsible for AI development and management. Other relevant government departments and military-related departments should closely cooperate, strengthen coordination, and carry out relevant work in accordance with the law, based on their respective responsibilities.
The AI management departments and other relevant departments of provinces, autonomous regions, directly-administered municipalities, provincial capitals, cities with special economic zones, and larger cities designated by the State Council will be responsible for AI development and management work in the scope of their jurisdictions, according to relevant national regulations.
Article 13: (Collaborative Governance) The state establishes and improves an AI governance mechanism that involves government management, corporate responsibilities, industry self-governance, social supervision, and user self-discipline, to promote collaborative governance by diverse entities.
Article 14: (Legality and Legitimacy) Activities related to the research and development, provision, and use of AI should comply with laws and administrative regulations, respect social ethics and moral principles, and adhere to the following provisions:
- Adhere to socialist core values and do not generate content that incites the subversion of state power, calls for the overthrow of the socialist system, endangers national security and interests, damages the national image, incites the splitting of the country, undermines national unity and social stability, advocates terrorism or extremism, propagates ethnic hatred or discrimination, or involves violence, obscenity, pornography, or other false and harmful information prohibited by laws and administrative regulations.
- Respect IP rights and business ethics, protect trade secrets, do not use advantages in algorithms, data, platforms, etc. to engage in monopolistic or unfair competitive practices.
- Protect the legal rights and interests of consumers and workers in accordance with the law, respect the lawful rights and interests of others, and do not harm the physical and mental health of others. Do not infringe upon others' right of publicity, reputation, honor, privacy, and personal information.
Chapter 2: Support and Promotion of AI
Article 15: (AI Development Plan) The State implements the New Generation AI Development Plan, adheres to the simultaneous promotion of AI research and development, product applications, and industrial cultivation, and comprehensively supports scientific, economic, and social development as well as national security.
Provincial-level and above people's governments should incorporate the development of AI into their national economic and social development plans, and formulate AI development plans as needed.
Article 16: (Construction of Compute Infrastructure) Comprehensively deploy and implement the construction of a national integrated big data center system, optimize the layout of data center infrastructure construction, and cultivate and strengthen data center clusters.
The State introduces measures for supplying public compute resources for AI, promotes the construction and utilization of public computing resource platforms, strengthens the scientific allocation of compute, and provides public computing support for the development of AI technology and the AI industry.
Encourage and support higher education institutions, scientific research institutions, enterprises, and other organizations to build AI compute infrastructure, open-source development platforms, and open-source communities. Carry out market-oriented transactions of computing resources, guide various industries in the rational and orderly use of computing resources, and improve the efficiency of computing infrastructure utilization.
Article 17: (Algorithm and Foundation Model Innovation) The State supports the innovation of AI algorithms, strengthens the legal protection of foundation models, and promotes innovative development and applications of foundation models.
Article 18: (Data Factors Supply) The State supports the construction of foundational and specialized databases in the field of AI, promotes the efficient aggregation and shared utilization of data resources, and expands the scope of public data supply for AI applications.
It encourages and guides relevant entities to carry out coordinated research and development on big data and AI technology, and supports relevant entities to deeply integrate data with industry knowledge, develop data products, and serve the requirements of algorithm design, model training, product validation, application to different scenarios, etc.
Article 19: (Industrial Development and Application Innovation) Accelerate the transformation of key AI technologies into applications, promote technology integration with business model innovations, promote innovation of key fields of intelligent products, actively cultivate emerging business forms of AI, and create an internationally competitive AI industry cluster.
Promote the integrated innovation of AI with various industries, carry out pilot demonstrations of AI applications in key industries and fields, promote the scaling of AI applications, and support the application and promotion of new technologies, products, services, and models of AI.
Article 20: (Professional Talents Training) The State supports higher education institutions in improving the layout of the academic field of AI, establishing AI as an academic major, and promoting the construction of first-level disciplines in the field of AI.
It encourages higher education institutions, research organizations, and enterprises to conduct basic theoretical research on major scientific frontiers in the field of AI and R&D of key generic technologies, and undertake major scientific and industrial innovation projects.
The State supports the establishment of project management innovation mechanisms, innovative talent evaluation mechanisms, and scientific and technological achievements incentive mechanisms, etc. that are beneficial for promoting AI development.
Article 21: (Financial and Fund Support) The State encourages various localities and departments to leverage investment guidance funds to provide support, focusing on specialized assistance in core areas and key links of AI.
The State encourages social capital to participate in the development of the AI industry.
Article 22: (Pilot Programs in State Agencies) Encourage government agencies, public institutions, state-owned enterprises, and other organizations authorized by law to manage public affairs to take the lead in piloting applications of AI technology in areas such as government services and public management, etc. Priority should be given to the procurement and use of safe/secure and reliable AI products and services.
Chapter 3: AI Management System
Article 23: (Categorized Management System) The State establishes a Negative List system for AI, implements permit management for products and services on the Negative List, and implements filing management for products and services outside of the Negative List.
The state body responsible for AI, considering the significance of AI in economic and social development, as well as the potential harm to national security, public interest, the legal rights and interests of individuals and organizations, and economic order if attacked, tampered with, destroyed, or illegally acquired and utilized, will take the lead in formulating and periodically updating the Negative List for AI products and services.
Article 24: (Negative List Management System) Before undertaking the research and development or provision of products and services that are within the scope of the AI Negative List, an administrative permit from the state body responsible for AI should be obtained.
It is prohibited to engage in the research and development or provision of AI activities within the scope of the Negative List without permission or beyond the scope of the granted permission.
Article 25: (Conditions for a Negative List Permit) To apply for a permit for research and development or provision of AI within the scope of the Negative List, the following conditions must be met:
- Being a legal person lawfully established within the territory of the PRC;
- The main person in charge must be a Chinese citizen;
- Having expert personnel with specialized knowledge on quality control, safety assurance, human supervision, and compliance management, etc.;
- Having a sound AI quality management system, network data security management system, and science and technology ethics review system;
- Having safe and controllable technical assurance measures for AI;
- Having AI emergency response mechanisms suited to the risks;
- Having the premises, facilities, and funding suited to AI research, development, or provision;
- Other provisions under laws or administrative regulations.
Article 26: (Application for a Negative List Permit) AI researchers, developers, and providers applying for permits to research and develop or provide AI products on the Negative List should submit the following materials:
- A letter of application;
- Legal person certification, evidence of premises, funding, etc.;
- Evidence that the main person in charge is a Chinese citizen;
- Qualifications of specialist personnel responsible for quality control, safety assurance, human supervision and compliance management;
- AI quality management system, network data security system, science and technology ethics review system, risk management system, and their implementation situation;
- Technical documentation catalogs and daily summary records of technical assurance measures, conforming to the provisions of this Law;
- AI security assessment reports;
- Other materials specified by laws and regulations.
Article 27: (Approval of Negative List Permit) After receiving the application for research and development or provision of AI within the Negative List, the state body responsible for AI should conduct a preliminary review within 10 working days.
Upon preliminary review, if the application materials submitted by the researchers, developers, and providers of AI are found not to meet the requirements, the state body responsible for AI may require them to be supplemented or corrected. Where the AI researcher, developer, or provider fails to supplement or correct them without a proper reason, the application will be deemed withdrawn.
Upon preliminary review, if all the required materials are complete, the state body responsible for AI should complete the review within 45 days from the date of accepting the application, and make a decision to either grant or deny the permit. If permission is granted, a permit for the research and development or provision of AI will be issued to the applicant; if permission is denied, the applicant should be notified in writing and the reasons explained.
If a decision cannot be made within 45 days, with the approval of the person responsible within the state body responsible for AI, an extension of 10 days can be given, and the applicant should be notified about the period and reasons for the extension.
Article 28: (Revocation of Negative List Permits) If AI researchers, developers, or providers violate the stipulations of this Law during their research and development or provision activities, and major security incidents, repeated security incidents, or repeated administrative punishments occur, the state body responsible for AI may suspend permits and order them to make corrections within a designated time frame. If the corrections are not made within the required time, or security incidents or administrative punishments occur again following the suspension, the state body responsible for AI may revoke permits.
Article 29: (Reapplication of Permits Under the Negative List) Permits for AI research and development or provision within the Negative List should specify the duration and scope of the usage license.
Where activities exceed the scope of the permit, or where technological improvements, changes in usage scenarios, changes in user communities, or similar developments lead to changes in AI risks, Negative List AI researchers, developers, and providers should reapply for a research and development or provision permit.
Six months before the expiration of the usage permit, Negative List AI researchers, developers, and providers may apply for renewal of their research and development or provision permits.
Negative List AI researchers, developers, and providers terminating their AI research and development or provision under the permit should, within three months of termination, apply to the state body responsible for AI for cancellation of their research and development or provision permit.
Article 30: (Disclosure of Permit) AI researchers, developers, and providers within the Negative List must indicate the permit number in a prominent position on the AI products and services they provide.
Article 31: (Complaints and Reports) Individuals and organizations that discover unlawful activities related to the research and development or provision of AI within the Negative List have the right to file complaints and reports with the state body responsible for AI. The state body responsible for AI should promptly verify and address such complaints and reports.
Article 32: (Clarification Mechanism) Individuals and organizations that have doubts or questions about the research and development or provision of AI within the Negative List have the right to request clarification from the state body responsible for AI. The state body should promptly respond and address these requests.
Chapter 4: Obligations of AI Researchers, Developers and Providers
Section 1: General Stipulations
Article 33: (Safety and Security-Related Obligations) Before putting AI into use or on the market, AI researchers, developers, and providers should conduct security testing to prevent AI security risks associated with network data. Moreover, they should guarantee that the AI operates safely/securely and stably throughout its entire life cycle, in conformity with the predetermined goals.
AI researchers, developers, and providers should announce safety/security best practices in a timely manner, and guide users to safely and correctly use the AI.
AI researchers, developers, and providers must ensure traceability, in accordance with the records, technical documentation, and so forth stipulated in this law, so that when an accident occurs, they can trace and locate problems in a timely and accurate way, thus guaranteeing the safety/security of the AI.
The relevant organizations and individuals are encouraged to notify AI researchers, developers, and providers of security vulnerabilities in their products and services.
AI researchers, developers, and providers should fulfill their obligations related to security vulnerability management in accordance with the relevant regulations, to ensure that security vulnerabilities are remedied and appropriately disclosed in a timely manner, and they should also guide and support users in taking preventive measures.
Article 34: (Obligation to Remedy and Notify) AI researchers, developers, and providers should strengthen risk monitoring, and they should immediately take remedial measures when they discover security defects and vulnerabilities.
When a security incident occurs, AI researchers and developers should immediately take measures to remedy the situation, and notify AI providers. AI providers should immediately take measures to deal with the situation and, in accordance with the relevant national regulations, should inform users in a timely manner and also report the following to the government body with authority over the matter:
- The process by which the security incident occurred, and the scope of its impact;
- The remedial measures that the AI provider has already taken, and the measures that users can take to ameliorate the damage;
- Ways to contact the AI provider.
If the AI provider takes measures that can effectively prevent actual harm to users, it may opt not to notify users; if the state body responsible for AI believes that this might cause harm, it has the authority to require the provider to notify affected users.
When the situations in paragraphs one and two of this article occur, the AI provider should notify the AI researchers and developers in a timely way. The AI researchers and developers should immediately conduct an evaluation. If the evaluation reveals that problems or risks existed in the research and development stage, the researchers and developers should immediately notify other AI providers, and while the problem or risk remains unresolved, they should suspend or take down the AI product or service. When the other AI providers receive the notification, they should immediately take measures to deal with the situation, so as to control risks associated with the product or service.
Article 35: (Obligations Related to Openness and Transparency) When AI providers provide AI services that interact with natural persons, they should, before use begins and in a concise, clear, and easily understood manner, inform the natural persons interacting with the AI products and services that they are interacting with an AI service. This does not apply where natural persons can determine this from the use scenario.
AI providers that provide deep synthesis services should also include prominent markings on synthesized content, in accordance with the relevant national regulations, so as to alert the audience that the content has been deeply synthesized.
Before users use AI, AI providers should take appropriate measures to communicate the following information to users:
- The basic principles, purpose and intent, and main operating mechanism of the product or service;
- The publicly disclosed permitting or filing information associated with the product or service;
- The user’s rights and channels for remediation;
- Other information as stipulated in laws and administrative regulations.
AI providers should fully guarantee that special groups such as minors, the elderly, and people with disabilities can understand the aforementioned information, taking fully into account such factors as the product or service's use scenario, nature, audience, and the level of technological development in the industry.
AI researchers and developers should cooperate with providers in fulfilling the aforementioned obligations.
Article 36: (Explainability-Related Obligations) When AI products and services have a large impact on individuals’ rights and interests, AI users have the right to demand that providers provide an explanation on the decision-making processes, methods, etc. of their products and services, and they have the right to lodge complaints over unreasonable explanations. Providers should give timely feedback to user demands, taking fully into account factors such as risks, the scenario, and the level of technological development in the industry. Researchers and developers should cooperate with providers in fulfilling the obligations in this article.
Article 37: (Fairness-Related Obligations) During the course of training data processing and labeling, design and development of algorithms and models, and verification and testing, AI researchers and developers should take the necessary measures to effectively prevent harmful biases and discrimination.
During the process of providing products and services, AI providers should strengthen management of input data and output data, so as to effectively prevent harmful biases and discrimination.
Article 38: (Risk Management) AI providers should construct and implement a robust, whole-life-cycle risk management system, taking fully into account such factors as the usage scenario of the product or service, its nature, its audience, and the level of technological development in the industry. Before a product or service goes into use, and during the course of usage, AI providers should identify AI risks, evaluate risks, and adopt reasonable and necessary measures to prevent and control risks. AI providers should keep records of risk identification, evaluation, and management, and should save them for no fewer than three years.
AI researchers and developers should establish and operate a risk management system for the research and development process, including design, data collection and training, model choice, and testing and verification, and they should identify risks and adopt reasonable and necessary measures to reduce risks. While guaranteeing that trade secrets are protected, AI researchers and developers should cooperate in providing risk evaluation and management records, and should support AI providers in completing their AI risk management obligations. AI researchers and developers should keep risk evaluation and management records, and should save them for no fewer than three years.
Article 39: (Safety/Security Evaluation) AI researchers, developers, and providers should carry out safety/security evaluations in accordance with relevant national regulations.
Article 40: (Science and Technology Ethics Review) AI researchers, developers, and providers should carry out science and technology ethics reviews in accordance with relevant national regulations.
Article 41: (Authorized Representatives) AI researchers, developers, and providers outside the borders of the PRC to whom paragraph two of Article 2 of this Law applies should establish dedicated agencies or designate representatives inside the borders of the PRC to be responsible for handling AI-related affairs, and should report the names and contact information of the relevant agencies or representatives to the state body responsible for AI.
Section 2: Obligations of AI Researchers and Developers
Article 42: (Strengthened Obligations for AI Researchers and Developers on the Negative List) AI researchers and developers on the Negative List should fulfill the following obligations:
- Formulate and maintain technical documentation that accords with the requirements of this law, and cooperate with providers in fulfilling relevant obligations;
- During the research and development process, formulate and operate a quality management system that fulfills the requirements of this law, and cooperate with providers in fulfilling relevant obligations;
- During the research and development process, conduct safety/security evaluations, and cooperate with providers in fulfilling relevant obligations;
- Other obligations as stipulated in laws and administrative regulations.
Article 43: (Special Obligations of Foundation Model Researchers and Developers) Foundation model researchers and developers should abide by the following stipulations:
- Establish a robust security risk management system in accordance with national regulations, and effectively and in a timely manner prevent, monitor, and handle risks to national security, the interests of the public, the legitimate rights and interests of individuals and organizations, and economic order;
- Establish robust model management and data management systems for the foundation model in accordance with national regulations;
- Respect the principles of openness, fairness, and impartiality, formulate rules of use for foundation models, clearly define the obligations that researchers, developers, and providers of the foundation model should fulfill, and do not abuse a dominant market position;
- Assist other researchers, developers, and providers in fulfilling relevant obligations;
- Take necessary measures, such as stopping the provision of services, against researchers, developers, and providers that severely violate the stipulations of this law;
- Establish an independent institution composed mainly of external members to supervise foundation model research and development; release an annual social responsibility report and accept supervision by society.
Section 3: Obligations of AI Providers
Article 44: (Obligation to File) AI providers not on the Negative List should file the following information with the government body with authority over the matter within 10 working days of providing a product or service:
- The name and contact information of the AI provider;
- The trademarks or names of the product or service, the form in which the product or service is provided, its area of application, the types of algorithms in it, and a safety/security self-assessment report;
- Proposed public content for the algorithm filing;
- Other information as stipulated in laws and administrative regulations.
If the information that was filed changes, the filer should undertake modification procedures within 10 working days of the date of the modifications.
An AI provider that has completed filing should clearly display the filing number in a prominent position on the website it uses for external provision of services, in its apps, etc.
Article 45: (Filing Process) After the government body with authority over the matter receives the filing materials, if the materials are complete, it should put them on file within 30 working days, issue a file number, and publicly announce it. If the materials are incomplete, it should notify the filer within 10 working days to add additional materials.
Article 46: (Auditing Obligations) The AI provider should conduct an audit at least once every two years, and inspect its input data, algorithms and models, output data, and so forth for compliance, and should review and evaluate whether or not its product and service activities accord with laws and administrative regulations.
Article 47: (Internal Management System) AI providers should adopt the following series of measures to ensure that their AI accords with the stipulations of laws and administrative regulations:
- Formulate systems and corresponding operating rules for internal data security, risk control, quality management, and so forth;
- Maintain an auto-generated log of AI products and services provided;
- Conduct regular employee education and training;
- Adopt appropriate technical measures for compliance in areas such as robustness and resistance to attacks;
- Other measures as stipulated in laws and administrative regulations.
Article 48: (Termination Mechanism) When AI providers terminate the provision of a product or service, they should adopt the following proper planning measures:
- Publicize the termination plan, users' rights, and other relevant information 30 working days in advance;
- Within 30 working days from the date of termination, delete users’ personal information; in accordance with the relevant national regulations, take the necessary steps to deal with the data, training data, algorithms, and models generated during the process of providing the AI product or service;
- Other measures as stipulated in laws and administrative regulations.
Article 49: (Canceling Permits) AI providers on the Negative List that terminate their services should notify the government body with authority over the matter 30 working days in advance and explain the relevant circumstances; following termination of service, they should return their permit certificates to the body that originally granted them.
AI providers not on the Negative List should undertake cancellation filing procedures within 20 working days of their termination of service.
Article 50: (Strengthened Obligations of AI Providers on the Negative List) AI providers on the Negative List should also fulfill the following obligations:
- Conduct safety/security assessments, and ensure safety/security and stability in accordance with the requirements of this law;
- Formulate and maintain technical documentation that accords with the requirements of this law, so as to prove that the provision of AI accords with the requirements of this law relating to AI on the Negative List;
- Establish and operate a whole-life-cycle quality management system that accords with the requirements of this law;
- During the process of automated operation of AI products and services, ensure that humans can take measures such as intervention and taking over at any time;
- Other obligations as stipulated in laws and administrative regulations.
Chapter 5: Comprehensive AI Governance Mechanism
Article 51: (Responsibilities of the State Body Responsible for AI) The state body responsible for AI should fulfill the following AI regulatory responsibilities in accordance with the law:
- Conduct education and publicity on AI ethics and safety/security, guide and supervise the development, provision, and use of AI;
- Formulate AI regulatory rules and guidance, and organize the establishment of standards for AI ethics, safety/security, management, etc.;
- Organize monitoring, evaluation, and audit work for AI technology, and guide professional institutions to carry out activities in accordance with the law;
- Establish mechanisms for AI risk monitoring and warning, and organize work to collect, analyze, and assess information on AI risks and issue warnings accordingly;
- Establish emergency response mechanisms for AI safety/security incidents;
- Receive and process complaints and reports related to the development, provision, and use of AI technology and products;
- Investigate and deal with unlawful activities related to the development, provision, and use of AI;
- Other responsibilities stipulated by laws and administrative regulations.
Article 52: (Security Review System) Those that undertake AI research, development, or provision activities that impact or may impact national security should undergo a security review in accordance with relevant national regulations.
The decision of the security review conducted in accordance with the law is final.
Article 53: (Deadline for Preliminary Procedures) When AI researchers, developers, providers, or users apply for a security review, filing, or administrative license for new applications of AI technologies in accordance with this law and relevant national regulations, the state body responsible for AI should specify a clear working time frame, handle the matter promptly, and respond within the allotted time frame.
Article 54: (Calling in for a Meeting) If the state body responsible for AI and local AI regulatory authorities at all levels, while performing their duties, discover significant risks or safety/security incidents in the research and development or provision of AI, they may call the AI researcher, developer, or provider in for a meeting in accordance with the scope of authority and procedures afforded by regulations, and require them to take the following measures:
- Rectify and eliminate hidden dangers, in accordance with requirements;
- Provide appropriate explanations for their research and development or provision activities, explain the responsibilities for development, management, and operation of their AI services, measures taken to ensure fairness, safety/security, and stability, and the impact on parties with relevant interests, etc.;
- Entrust professional institutions to conduct compliance audits of their AI research and development or provision activities.
AI researchers, developers, and providers who promise to conduct rectification and comply within a specified period of time and can effectively prevent the harm caused by AI research and development, provision, or usage activities can avoid suspending relevant activities. If the state body responsible for AI believes that such activities may cause harm, it may order the suspension of relevant activities.
Article 55: (Innovation Regulation) The state body responsible for AI should formulate a list of minor illegal acts in the research and development, provision, and usage activities of the AI industry that should not be subject to administrative penalties in accordance with the law. Through measures such as criticism and education, guidance interviews, and consultative meetings and discussions, it should guide citizens, legal persons, and other organizations to carry out AI research and development, provision, and use activities in compliance with the law.
Article 56: (Regulatory Sandbox) The state body responsible for AI should establish an AI regulatory testing mechanism, which will release concrete regulations and guidance on the following matters:
- The conditions for participating in regulatory testing;
- The operating mechanism of regulatory testing;
- Obligations and liability reduction mechanisms for AI researchers, developers, and providers.
Article 57: (Law Enforcement Mechanism) The state body responsible for AI should strengthen the construction of specialized teams and professional techniques to improve the efficiency of AI regulatory and law enforcement work.
The state body responsible for AI should determine the administrative law enforcement procedures of the system in accordance with the law, and establish an administrative law enforcement oversight system.
Article 58: (Technical Governance) The state supports enterprises, scientific research institutions, and other organizations in researching and developing technologies related to AI monitoring and warning, safety/security assessments, emergency response, etc. and encourages the application of regulatory and compliance technologies in the field of AI.
Article 59: (Overseas Countermeasures) If organizations or individuals outside the PRC engage in AI research, development, provision, or use that infringes on the legitimate rights and interests of PRC citizens, or harms the PRC’s national security or public interests, the state body responsible for AI can list them on a Negative List, make an announcement, and take measures to restrict or prohibit their research, development, provision, or use of AI within the territory of the PRC in accordance with the law.
Article 60: (Reciprocal Measures) If any country or region adopts discriminatory prohibitions, restrictions, or other similar measures against the PRC regarding research, development, investment, trade, etc. related to AI, the PRC can take reciprocal measures against that country or region based on the actual situation.
Chapter 6: Legal Responsibilities
Article 61: (General Provisions on Administrative Penalties) Those who violate Articles 34 through 51 of this Law in researching, developing, or providing AI should be ordered by the state body responsible for AI to make corrections, be given a warning, have their illegal gains confiscated, and be ordered to suspend or terminate the provision of products or services. Those who refuse to make corrections should be fined up to one million RMB. The supervisory individual directly responsible and other directly responsible personnel should be fined 10,000 to 100,000 RMB.
If there is an illegal act stipulated in the preceding paragraph for which the circumstances are serious, the state body responsible for AI should order rectification, confiscate the illegal income, and impose a maximum fine of 50 million RMB or up to 4% of the previous year's revenue. The state body responsible for AI may also order the suspension of relevant business activity or suspension of business for rectification, notify the relevant competent authorities to revoke relevant business permits or revoke the business license. The supervisory individual directly responsible and other directly responsible personnel should be fined 100,000 to 1 million RMB.
Article 62: (Discretionary Measures for Administrative Fines) The fines stipulated in this Law may supplement or substitute for measures such as ordering rectification. When the state body responsible for AI determines the amount of an administrative fine, it should follow the principles of legality, proportionality, fairness and justice, and combining punishment with education, take all aspects into account, and give sufficient consideration to the following factors:
- The nature, severity, duration, scope of impact, and degree of damage of the unlawful act and its consequences;
- Whether the unlawful act was intentional or negligent;
- Whether remedial measures have been taken to mitigate the potential damages caused by the unlawful act;
- Whether the state body responsible for AI was notified in accordance with the provisions of this law;
- Whether reasonable and effective organizational and technical measures have been taken to manage the risks of AI in accordance with the provisions of this law;
- Whether relevant standards on AI, security, etc. were adhered to or whether relevant certifications were obtained;
- Previous unlawful behavior;
- Factors that increase or reduce penalties as stipulated by other laws and regulations.
Article 63: (Violation of the Permit) Those who violate the provisions of Article 32 of this Law by researching, developing, or providing AI without a permit or beyond the scope of the permit should be ordered by the state body responsible for AI to stop the relevant activities according to its duties, and should be fined up to 4% of the previous year's revenue.
Article 64: (Violation of Filing) If an AI provider obtains the filing through improper means such as concealing relevant information or providing false materials, the state body responsible for AI should revoke the filing, issue a warning, and circulate a notice of criticism. If the circumstances are serious, information updates should be suspended and a fine between 10,000 and 100,000 RMB should be imposed.
Article 65: (Credit Sanctions) Those who engage in illegal activities as stipulated in this Law should be recorded in credit records in accordance with relevant laws and administrative regulations and should be publicly disclosed.
Article 66: (Civil Tort Liability) If the AI that is researched, developed, or provided infringes on personal rights and interests, causing damage, and the researcher, developer, or provider fails to fulfill their obligations under this Law, they should bear tort liability such as compensation for damages, unless the researcher, developer, or provider can prove that they are not at fault.
The liability for compensation stipulated in the preceding paragraph should be determined based on the losses suffered by the user or affected individuals or organizations, or the benefits obtained by the researcher, developer, or provider as a result. If both the losses suffered and the benefits obtained are difficult to determine, the amount of compensation should be determined based on the actual situation.
Article 67: (Public Interest Litigation) If AI is provided in violation of the provisions of this Law and infringes on the rights and interests of a large number of individuals, the People's Procuratorate, consumer organizations as stipulated by law, and organizations determined by the state body responsible for AI may bring a lawsuit to the People's Court in accordance with the law.
Article 68: (Link Between Public Security Management Penalties and Criminal Liability) If a violation of the provisions of this Law constitutes a violation of public security management, public security penalties should be imposed in accordance with the law. If the actions constitute a crime, criminal liability should be pursued in accordance with the law.
Article 69: (Non-Prosecution Given Compliance) If an AI researcher, developer, or provider violates the provisions of this Law and bears administrative liability, and if the compliance construction of the actor meets the effectiveness standards after evaluation, it should be filed as a minor illegal act and exempted from punishment in accordance with the law.
Those who commit a violation of the provisions of this Law that constitute a crime should be included in corporate compliance reform pilot work. If the compliance construction of that company meets the effectiveness standards after evaluation, the People's Procuratorate may, based on the evaluation’s conclusions and in accordance with the law, make a decision not to approve arrest, decide to change the compulsory measures, or choose not to prosecute. It can instead provide sentencing proposals for lenient punishment, or submit procuratorial opinions for lenient punishment or disciplinary action to the relevant competent authority.
Article 70: (Responsibilities For State Bodies That Fail To Perform Duties) State bodies that fail to perform duties stipulated in this law should be ordered by their superior bodies or the state body responsible for AI to make corrections. The directly responsible supervisory individuals or other directly responsible personnel should be punished according to law.
Officials of state bodies who neglect their duties, abuse their powers, or use their position for personal gain, where such conduct does not constitute a crime, should be punished according to law.
If, in accordance with the law, a problem is not attributable to the fault of administrative law enforcement personnel, the relevant staff will not be held responsible for administrative law enforcement.
Chapter 7: Supplementary Provisions
Article 71: (Definitions) The meanings of the following terms used in this law are as follows:
- Artificial Intelligence refers to automated systems that operate with a degree of autonomy, serve particular goals, and are able to influence the physical or virtual environment through prediction, recommendation, or decision-making, etc. This includes data, features, models, service provision interfaces and the embedded terminal devices, etc.
- AI Researchers and Developers refers only to individuals or organizations that conduct AI research and development activities such as designing algorithms, labeling training data, conducting feature extraction, training and optimizing models, and performing testing and deployment.
- AI Providers refers to individuals or organizations that provide AI for commercial purposes or relevant technical support for such services. Or, it refers to individuals or organizations that provide AI services or relevant technical support for such services to the public, regardless of whether it is for commercial purposes.
- AI Users refers to individuals or organizations that make use of AI in accordance with its nature and usages.
Article 72: (Date of Implementation) This law should be effective on [day, month, year].
Article 73: (Public Disclosure System for Negative List) The state body responsible for AI should disclose the Negative List no later than six months before the date of implementation of this law, and publicly disclose it in a timely manner after regular revisions.
1 The term 人工智能安全 can be translated as either AI safety or AI security, because the term 安全 carries both meanings. We have translated it as “safety,” “security,” and “safety/security” in different locations in the text based on our judgment of the intended meaning. We chose “security" when we understood the focus to be more on cybersecurity, data security, or social stability-related issues, and we chose “safety” when the subject matter pertained more closely to the field of AI safety as understood by the international scientific community. We chose “safety/security” when both meanings appeared intended.
Alternative source: https://redian.news/wxnews/536749
Archived copy of alternative source: https://web.archive.org/web/20230822200944/https://redian.news/wxnews/536749