Translation: Artificial Intelligence Law, Model Law v. 1.0 (Expert Suggestion Draft) – Aug. 2023


DigiChina Editor's Note: This is a guest translation organized by Concordia AI. It was edited by Kwan Yee Ng and Jason Zhou, with contributions from Ben Murphy, Rogier Creemers, and Hunter Dorwart. This translation has not been edited by DigiChina for accuracy or house style.

For context and analysis on this unofficial scholars' draft, please see the DigiChina Forum on this document.

TRANSLATION

Artificial Intelligence Law Model Law Version 1.0 (Expert Suggestion Draft)

Chinese Academy of Social Sciences Major National Condition Research Project

Research Group for “Investigation on the Status of the Construction of China’s Artificial Intelligence Ethics Review and Regulatory System”

Chapter 1: General Principles

Article 1: (Legal Basis) In order to regulate the research and development, provision, and usage of artificial intelligence (AI), safeguard national sovereignty, development, and security, to promote the safe development of AI, and to protect the legitimate rights and interests of individuals and organizations, this law is formulated in accordance with the People’s Republic of China (PRC) Constitution. 

Article 2: (Scope of Application) This law applies to the research and development, provision, and use of AI, as well as the regulation of AI, within the borders of the PRC.

Activities related to the research and development, provision, and use of AI conducted outside the territory of the PRC that affect or may affect the national security, public interests, or the legitimate rights and interests of individuals or organizations of the PRC, are subject to this law.

Article 3: (Governing Principles) The State should coordinate development and security, adhere to the combination of promoting innovation and governance according to law, and implement inclusive and prudent regulation.

Article 4: (People-Centered Principle) Activities related to the research and development, provision, or usage of AI should be people-centered and direct intelligence for good. Such activities should ensure that humans can continuously supervise and control AI, with the ultimate goal of always promoting the welfare of humanity.

Article 5: (Safety/Security Principles) Those researching and developing, providing, or using AI should adopt necessary measures to ensure the safety/security of the research and development, provision, and usage of the AI and its related network data.

Article 6: (Principles of Openness, Transparency, and Explainability) Those researching and developing, providing, or using AI should adhere to the principles of openness, transparency, and explainability, and adopt necessary policies to ensure that the purpose, principles, and effects of AI research and development and AI use adhere to the above. 

Article 7: (Principle of Accountability) Those engaged in the research and development, provision, and use of AI should be individually responsible for their respective activities in research and development, provision, and use. 

Article 8: (Principles of Fairness and Equality) Those engaged in the research and development, provision, and use of AI should adhere to the principle of fairness and must not subject individuals or organizations to unreasonable differential treatment. 

Those engaged in the research and development, provision, and use of AI should fully consider the needs of special groups such as minors, the elderly, and people with disabilities, etc.

Article 9: (Green Principle) The State encourages the application of energy-saving technologies in the research and development, provision, and use of AI, as well as the efficient use of resources and the protection of the ecological environment.

Article 10: (Principle of Promoting Development and Innovation) The State supports the construction of AI infrastructure, and promotes the open sharing of public computing power, public data, and other related public resources. It encourages individuals and organizations to legally share computational power, data, and other related resources.

The State encourages AI research and development, as well as applications, protects AI intellectual property (IP) rights in accordance with the law, and innovatively explores IP systems that are adapted to the development of AI. 

Article 11: (International Cooperation) The State actively conducts international, multi-disciplinary, and multi-cultural exchange and cooperation in the domain of AI. It actively participates in the formulation and implementation of international rules and standards related to AI. It promotes the mutual recognition of AI-related rules and standards between other countries, regions, and international organizations. It promotes formulating a widely accepted international governance framework, standards, and norms for AI.

Article 12: (Department Performing these Duties) The National AI Office is the governing body responsible for AI development and management. Other relevant government departments and military-related departments should closely cooperate, strengthen coordination, and carry out relevant work in accordance with the law, based on their respective responsibilities.

The AI management departments and other relevant departments of provinces, autonomous regions, directly-administered municipalities, provincial capitals, cities with special economic zones, and larger cities designated by the State Council will be responsible for AI development and management work in the scope of their jurisdictions, according to relevant national regulations.

Article 13: (Collaborative Governance) The State establishes and improves an AI governance mechanism that involves government management, corporate responsibilities, industry self-governance, social supervision, and user self-discipline, to promote collaborative governance by diverse entities.

Article 14: (Legality and Legitimacy) Activities related to the research and development, provision, and usage of AI should comply with laws and administrative regulations, respect social ethics and moral principles, and adhere to the following provisions:

  1. Adhere to socialist core values and do not generate content that incites the subversion of state power, calls for the overthrow of the socialist system, endangers national security and interests, damages the national image, incites the splitting of the country, undermines national unity and social stability, advocates terrorism or extremism, propagates ethnic hatred or discrimination, or involves violence, obscenity, pornography, or other false and harmful information prohibited by laws and administrative regulations.
  2. Respect IP rights and business ethics, protect trade secrets, do not use advantages in algorithms, data, platforms, etc. to engage in monopolistic or unfair competitive practices. 
  3. Protect the legal rights and interests of consumers and workers in accordance with the law, respect the lawful rights and interests of others, and do not harm the physical and mental health of others. Do not infringe upon others' right of publicity, reputation, honor, privacy, and personal information. 

Chapter 2: Support and Promotion of AI

Article 15: (AI Development Plan) The State implements the Next Generation AI Development Plan, adheres to the simultaneous promotion of AI research and development, product applications, and industrial cultivation, and comprehensively supports scientific, economic, and social development as well as national security.

Provincial-level and above people's governments should incorporate the development of AI into their national economic and social development plans, and formulate AI development plans as needed.

Article 16: (Construction of Compute Infrastructure) Comprehensively deploy and implement the construction of a national integrated big data center system, optimize the layout of data center infrastructure construction, and cultivate and strengthen data center clusters.

The State introduces measures for supplying public compute resources for AI, promotes the construction and utilization of public computing resource platforms, strengthens the scientific allocation of compute, and provides public computing support for the development of AI technology and the AI industry.

Encourage and support higher education institutions, scientific research institutions, enterprises, and other organizations to build AI compute infrastructure, open-source development platforms, and open-source communities. Carry out market-oriented transactions of computing resources, guide various industries in the rational and orderly use of computing resources, and improve the efficiency of computing infrastructure utilization.

Article 17: (Algorithm and Foundation Model Innovation) The State supports the innovation of AI algorithms, strengthens the legal protection of foundation models, and promotes innovative development and applications of foundation models.

Article 18: (Data Factors Supply) The State supports the construction of foundational and specialized databases in the field of AI, promotes the efficient aggregation and shared utilization of data resources, and expands the scope of public data supply for AI applications.

It encourages and guides relevant entities to carry out coordinated research and development on big data and AI technology, and supports relevant entities in deeply integrating data with industry knowledge, developing data products, and serving the requirements of algorithm design, model training, product validation, application to different scenarios, etc.

Article 19: (Industrial Development and Application Innovation) Accelerate the transformation of key AI technologies into applications, promote technology integration with business model innovations, promote innovation of key fields of intelligent products, actively cultivate emerging business forms of AI, and create an internationally competitive AI industry cluster.

Promote the integrated innovation of AI with various industries, carry out pilot demonstrations of AI applications in key industries and fields, promote the scaling of AI applications, and support the application and promotion of new technologies, products, services, and models of AI.

Article 20: (Professional Talents Training) The State supports higher education institutions in improving the layout of the academic field of AI, establishing AI as an academic major, and promoting the construction of first-level disciplines in the field of AI.

It encourages higher education institutions, research organizations, and enterprises to conduct basic theoretical research on major scientific frontiers in the field of AI and R&D of key generic technologies, and undertake major scientific and industrial innovation projects.

The State supports the establishment of project management innovation mechanisms, innovative talent evaluation mechanisms, and scientific and technological achievements incentive mechanisms, etc. that are beneficial for promoting AI development.

Article 21: (Financial and Fund Support) The State encourages various localities and departments to leverage investment guidance funds to provide support, focusing on specialized assistance in core areas and key links of AI.

The State encourages social capital to participate in the development of the AI industry.

Article 22: (Pilot Programs in State Agencies) Encourage government agencies, public institutions, state-owned enterprises, and other organizations authorized by law to manage public affairs to take the lead in piloting applications of AI technology in areas such as government services and public management, etc. Priority should be given to the procurement and use of safe/secure and reliable AI products and services.

Chapter 3: AI Management System

Article 23: (Categorized Management System) The State establishes a Negative List system for AI, implements permit management for products and services on the Negative List, and implements filing management for products and services outside of the Negative List.

The state body responsible for AI, considering the significance of AI in economic and social development, as well as the potential harm to national security, public interest, the legal rights and interests of individuals and organizations, and economic order if attacked, tampered with, destroyed, or illegally acquired and utilized, will take the lead in formulating and periodically updating the Negative List for AI products and services.

Article 24: (Negative List Management System) Before undertaking the research and development or provision of products and services that are within the scope of the AI Negative List, an administrative permit from the state body responsible for AI should be obtained.

It is prohibited to engage in the research and development or provision of AI activities within the scope of the Negative List without permission or beyond the scope of the granted permission.

Article 25: (Conditions for a Negative List Permit) To apply for a permit for research and development or provision of AI within the scope of the Negative List, the following conditions must be met:

  1. Being a legal person lawfully established within the territory of the PRC;
  2. The main person in charge must be a Chinese citizen;
  3. Having expert personnel with specialized knowledge on quality control, safety assurance, human supervision, and compliance management, etc.;
  4. Having a sound AI quality management system, network data security management system, and science and technology ethics review system;
  5. Having safe and controllable technical assurance measures for AI;
  6. Having AI emergency response mechanisms suited to the risks;
  7. Having the premises, facilities, and funding suited to AI research, development, or provision;
  8. Other provisions under laws or administrative regulations.

Article 26: (Application for Negative List Permit) AI researchers, developers, and providers applying for permits to research and develop or provide AI products on Negative Lists should submit the following materials:

  1. A letter of application;
  2. Legal person certification, evidence of premises, funding, etc.;
  3. Evidence that the main person in charge is a Chinese citizen;
  4. Qualifications of specialist personnel responsible for quality control, safety assurance, human supervision and compliance management;
  5. AI quality management system, network data security system, science and technology ethics review system, risk management system, and their implementation situation;
  6. Technical documentation catalogs and daily summary records of technical assurance measures, conforming to the provisions of this Law;
  7. AI security assessment reports;
  8. Other materials specified by laws and regulations.

Article 27: (Approval of Negative List Permit) After receiving the application for research and development or provision of AI within the Negative List, the state body responsible for AI should conduct a preliminary review within 10 working days.

Upon preliminary review, if the application materials submitted by the researchers, developers, and providers of AI are found not to meet the requirements, the state body responsible for AI may require them to be supplemented or corrected. If the AI researcher, developer, or provider fails, without proper reason, to supplement or correct them, the application will be deemed withdrawn.

Upon preliminary review, if all the required materials are complete, the state body responsible for AI should complete the review within 45 days from the date of accepting the application, and make a decision to either grant or deny the permit. If permission is granted, a permit for the research and development or provision of AI will be issued to the applicant; if permission is denied, the applicant should be notified in writing and the reasons explained.

If a decision cannot be made within 45 days, with the approval of the person responsible within the state body responsible for AI, an extension of 10 days can be given, and the applicant should be notified about the period and reasons for the extension.

Article 28: (Revocation of Negative List Permits) If AI researchers, developers, or providers violate the stipulations of this Law during their research and development or provision activities, and major security incidents, repeated security incidents, or repeated administrative punishments occur, the state body responsible for AI may suspend permits and order them to make corrections within a designated time frame. If the corrections are not made within the required time, or security incidents or administrative punishments occur again following the suspension, the state body responsible for AI may revoke permits.

Article 29: (Reapplication of Permits Under the Negative List) Permits for AI research and development or provision within the Negative List should specify the duration and scope of the usage license.

Where the scope of the permit is exceeded, or where technological improvements, changes in usage scenarios, changes in user communities, etc. lead to changes in AI risks, Negative List AI researchers, developers, and providers should reapply for a research and development or provision permit.

Six months before the expiration of the usage permit, Negative List AI researchers, developers, and providers may apply for renewal of their research and development or provision permits.

Negative List AI researchers, developers, and providers terminating their AI research and development or provision under the permit should, within three months of termination, apply to the state body responsible for AI for cancellation of their research and development or provision permit.

Article 30: (Disclosure of Permit) AI researchers, developers, and providers within the Negative List must indicate the permit number in a prominent position on the AI products and services they provide.

Article 31: (Complaints and Reports) Individuals and organizations that discover unlawful activities related to the research and development or provision of AI within the Negative List have the right to file complaints and reports with the state body responsible for AI. The state body responsible for AI should promptly verify and address such complaints and reports.

Article 32: (Clarification Mechanism) Individuals and organizations that have doubts or questions about the research and development or provision of AI within the Negative List have the right to request clarification from the state body responsible for AI. The state body should promptly respond and address these requests.

Chapter 4: Obligations of AI Researchers, Developers and Providers

Section 1: General Stipulations

Article 33: (Safety and Security-Related Obligations)  Before putting AI into use or putting it on the market, AI researchers, developers, and providers should conduct security testing to prevent AI security risks associated with network data. Moreover, they should guarantee that the safety/security and stable operations of the AI’s entire life cycle conform to the predetermined goals.

AI researchers, developers, and providers should announce safety/security best practices in a timely manner, and guide users to safely and correctly use the AI.

AI researchers, developers, and providers must ensure traceability, in accordance with the records, technical documentation, and so forth stipulated in this law, so that when an accident occurs, they can trace and locate problems in a timely and accurate way, thus guaranteeing the safety/security of the AI.

The relevant organizations and individuals are encouraged to notify AI researchers, developers, and providers of security vulnerabilities in their products and services.

AI researchers, developers, and providers should fulfill their obligations related to security vulnerability management in accordance with the relevant regulations, to ensure that security vulnerabilities are remedied and appropriately disclosed in a timely manner, and they should also guide and support users in taking preventive measures.

Article 34: (Obligation to Remedy and Notify)  AI researchers, developers, and providers should strengthen risk monitoring, and they should immediately take remedial measures when they discover security defects and vulnerabilities.

When a security incident occurs, AI researchers and developers should immediately take measures to remedy the situation, and notify AI providers. AI providers should immediately take measures to deal with the situation and, in accordance with the relevant national regulations, should inform users in a timely manner and also report the following to the government body with authority over the matter:

  1. The process by which the security incident occurred, and the scope of its impact;
  2. The remedial measures that the AI provider has already taken, and the measures that users can take to ameliorate the damage;
  3. Ways to contact the AI provider.

If the AI provider takes measures that can effectively prevent actual harm to users, it may opt not to notify users; if the state body responsible for AI believes that this might cause harm, it has the authority to require the provider to notify affected users.

When the situations in paragraphs one and two of this article occur, the AI provider should notify the AI researchers and developers in a timely way. The AI researchers and developers should immediately conduct an evaluation. If the evaluation reveals that problems or risks existed in the research and development stage, the researchers and developers should immediately notify other AI providers, and while the problem or risk remains unresolved, they should suspend or take down the AI product or service. When the other AI providers receive the notification, they should immediately take measures to deal with the situation, so as to control risks associated with the product or service.

Article 35: (Obligations Related to Openness and Transparency)  When AI providers provide AI services that interact with natural persons, they should, before the user begins using the AI, and in a concise, clear, and easily understood manner, inform natural persons who interact with the AI products and services that they are interacting with an AI service. However, this excludes instances in which natural persons can determine this from the use scenario.

AI providers that provide deep synthesis services should also include prominent markings on synthesized content, in accordance with the relevant national regulations, so as to alert the audience to the use of deep synthesis.

Before users use AI, AI providers should take appropriate measures to communicate the following information to users:

  1. The basic principles, purpose and intent, and main operating mechanism of the product or service;
  2. The publicly disclosed permit or filing information associated with the product or service;
  3. The user’s rights and channels for remediation;
  4. Other information as stipulated in laws and administrative regulations.

AI providers should fully guarantee that special groups such as minors, the elderly, and people with disabilities can understand the aforementioned information, taking fully into account such factors as the product or service’s use scenario, nature, audience, and the level of technological development in the industry.

AI researchers and developers should cooperate with providers in fulfilling the aforementioned obligations.

Article 36: (Explainability-Related Obligations) When AI products and services have a large impact on individuals’ rights and interests, AI users have the right to demand that providers provide an explanation of the decision-making processes, methods, etc. of their products and services, and they have the right to lodge complaints over unreasonable explanations. Providers should respond to user demands in a timely manner, taking fully into account factors such as risks, the scenario, and the level of technological development in the industry. Researchers and developers should cooperate with providers in fulfilling the obligations in this article.

Article 37: (Fairness-Related Obligations)  During the course of training data processing and labeling, design and development of algorithms and models, and verification and testing, AI researchers and developers should take the necessary measures to effectively prevent harmful biases and discrimination.

During the process of providing products and services, AI providers should strengthen management of input data and output data, so as to effectively prevent harmful biases and discrimination.

Article 38: (Risk Management)  AI providers should construct and implement a robust, whole-life-cycle risk management system, taking fully into account such factors as the usage scenario of the product or service, its nature, its audience, and the level of technological development in the industry. Before a product or service goes into use, and during the course of usage, AI providers should identify AI risks, evaluate risks, and adopt reasonable and necessary measures to prevent and control risks. AI providers should keep records of risk identification, evaluation, and management, and should save them for no fewer than three years.

AI researchers and developers should establish and operate a risk management system for the research and development  process, including design, data collection and training, model choice, and testing and verification, and they should identify risks and adopt reasonable and necessary measures to reduce risks. While guaranteeing that trade secrets are protected, AI researchers and developers should cooperate in providing risk evaluation and management records, and should support AI providers in completing their AI risk management obligations. AI providers should keep risk evaluation and management records, and should save them for no fewer than three years.

Article 39: (Safety/Security Evaluation)  AI researchers, developers, and providers should carry out safety/security evaluations in accordance with relevant national regulations.

Article 40: (Science and Technology Ethics Review)  AI researchers, developers, and providers should carry out science and technology ethics reviews in accordance with relevant national regulations.

Article 41: (Authorized Representatives)  As stipulated in paragraph two of Article Two of this law, AI researchers, developers, and providers outside the borders of the PRC should establish dedicated agencies or designated representatives inside the borders of the PRC, who will be responsible for handling AI-related affairs, and should report the names of the relevant agencies or representatives and the means to contact them to the state body responsible for AI.

Section 2: Obligations of AI Researchers and Developers

Article 42: (Strengthened Obligations for AI Researchers and Developers on the Negative List) AI researchers and developers on the Negative List should fulfill the following obligations:

  1. Formulate and maintain technical documentation that accords with the requirements of this law, and cooperate with providers in fulfilling relevant obligations;
  2. During the research and development process, formulate and operate a quality management system that fulfills the requirements of this law, and cooperate with providers in fulfilling relevant obligations;
  3. During the research and development process, conduct safety/security evaluations, and cooperate with providers in fulfilling relevant obligations;
  4. Other obligations as stipulated in laws and administrative regulations.

Article 43: (Special Obligations of Foundation Model Researchers and Developers)  Foundation model researchers and developers should abide by the following stipulations:

  1. Establish a robust security risk management system in accordance with national regulations, and effectively and in a timely manner prevent, monitor, and handle risks to national security, the interests of the public, the legitimate rights and interests of individuals and organizations, and economic order;
  2. Establish robust model management and data management systems for the foundation model in accordance with national regulations;
  3. Respect the principles of openness, fairness, and impartiality, formulate rules of use for foundation models, clearly define the obligations that researchers, developers, and providers of the foundation model should fulfill, and do not abuse a dominant market position;
  4. Assist other researchers, developers, and providers in fulfilling relevant obligations;
  5. Take necessary measures, such as stopping the provision of services, against researchers, developers, and providers that severely violate the stipulations of this law;
  6. Establish an independent institution composed mainly of external members that supervises the research and development situation of the foundation model; every year, release a social responsibility report, and accept supervision by society.

Section 3: Obligations of AI Providers

Article 44: (Obligation to File) AI providers not on the Negative List should file the following information with the government body with authority over the matter within 10 working days of providing a product or service:

  1. The name and contact information of the AI provider;
  2. The trademarks or names of the product or service, the form in which the product or service is provided, its area of application, the types of algorithms in it, and a safety/security self-assessment report;
  3. Proposed public content for the algorithm filing;
  4. Other information as stipulated in laws and administrative regulations.

If the information that was filed changes, the filer should undertake modification procedures within 10 working days of the date of the modifications.

An AI provider that has completed filing should clearly display the filing number in a prominent position on the website it uses for external provision of services, in its apps, etc.

Article 45: (Filing Process) After the government body with authority over the matter receives the filing materials, if the materials are complete, it should put them on file within 30 working days, issue a filing number, and publicly announce it. If the materials are incomplete, it should notify the filer within 10 working days to supplement the materials.

Article 46: (Auditing Obligations)  The AI provider should conduct an audit at least once every two years, and inspect its input data, algorithms and models, output data, and so forth for compliance, and should review and evaluate whether or not its product and service activities accord with laws and administrative regulations.

Article 47: (Internal Management System)  AI providers should adopt the following series of measures to ensure that their AI accords with the stipulations of laws and administrative regulations:

  1. Formulate systems and corresponding operating rules for internal data security, risk control, quality management, and so forth;
  2. Maintain automatically generated logs of the AI products and services provided;
  3. Conduct regular employee education and training;
  4. Adopt appropriate technical measures for compliance in areas such as robustness and resistance to attacks;
  5. Other measures as stipulated in laws and administrative regulations.

Article 48: (Termination Mechanism)  When AI providers terminate the provision of a product or service, they should adopt the following proper planning measures:

  1. Publicize the termination plan, users’ rights, and so on 30 working days in advance;
  2. Within 30 working days from the date of termination, delete users’ personal information; in accordance with the relevant national regulations, take the necessary steps to deal with the data, training data, algorithms, and models generated during the process of providing the AI product or service;
  3. Other measures as stipulated in laws and administrative regulations.

Article 49: (Canceling Permits)  AI providers on the Negative List should notify the government body with authority over the matter 30 working days in advance, explain the relevant circumstances, and, following termination of service, return their permit certificates to the original issuing body.

AI providers not on the Negative List should undertake cancellation filing procedures within 20 working days of their termination of service.

Article 50: (Strengthened Obligations of AI Providers on the Negative List)  AI providers on the Negative List should also fulfill the following obligations:

  1. Conduct safety/security assessments, and ensure safety/security and stability in accordance with the requirements of this law;
  2. Formulate and maintain technical documentation that accords with the requirements of this law, so as to prove that the provision of AI accords with the requirements of this law relating to AI on the Negative List;
  3. Establish and operate a whole-life-cycle quality management system that accords with the requirements of this law;
  4. During the process of automated operation of AI products and services, ensure that humans can take measures such as intervention and taking over at any time;
  5. Other obligations as stipulated in laws and administrative regulations.

Chapter 5: Comprehensive AI Governance Mechanism

Article 51: (Responsibilities of the State Body Responsible for AI) The state body responsible for AI should fulfill the following AI regulatory responsibilities in accordance with the law:

  1. Conduct education and publicity on AI ethics and safety/security, guide and supervise the development, provision, and use of AI;
  2. Formulate AI regulatory rules and guidance, and organize the establishment of standards for AI ethics, safety/security, management, etc.; 
  3. Organize monitoring, evaluation, and audit work for AI technology, and guide professional institutions to carry out activities in accordance with the law;
  4. Establish mechanisms for AI risk monitoring and warning, and organize work to obtain, analyze, study, and make warnings based on information on risks regarding AI; 
  5. Establish emergency response mechanisms for AI safety/security incidents;
  6. Receive and process complaints and reports related to the development, provision, and use of AI technology and products;
  7. Investigate and deal with unlawful activities related to the development, provision, and use of AI;
  8. Other responsibilities stipulated by laws and administrative regulations.

Article 52: (Security Review System) Those that undertake AI research, development, or provision activities that impact or may impact national security should undergo a security review in accordance with relevant national regulations. 

The decision of the security review conducted in accordance with the law is final.

Article 53: (Deadline for Preliminary Procedures) When AI researchers, developers, providers, or users apply for a security review, filing, or administrative license for new AI technologies and applications in accordance with this law and relevant national regulations, the state body responsible for AI should specify a clear working time frame and should process the matter and respond promptly within that time frame.

Article 54: (Calling in for a Meeting) If the state body responsible for AI and local AI regulatory authorities at all levels, while performing their duties, discover significant risks or safety/security incidents in the research and development or provision of AI, they may call the AI researcher, developer, or provider in for a meeting in accordance with the scope of authority and procedures afforded by regulations, and require them to take the following measures:

  1. Rectify and eliminate hidden dangers, in accordance with requirements;
  2. Provide appropriate explanations for their research and development or provision activities, explain the responsibilities for development, management, and operation of their AI services, measures taken to ensure fairness, safety/security, and stability, and the impact on parties with relevant interests, etc.;
  3. Entrust professional institutions to conduct compliance audits of their AI research and development or provision activities.

AI researchers, developers, and providers who commit to rectification and compliance within a specified period of time, and who can effectively prevent harm from AI research, development, provision, or use activities, may continue the relevant activities without suspension. If the state body responsible for AI believes that such activities may cause harm, it may order the suspension of the relevant activities.

Article 55: (Innovation Regulation) The state body responsible for AI should formulate a list of minor illegal acts in AI research, development, provision, and use activities that are not subject to administrative penalties in accordance with the law, and, through measures such as criticism and education, guidance meetings, and organized consultations, promote lawful and compliant AI research, development, provision, and use by citizens, legal persons, and other organizations.

Article 56: (Regulatory Sandbox) The state body responsible for AI should establish an AI regulatory testing mechanism, which will release concrete regulations and guidance on the following matters: 

  1. The conditions for participating in regulatory testing;
  2. The operating mechanism of regulatory testing;
  3. Obligations and liability reduction mechanisms for AI researchers, developers, and providers.

Article 57: (Law Enforcement Mechanism) The state body responsible for AI should strengthen the construction of specialized teams and professional techniques to improve the efficiency of AI regulatory and law enforcement work.

The state body responsible for AI should determine the administrative law enforcement procedures within its own system in accordance with the law, and establish an administrative law enforcement oversight system.

Article 58: (Technical Governance) The state supports enterprises, scientific research institutions, and other organizations in researching and developing technologies related to AI monitoring and warning, safety/security assessments, emergency response, etc., and encourages the application of regulatory and compliance technologies in the field of AI.

Article 59: (Overseas Countermeasures) If organizations or individuals outside the PRC engage in AI research, development, provision, or use that infringes on the legitimate rights and interests of PRC citizens, or harms the PRC’s national security or public interests, the state body responsible for AI can list them on a Negative List, make an announcement, and take measures to restrict or prohibit their research, development, provision, or use of AI within the territory of the PRC in accordance with the law.

Article 60: (Reciprocal Measures) If any country or region adopts discriminatory prohibitions, restrictions, or other similar measures against the PRC regarding research, development, investment, trade, etc. related to AI, the PRC can take reciprocal measures against that country or region based on the actual situation.

Chapter 6: Legal Responsibilities

Article 61: (General Provisions on Administrative Penalties) Those who violate Articles 34 through 51 of this Law in researching, developing, or providing AI should be ordered by the state body responsible for AI to make corrections, given a warning, have their illegal gains confiscated, and be ordered to suspend or terminate the provision of products or services. Those who refuse to make corrections should additionally be fined up to one million RMB. The supervisory individual directly responsible and other directly responsible personnel should be fined 10,000 to 100,000 RMB.

If there is an illegal act stipulated in the preceding paragraph for which the circumstances are serious, the state body responsible for AI should order rectification, confiscate the illegal income, and impose a maximum fine of 50 million RMB or up to 4% of the previous year's revenue. The state body responsible for AI may also order the suspension of relevant business activity or suspension of business for rectification, notify the relevant competent authorities to revoke relevant business permits or revoke the business license. The supervisory individual directly responsible and other directly responsible personnel should be fined 100,000 to 1 million RMB.

Article 62: (Discretionary Measures for Administrative Fines) The fines stipulated in this Law may supplement or substitute for measures such as ordered rectification. When the state body responsible for AI determines the amount of an administrative fine, it should follow the principles of legality, proportionality between punishment and offense, fairness and justice, combining punishment with education, and comprehensive discretion, and should sufficiently consider the following factors:

  1. The nature, severity, duration, scope of impact, and degree of damage of the unlawful act and its consequences;
  2. Whether the unlawful act was intentional or negligent;
  3. Whether remedial measures have been taken to mitigate the potential damages caused by the unlawful act;
  4. Whether the state body responsible for AI was notified in accordance with the provisions of this law;
  5. Whether reasonable and effective organizational and technical measures have been taken to manage the risks of AI in accordance with the provisions of this law;
  6. Whether relevant standards on AI, security, etc. were adhered to, or whether relevant certifications were obtained;
  7. Previous unlawful behavior;
  8. Factors that increase or reduce penalties as stipulated by other laws and regulations.

Article 63: (Violation of the Permit) Those who violate the provisions of Article 32 of this Law by researching, developing, or providing AI without a permit or beyond the scope of the permit should be ordered by the state body responsible for AI, in accordance with its duties, to stop the relevant activities, and should be fined up to 4% of the previous year's revenue.

Article 64: (Violation of Filing) If an AI provider obtains its filing through improper means such as concealing relevant information or providing false materials, the state body responsible for AI should revoke the filing, issue a warning, and circulate a notice of criticism; if the circumstances are serious, information updates should be suspended and a fine between 10,000 and 100,000 RMB should be imposed.

If an AI provider terminates its services without undertaking the cancellation filing procedures required by Article 42 of this Law, or is subject to administrative penalties such as being ordered to shut down its website or having its relevant business permit or business license revoked due to serious illegal circumstances, the state body responsible for AI should cancel its filing.

Article 65: (Credit Sanctions) Those who engage in illegal activities as stipulated in this Law should be recorded in credit records in accordance with relevant laws and administrative regulations and should be publicly disclosed.

Article 66: (Civil Tort Liability) If AI that is researched, developed, or provided infringes on personal rights and interests, causing damage, and the researcher, developer, or provider has failed to fulfill their obligations under this Law, they should bear tort liability such as compensation for damages, unless the researcher, developer, or provider can prove that they are not at fault.

The liability for compensation for damages stipulated in the preceding paragraph should be determined based on the losses suffered by the user or affected individuals or organizations, or the benefits obtained by the researcher, developer, or provider as a result. If it is difficult to determine the losses suffered by users or affected individuals or organizations and the benefits obtained by providers as a result, the compensatory payment amount should be determined based on the actual situation.

Article 67: (Public Interest Litigation) If AI is provided in violation of the provisions of this Law and infringes on the rights and interests of a large number of individuals, the People's Procuratorate, consumer organizations as stipulated by law, and organizations determined by the state body responsible for AI may bring a lawsuit to the People's Court in accordance with the law.

Article 68: (Link Between Public Security Management Penalties and Criminal Liability) If a violation of the provisions of this Law constitutes a violation of public security management, public security penalties should be imposed in accordance with the law. If the actions constitute a crime, criminal liability should be pursued in accordance with the law.

Article 69: (Non-Prosecution Given Compliance) If an AI researcher, developer, or provider violates the provisions of this Law and bears administrative liability, and the actor's compliance construction is evaluated as meeting the effectiveness standards, the violation should be classified as a minor illegal act and exempted from punishment in accordance with the law.

Those whose violations of the provisions of this Law constitute a crime should be included in corporate compliance reform pilot work. If the company's compliance construction is evaluated as meeting the effectiveness standards, the People's Procuratorate may, with reference to the evaluation's conclusions and in accordance with the law, decide not to approve arrest, change the compulsory measures, or decline to prosecute; it may also put forward sentencing recommendations for lenient punishment, or submit procuratorial opinions to the relevant competent authority recommending lenient punishment or disciplinary action.

Article 70: (Responsibilities For State Bodies That Fail To Perform Duties) State bodies that fail to perform duties stipulated in this law should be ordered by their superior bodies or the state body responsible for AI to make corrections. The directly responsible supervisory individuals or other directly responsible personnel should be punished according to law. 

Officials of state bodies who neglect their duties, abuse their powers, or engage in favoritism for personal gain, where such conduct does not constitute a crime, should be punished according to law.

In circumstances that, according to law, do not constitute administrative law enforcement error, the administrative law enforcement responsibility of the relevant staff will not be pursued.

Chapter 7 Supplementary Provisions

Article 71: (Definitions) The meanings of the following terms used in this law are:

  1. Artificial Intelligence refers to automated systems that operate with a degree of autonomy, serve particular goals, and are able to influence the physical or virtual environment through prediction, recommendation, or decision-making, etc. This includes data, features, models, service provision interfaces, and embedded terminal devices, etc. 
  2. AI Researchers and Developers refers to individuals or organizations that only conduct AI research and development activities, such as designing algorithms, labeling training data, conducting feature extraction, training and optimizing models, and performing testing and deployment. 
  3. AI Providers refers to individuals or organizations that provide AI for commercial purposes or provide relevant technical support for it, or that provide AI or relevant technical support for it to the public, regardless of whether it is for commercial purposes. 
  4. AI Users refers to individuals or organizations that make use of AI in accordance with its performance and uses. 

Article 72: (Date of Implementation) This law should be effective on [day, month, year]. 

Article 73: (Public Disclosure System for Negative List) The state body responsible for AI should disclose the Negative List no later than six months before the date of implementation of this law, and publicly disclose it in a timely manner after regular revisions. 

Notes:

1 The term 人工智能安全 can be translated as either AI safety or AI security, because the term 安全 carries both meanings. We have translated it as “safety,” “security,” and “safety/security” in different locations in the text based on our judgment of the intended meaning. We chose “security” when we understood the focus to be more on cybersecurity, data security, or social stability-related issues, and we chose “safety” when the subject matter pertained more closely to the field of AI safety as understood by the international scientific community. We chose “safety/security” when both meanings appeared intended.

2 “国务院已经批准的较大的市” is a term of art that refers to cities that have been afforded special governance authorities. For more information, please see the National People’s Congress or Baidu Encyclopedia.


CHINESE-LANGUAGE ORIGINAL

Source: https://mp.weixin.qq.com/s/85D8TjMkN9Tl-oWjq15JiQ

Alternative source: https://redian.news/wxnews/536749

Archived copy of alternative source: https://web.archive.org/web/20230822200944/https://redian.news/wxnews/536749

人工智能法示范法1.0(专家建议稿)

中国社会科学院国情调研重大项目

《我国人工智能伦理审查和监管制度建设状况调研》课题组

第一章  总则

第一条(立法依据) 为了规范人工智能的研发、提供和使用活动,维护国家主权、发展和安全利益,促进人工智能安全发展,保护个人、组织的合法权益,根据宪法,制定本法。

第二条(适用范围) 在中华人民共和国境内从事人工智能的研发、提供和使用活动及其监管,适用本法。

在中华人民共和国境外从事人工智能的研发、提供和使用活动,影响或可能影响中华人民共和国国家安全、公共利益或者个人、组织合法权益的,适用本法。

第三条(治理原则) 国家统筹发展和安全,坚持促进创新和依法治理相结合,实施包容审慎监管。

第四条(以人为本原则) 从事人工智能研发、提供和使用活动应当以人为本、智能向善,确保人类能够始终监督和控制人工智能,始终以促进人类福祉为最终目标。

第五条(安全原则) 从事人工智能研发、提供和使用活动,应当采取必要措施保障所研发、提供和使用的人工智能及其相关网络数据的安全。

第六条(公开透明可解释原则) 从事人工智能研发、提供和使用活动应当遵循公开、透明、可解释原则,采取必要措施对所研发和使用的人工智能的目的、原理和效果予以说明。

第七条(可问责原则) 从事人工智能研发、提供和使用活动,应当分别对其研发、提供和使用活动负责。

第八条(公平平等原则) 从事人工智能研发、提供和使用活动应当遵循公平原则,不得对个人、组织实行不合理的差别待遇。

从事人工智能研发、提供和使用活动应当充分考虑未成年人、老年人、残障人士等特殊群体的需求。

第九条(绿色原则) 国家鼓励人工智能研发、提供和使用活动应用节能技术,高效利用资源,保护生态环境。

第十条(促进发展创新原则) 国家支持人工智能基础设施建设,推动公共算力、公共数据和其他相关公共资源开放共享,鼓励个人、组织依法开放共享算力、数据和其他相关资源。

国家鼓励人工智能研发和应用,依法保护人工智能领域知识产权,创新探索适应人工智能发展的知识产权制度。

第十一条(国际合作)  国家积极开展人工智能领域的跨国家、跨领域、跨文化交流与合作,积极参与人工智能相关国际规则和标准的制定和实施,推动与其他国家、地区、国际组织之间的人工智能相关规则、标准的互认,推进形成具有广泛共识的国际人工智能治理框架和标准规范。

第十二条(履职部门) 国家人工智能办公室是人工智能发展和管理的主管机关。其他有关政府部门和军队有关部门按照职责分工,密切配合,加强协调,依法做好有关工作。

省、自治区、直辖市,省、自治区的人民政府所在地的市,经济特区所在地的市和国务院已经批准的较大的市人工智能主管机关和其他相关部门,按照国家有关规定,在本辖区范围内负责人工智能发展和管理工作。

第十三条(协同共治) 国家建立和完善政府管理、企业履责、行业自治、社会监督、用户自律的人工智能治理机制,促进多元主体协同共治。

第十四条(合法正当) 从事人工智能研发、提供和使用活动应当遵守法律、行政法规,尊重社会公德和伦理道德,遵守以下规定:

(一)坚持社会主义核心价值观,不得生成煽动颠覆国家政权、推翻社会主义制度,危害国家安全和利益、损害国家形象,煽动分裂国家、破坏国家统一和社会稳定,宣扬恐怖主义、极端主义,宣扬民族仇恨、民族歧视,暴力、淫秽色情,以及虚假有害信息等法律、行政法规禁止的内容;

(二)尊重知识产权、商业道德,保守商业秘密,不得利用算法、数据、平台等优势,实施垄断和不正当竞争行为;

(三)依法保护消费者、劳动者权益,尊重他人合法权益,不得危害他人身心健康,不得侵害他人肖像权、名誉权、荣誉权、隐私权和个人信息权益。

第二章  人工智能支持与促进

第十五条(人工智能发展规划) 国家实施新一代人工智能发展规划,坚持人工智能研发攻关、产品应用和产业培育共同推进,全面支撑科技、经济、社会发展和国家安全。

省级以上人民政府应当将人工智能发展纳入本级国民经济和社会发展规划,并根据需要制定人工智能发展规划。

第十六条(算力基础设施建设) 全面部署实施全国一体化大数据中心体系建设,优化数据中心基础设施建设布局,培育壮大数据中心集群。

国家出台人工智能公共算力资源供给办法,推动公共算力资源平台建设与利用,加强算力科学调度,为人工智能技术与产业发展提供公共算力支持。

鼓励和支持高等院校、科研机构、企业和其他组织建设人工智能算力基础设施、开源开发平台和开源社区,开展算力资源市场化交易,引导各行业合理有序使用算力资源,提升算力基础设施利用效能。

第十七条(算法和基础模型创新) 国家支持人工智能算法创新,依法加强基础模型保护,促进基础模型创新开发、应用推广。

第十八条(数据要素供给) 国家支持建设人工智能领域基础数据库和专题数据库,促进数据资源高效汇聚和共享利用,扩大面向人工智能应用的公共数据供给范围。

鼓励引导相关主体开展大数据与人工智能技术协同研发,支持相关主体将数据与行业知识深度融合,开发数据产品,服务算法设计、模型训练、产品验证、场景应用等需求。

第十九条(产业发展与应用创新) 加快人工智能关键技术转化应用,促进技术集成与商业模式创新,推动重点领域智能产品创新,积极培育人工智能新兴业态,打造具有国际竞争力的人工智能产业集群。

推动人工智能与各行业融合创新,在重点行业和领域开展人工智能应用试点示范,推动人工智能规模化应用,支持人工智能新技术、新产品、新服务、新模式的应用推广。

第二十条(专业人才培养) 国家支持高等院校完善人工智能领域学科布局,设立人工智能专业,推动人工智能领域一级学科建设。

鼓励高等学校、科研机构和企业等开展面向人工智能领域重大科学前沿问题的基础理论研究和关键共性技术研发,承担重大科技和产业创新专项。

国家支持建立有利于促进人工智能发展的项目管理创新机制、创新人才评定机制、科技成果转化激励机制等。

第二十一条(财政资金支持) 国家鼓励各地方各部门发挥投资引导基金扶持作用,聚焦人工智能核心领域与关键环节开展专项扶持。

国家鼓励社会资本参与人工智能产业发展。

第二十二条(国家机关先行先试) 鼓励政府机关、事业单位、国有企业及其他依法具有管理公共事务职能的组织,在政务服务、公共管理等领域开展人工智能技术应用先行先试,优先采购和使用安全可靠的人工智能产品和服务。

第三章  人工智能管理制度

第二十三条(分类管理制度) 国家建立人工智能负面清单制度,对负面清单内的产品、服务实施许可管理,对负面清单外的产品、服务实施备案管理。

国家人工智能主管机关根据人工智能在经济社会发展中的重要程度,以及一旦遭到攻击、篡改、破坏或者非法获取、非法利用,对国家安全、公共利益或者个人、组织合法权益、经济秩序造成的危害程度,牵头制定并定期更新人工智能产品、服务负面清单。

第二十四条(负面清单管理制度) 开展人工智能负面清单内产品、服务的研发、提供活动前,应当取得国家人工智能主管机关的行政许可。

禁止未经许可或超越许可范围开展负面清单内人工智能研发、提供活动。

第二十五条(负面清单许可条件) 申请负面清单内人工智能研发、提供许可,应当具备下列条件:

(一)在中华人民共和国境内依法设立的法人;

(二)主要负责人是中国公民;

(三)有与风险相适应的具备质量保障、安全保障、人类监督、合规管理等专业知识的专职人员;

(四)有健全的人工智能质量管理体系、网络数据安全管理制度、科技伦理审查制度;

(五)有安全可控的人工智能技术保障措施;

(六)有与风险相适应的人工智能应急处置机制;

(七)有与人工智能研发、提供相适应的场所、设施和资金;

(八)法律、行政法规的其他规定。

第二十六条(负面清单许可的申请) 人工智能研发者、提供者申请负面清单内人工智能产品研发、提供许可,应提交以下材料:

(一)申报书;

(二)法人资格、场所、资金等证明;

(三)主要负责人为中国公民的证明;

(四)质量保障、安全保障、人类监督、合规管理专职人员的资质情况;

(五)人工智能质量管理体系、网络数据安全制度、科技伦理审查制度、风险管理制度及执行情况;

(六)符合本法规定的技术文件目录和技术保障措施日志摘要;

(七)人工智能安全评估报告;

(八)法律法规规定的其他材料。

第二十七条(负面清单许可的审批) 国家人工智能主管机关受理负面清单内人工智能研发、提供许可申请后,在10个工作日内进行初步审查。

经初步审查,发现人工智能的研发者和提供者提交的申请材料不符合要求的,国家人工智能主管机关可以要求其补充或者更正。人工智能研发者、提供者无正当理由不补充或者更正的,该申请即被视为撤回。

初步审查材料齐全的,国家人工智能主管机关应当自受理申请之日起45日内审查完毕,作出许可或者不予许可的决定。予以许可的,向申请人颁发人工智能研发、提供许可证;不予许可的,应当书面通知申请人并说明理由。

45日内不能作出决定的,经国家人工智能主管机关负责人批准,可以延长十日,并应当将延长期限的理由告知申请人。

第二十八条(负面清单许可的撤销) 人工智能研发者、提供者在研发、提供活动中违反本法规定,发生重大安全事件、多次发生安全事件、多次受到行政处罚的,国家人工智能主管机关可以中止许可并责令限期改正;期限届满未改正,或中止许可后再次发生安全事件或受到行政处罚的,国家人工智能主管机关可以撤销许可。

第二十九条(负面清单许可的再次申请) 负面清单内人工智能研发、提供许可证应当载明使用许可的期限和范围。

超出许可范围,或技术改进、使用场景变更、用户群体变化等导致人工智能的风险发生变化的,负面清单内人工智能研发者、提供者应当再次申请研发、提供许可。

使用许可期限届满前六个月,负面清单内人工智能研发者、提供者可以申请更新研发、提供许可。

负面清单内人工智能研发者、提供者停止许可内人工智能研发、提供的,应当在停止之日起三个月内,向国家人工智能主管机关申请注销研发、提供许可。

第三十条(许可证公开) 负面清单内人工智能研发者、提供者应当在所提供的人工智能产品、服务的显著位置,注明许可证编号。

第三十一条(投诉、举报) 个人和组织发现违法从事负面清单内人工智能研发、提供活动的,有权向国家人工智能主管机关投诉、举报,国家人工智能主管机关应当及时核实、处理。

第三十二条(释疑机制) 个人和组织对负面清单内人工智能研发、提供活动有疑义的,有权请求国家人工智能主管机关予以说明,国家人工智能主管机关应当及时答复、处理。

第四章  人工智能研发者、提供者义务

第一节  一般规定

第三十三条(安全性义务) 人工智能研发者、提供者应当在将人工智能投入使用或投放市场前进行安全测试,防范网络数据人工智能安全风险,并保障人工智能全生命周期的安全、稳健运行,符合预期目的。

人工智能研发者和提供者应当及时发布安全最佳实践,引导使用者安全、正确地使用人工智能。

人工智能研发者、提供者按照本法规定的记录、技术文件等要求,保证可追溯性,在发生事故时可以及时并准确地追溯、定位问题,保障人工智能的安全性。

鼓励相关组织和个人向人工智能研发者、提供者通报其产品、服务存在的安全漏洞。

人工智能研发者、提供者应当按照相关规定履行安全漏洞管理义务,确保其安全漏洞得到及时修补和合理发布,并指导支持使用者采取防范措施。

第三十四条(补救和通知义务) 人工智能研发者、提供者应当加强风险监测,发现安全缺陷、漏洞等风险时,应当立即采取补救措施。

发生安全事件时,人工智能研发者应当立即采取处置措施,并通知人工智能提供者。人工智能提供者应当立即采取处置措施,按照国家有关规定及时告知使用者并向主管机关报告以下事项:

(一)安全事件的发生过程、影响范围;

(二)人工智能提供者已采取的补救措施和使用者可以采取的减轻危害的措施;

(三)人工智能提供者的联系方式。

人工智能提供者采取措施可以有效避免对使用者造成实质损害的,可以不通知使用者;国家人工智能主管机关认为可能造成危害的,有权要求其通知受影响的使用者。

发生本条第一款、第二款情况的,人工智能提供者应当及时通知人工智能研发者。人工智能研发者应当立即进行评估。如果评估发现研发阶段存在问题风险的,应当立即通知其他人工智能提供者,在问题风险解决前,暂停或下架人工智能产品、服务。其他人工智能提供者收到通知的,应当立即采取处置措施,管控产品、服务风险。

第三十五条(公开透明性义务) 人工智能提供者提供与自然人互动的人工智能服务,应当在使用者使用人工智能前,以简洁、清晰、易于理解的方式,告知与人工智能产品、服务进行交互的自然人,其正在与人工智能服务进行交互。但是,自然人从使用的场景中可以判断的除外。

提供深度合成服务的人工智能提供者还应当根据国家有关规定,对合成内容进行显著标识,向公众提示深度合成情况。

人工智能提供者应当在使用者使用人工智能前,以适当的方式告知使用者以下信息:

(一)产品、服务的基本原理、目的意图和主要运行机制;

(二)产品、服务的许可或备案公示信息;

(三)使用者享有的权利和救济渠道;

(四)法律、行政法规规定的其他信息。

人工智能提供者应当综合考虑产品、服务的场景、性质、受众和行业技术发展水平等因素,充分保障未成年人、老年人、残障人士等特殊群体可以理解上述信息。

人工智能研发者应当配合提供者履行上述义务。

第三十六条(可解释性义务) 人工智能产品、服务对个人权益有重大影响的,人工智能使用者有权要求提供者对产品、服务决策的过程、方式等作出解释,有权对不合理的解释进行投诉。提供者应当综合考虑风险、场景和行业技术发展水平等因素,对使用者的要求及时作出反馈。研发者应当配合提供者履行本条义务。

第三十七条(公平性义务) 人工智能研发者应当在训练数据处理和标注、算法模型设计研发和验证测试过程中,采取必要措施,有效防范有害的偏见和歧视。

人工智能提供者应当在提供产品、服务过程中,加强对输入数据、输出数据的管理,有效防范有害的偏见和歧视。

第三十八条(风险管理) 人工智能提供者综合考虑产品、服务的场景、性质、受众和行业技术发展水平等因素,建立健全和实施全生命周期风险管理制度。在产品、服务投入使用前以及使用过程中,人工智能提供者应当对人工智能的风险进行识别、评估,并采取合理必要措施防控风险。人工智能提供者应当保留风险识别、评估和管理的记录,保存期限不少于三年。

人工智能研发者应当对设计、数据收集训练、模型选择、测试验证等研发过程,建立并运行风险管理制度,并对人工智能的风险进行识别并采取合理必要的措施降低风险。人工智能研发者应当在保障商业秘密的前提下,配合提供风险评估和管理记录,支持人工智能提供者完成人工智能服务的风险管理义务。人工智能提供者应当保留风险评估和管理的记录,保存期限不少于三年。

第三十九条(安全评估) 人工智能研发者、提供者应当按照国家有关规定开展安全评估。

第四十条(科技伦理审查) 人工智能研发者、提供者应当按照国家有关规定开展科技伦理审查。

第四十一条(授权代表) 本法第二条第二款规定的中华人民共和国境外的人工智能研发者、提供者,应当在中华人民共和国境内设立专门机构或者指定代表,负责处理人工智能相关事务,并将有关机构的名称或者代表的姓名、联系方式等报送国家人工智能主管机关。

第二节 人工智能研发者义务

第四十二条(负面清单内人工智能研发者增强义务) 负面清单内人工智能研发者应当履行以下义务:

(一)制定并保存符合本法要求的技术文件,并配合提供者履行相关义务;

(二)在研发过程,制定并运行符合本法要求的质量管理体系,并配合提供者履行相关义务;

(三)在研发过程,进行安全评估,配合提供者履行相关义务;

(四)法律、行政法规规定的其他义务。

第四十三条(基础模型研发者特殊义务) 基础模型研发者应遵守以下规定:

(一)按照国家规定建立健全安全风险管理制度,及时有效预防、监测、处置对国家安全、公共利益或者个人、组织合法权益、经济秩序造成的风险;

(二)按照国家规定建立健全基础模型的模型管理和数据管理制度;

(三)遵循公开、公平、公正的原则,制定基础模型的使用规则,明确使用基础模型的研发者、提供者应当履行的义务,不得滥用市场支配地位;

(四)协助其他研发者、提供者履行相关义务;

(五)对严重违反本法规定的研发者、提供者,应当采取停止提供服务等必要措施;

(六)成立主要由外部成员组成的独立机构对基础模型的研发情况进行监督;每年发布社会责任报告,接受社会监督。

第三节  人工智能提供者义务

第四十四条(备案义务) 不在负面清单内的人工智能提供者,应当在提供产品、服务之日起十个工作日内,向主管机关备案以下信息:

(一)人工智能提供者的姓名或名称、联系方式;

(二)人工智能产品、服务的商标或名称、提供形式、应用领域、算法类型、安全自评估报告;

(三)算法备案拟公示内容;

(四)法律、行政法规规定的其他信息。

备案信息发生变更的,应当在变更之日起十个工作日内办理变更手续。

完成备案的人工智能提供者应当在其对外提供服务的网站、应用程序等的显著位置注明其备案编号。

第四十五条(备案流程) 主管机关收到备案材料后,材料齐全的,应当在三十个工作日内予以备案,发放备案编号并进行公示;材料不齐全的,应当在十个工作日内通知备案人补充材料。 

第四十六条(审计义务) 人工智能提供者应当至少每两年进行一次审计,核验输入数据、算法模型、输出数据等的合规性,对人工智能产品、服务活动是否遵守法律、行政法规的情况进行审查和评价。

第四十七条(内部管理制度) 人工智能提供者应当采取下列措施确保人工智能符合法律、行政法规的规定:

(一)制定内部数据安全、风险控制、质量管理等制度和相应的操作规程;

(二)保存提供人工智能产品、服务自动生成的日志;

(三)定期对从业人员进行教育和培训;

(四)采取相应的鲁棒性、抗攻击性等合规技术措施;

(五)法律、行政法规规定的其他措施。

第四十八条(终止机制) 人工智能提供者终止提供产品、服务的,应当采取以下妥善安排措施:

(一)提前三十个工作日公示终止方案、使用者权利等;

(二)自终止之日起三十个工作日内,删除使用者的个人信息;根据国家有关规定,对人工智能产品、服务提供过程中产生的数据、训练数据、算法模型作出必要处理;

(三)法律、行政法规规定的其他措施。

第四十九条(许可注销) 负面清单内人工智能提供者应当提前三十个工作日通知主管机关,说明有关情况,并在终止服务后将许可证交回原许可机关。

不在负面清单内人工智能提供者应当在终止服务之日起二十个工作日内办理注销备案手续。

第五十条(负面清单人工智能提供者增强义务) 负面清单内人工智能提供者,还应当履行如下义务:

(一)按照本法要求进行安全评估,确保安全、稳健;

(二)制定并保存符合本法要求的技术文件,以证明所提供的人工智能符合本法对负面清单内人工智能的要求;

(三)建立并运行符合本法要求的全生命周期质量管理体系;

(四)在人工智能产品、服务自主运行过程中,确保人类可以随时采取介入、接管等措施;

(五)法律、行政法规规定的其他义务。

第五章  人工智能综合治理机制

第五十一条(国家人工智能主管机关职责) 国家人工智能主管机关依法履行以下人工智能监管职责:

(一)开展人工智能伦理、安全教育与宣传,指导、监督人工智能开发、提供和使用;

(二)制定人工智能监管规则、指引,组织制定人工智能伦理、安全、管理等方面标准;

(三)组织人工智能技术监测、评估、审计工作,指导专业机构依法开展活动;

(四)建立人工智能风险监测预警机制,组织人工智能领域风险信息的获取、分析、研判、预警工作;

(五)建立人工智能安全事件应急处置机制;

(六)接受、处理与人工智能技术及产品开发、提供和使用有关的投诉、举报;

(七)调查、处理违法人工智能开发、提供和使用活动;

(八)法律、行政法规规定的其他职责。

第五十二条(安全审查制度) 从事人工智能研发、提供活动,影响或可能影响国家安全的,应当按照国家有关规定通过安全审查。

依法作出的安全审查决定为最终决定。

第五十三条(前置程序期限) 人工智能研发者、提供者、使用者依照本法及国家有关规定,就人工智能新技术新应用申报安全审查、备案或申请行政许可的,国家人工智能主管机关应当明确工作期限,并在期限内及时处理、作出答复。

第五十四条(约谈) 国家人工智能主管机关和地方各级人工智能主管机关在履行职责中,发现人工智能研发、提供活动存在较大风险或者发生安全事件的,可以按照规定的权限和程序对该人工智能研发者、提供者进行约谈,要求其采取下列措施:

(一)按照要求进行整改,消除隐患;

(二)对其研发、提供活动作出适当的解释,说明人工智能服务的开发、管理和运行的责任,为保障公平性、安全性和稳定性采取的措施,以及对利益相关方的影响等;

(三)委托专业机构对其人工智能研发、提供活动进行合规审计。

人工智能研发者、提供者承诺限期整改合规,且能够有效避免人工智能研发、提供或使用活动造成危害的,可以不暂停相关活动。国家人工智能主管机关认为可能造成危害的,可以责令暂停相关活动。

第五十五条(创新监管) 国家人工智能主管机关就人工智能产业研发、提供和使用活动中的轻微违法行为等制定依法不予行政处罚清单,通过批评教育、指导约谈、组织会商研讨等措施促进公民、法人和其他组织依法合规开展人工智能产业研发、提供和使用活动。

第五十六条(监管沙盒) 国家人工智能主管机关建立人工智能监管试验机制,就以下事项出台具体规定、指引:

(一)参与监管试验的条件;

(二)监管试验的运行机制;

(三)人工智能研发者、提供者的义务、责任减免机制。

第五十七条(执法机制) 国家人工智能主管机关应当加强专门队伍和专业技术建设,提升人工智能监管执法工作效率。

国家人工智能主管机关应当依法确定本系统行政执法程序,建立行政执法监督制度。

第五十八条(技术治理) 国家支持企业、科研机构等研究开发有关人工智能监测预警、安全评估、应急处置等技术,鼓励在人工智能领域应用监管科技、合规科技。

第五十九条(境外反制) 境外的组织、个人从事侵害中华人民共和国公民的合法权益,或者危害中华人民共和国国家安全、公共利益的人工智能研发、提供或使用活动的,国家人工智能主管机关可以将其列入负面清单,予以公告,并依法采取限制或禁止其在中华人民共和国境内研发、提供或使用人工智能等措施。

第六十条(对等措施) 任何国家或者地区在与人工智能有关的研发、投资、贸易等方面对中华人民共和国采取歧视性的禁止、限制或者其他类似措施的,中华人民共和国可以根据实际情况对该国家或者地区对等采取措施。

第六章 法律责任

第六十一条(行政处罚一般条款) 违反本法第三十四条至第五十一条研发、提供人工智能的,由国家人工智能主管机关责令改正,给予警告,没收违法所得,责令暂停或者终止提供产品、服务;拒不改正的,并处一百万元以下罚款;对直接负责的主管人员和其他直接责任人员处一万元以上十万元以下罚款。

有前款规定的违法行为,情节严重的,由国家人工智能主管机关责令改正,没收违法所得,并处五千万元以下或者上一年度营业额百分之四以下罚款,并可以责令暂停相关业务或者停业整顿、通报有关主管机关吊销相关业务许可或者吊销营业执照;对直接负责的主管人员和其他直接责任人员处十万元以上一百万元以下罚款。

第六十二条(行政罚款的裁量方式) 本法规定的罚款可以作为责令整改等措施的补充或替代,国家人工智能主管部门决定行政罚款金额时,遵循合法原则、过罚相当原则、公平公正原则、处罚和教育相结合原则、综合裁量原则,应当充分考虑以下因素:

(一)违法行为及后果的性质、严重性和持续时间、受影响的范围和损害程度;

(二)违法行为是故意还是过失;

(三)对违法行为是否采取了补救措施并减轻可造成的损失;

(四)是否依照本法规定通知国家人工智能主管部门;

(五)是否依照本法规定采取了合理有效的组织和技术措施管理人工智能的风险;

(六)是否遵守了人工智能和安全等相关标准或获得了相关认证;

(七)此前的违法行为;

(八)其他法律法规规定的加重或减轻处罚的因素。

第六十三条(违反许可行为) 违反本规定第三十二条规定,未经许可或超越许可范围研发、提供人工智能的,由国家人工智能主管部门依据职责责令停止相关活动,处上一年度营业额百分之四以下罚款。

第六十四条(违反备案行为) 人工智能提供者通过隐瞒有关情况、提供虚假材料等不正当手段取得备案的,由国家人工智能主管部门予以撤销备案,给予警告、通报批评;情节严重的,责令暂停信息更新,并处一万元以上十万元以下罚款。

人工智能提供者终止服务未按照本法第四十二条要求办理注销备案手续,或者发生严重违法情形受到责令关闭网站、吊销相关业务许可证或者吊销营业执照等行政处罚的,由国家人工智能主管部门予以注销备案。

第六十五条(信用惩戒) 有本法规定的违法行为的,依照有关法律、行政法规的规定记入信用档案,并予以公示。

第六十六条(民事侵权责任) 研发、提供的人工智能侵害个人权益造成损害,研发者、提供者未履行本法规定的义务,应当承担损害赔偿等侵权责任,但研发者和提供者能证明自己没有过错的除外。

前款规定的损害赔偿责任按照使用者或受影响的个人、组织因此受到的损失或者研发者、提供者因此获得的利益确定;使用者或受影响的个人、组织因此受到的损失和提供者因此获得的利益难以确定的,根据实际情况确定赔偿数额。

第六十七条(公益诉讼) 违反本法规定提供人工智能,侵害众多个人的权益的,人民检察院、法律规定的消费者组织和由国家人工智能主管部门确定的组织可以依法向人民法院提起诉讼。

第六十八条(治安管理处罚和刑事责任的衔接) 违反本法规定,构成违反治安管理行为的,依法给予治安处罚;构成犯罪的,依法追究刑事责任。

第六十九条(合规不起诉) 人工智能研发者、提供者违反本法规定,承担行政责任的,行为主体合规建设经评估符合有效性标准的,则列入轻微违法行为依法免予处罚。

违反本法规定,构成犯罪的,纳入企业合规改革试点工作,涉案企业合规建设经评估符合有效性标准的,人民检察院可以参考评估结论依法作出不批准逮捕、变更强制措施、不起诉的决定,提出从宽处罚的量刑建议,或者向有关主管机关提出从宽处罚、处分的检察意见。

第七十条(国家机关不履职的责任) 国家机关不履行本法规定的义务的,由其上级机关或者国家人工智能主管部门责令改正;对直接负责的主管人员和其他直接责任人员依法给予处分。

国家机关工作人员玩忽职守、滥用职权、徇私舞弊,尚不构成犯罪的,依法给予处分。

依法不构成行政执法过错的情形,不追究有关工作人员的行政执法责任。

第七章  附则

第七十一条(定义) 本法下列用语的含义:

(一) 人工智能,是指以一定自主程度运行,服务于特定的目标,能够通过预测、推荐或决策等方式影响物理或虚拟环境的自动化系统,包括数据、特征、模型、服务提供接口和所嵌入的终端设备等;

(二) 人工智能研发者,是指仅从事算法设计、训练数据标注、特征提取、模型训练和优化、测试部署等人工智能研究和开发活动的个人、组织;

(三) 人工智能提供者,是指出于商业目的提供人工智能或者为其提供相关技术支持的个人、组织,或者无论是否出于商业目的,面向公众提供人工智能或者为其提供相关技术支持的个人、组织;

(四) 人工智能使用者,是指依照人工智能的性能和用途对其加以利用的个人、组织。

第七十二条(施行日期)  本法自 年 月 日起施行。

第七十三条(负面清单公开制度) 国家人工智能主管机关应当不晚于本法实施之日前六个月公开人工智能负面清单,并在定期更新后及时公开。