Translation: Excerpts from China’s ‘White Paper on Artificial Intelligence Standardization’

Published

June 20, 2018

This translation by Jeffrey Ding, edited by Paul Triolo, covers some of the most interesting parts of the Standards Administration of China’s 2018 White Paper on Artificial Intelligence Standardization, a joint effort by more than 30 academic and industry organizations overseen by the China Electronics Standardization Institute. Ding, Triolo, and Samm Sacks describe the importance of this white paper and other Chinese government efforts to influence global AI development and policy formulation in their companion piece, “Chinese Interests Take a Big Seat at the AI Governance Table.” –Ed.

3.3 Safety, Ethics, and Privacy Issues

Historical experience demonstrates that new technologies can often improve productivity and promote societal progress. But at the same time, as artificial intelligence (AI) is still in the early phase of development, the policies, laws, and standards for safety, ethics, and privacy in this area are worthy of attention. In the case of AI technology, issues of safety, ethics, and privacy directly affect people’s trust in AI technology as they interact with AI tools. The public must trust that the security benefits that AI technologies can bring to humans far outweigh the harms; only then will it be possible to develop AI. In order to ensure safety, AI technology itself, and its applications in various fields, should follow the ethical principles agreed upon by human society. Particular attention should be paid to privacy issues, because the development of AI is occurring as more and more personal data are being recorded and analyzed; in the midst of this process, protecting personal privacy is an important condition for increasing social trust. Overall, establishing policies, laws, and a standardized environment in which AI technologies benefit society and protect the public interest is an important prerequisite for the continuous and healthy development of AI technology. For this reason, this chapter focuses on discussing safety, ethics, and privacy policy and legal issues related to AI technology.

3.3.1 Artificial Intelligence Safety Issues

The greatest feature of AI is its ability to automate operations based on knowledge, without human intervention, and to do so in a self-correcting manner. Once an AI system is started, its decision making no longer requires further instructions from a controller. Such decisions may have unforeseen consequences for humanity. Designers and manufacturers involved in the development of AI products may not be able to accurately predict the risks inherent in a product. Therefore, AI safety issues cannot be overlooked.

Unlike traditional public safety concerns (such as nuclear technology), which require strong infrastructure as a base, AI, relying on computers and the Internet, can pose threats to safety without the need for expensive infrastructure. People who have grasped the relevant skills can make AI products anytime and anywhere. The operation of AI programs is not publicly trackable, and their diffusion path and speed are also difficult to control accurately. In the absence of access to existing traditional control technology, another approach must be found to control AI technology. In other words, regulators must consider deeper ethical issues and ensure that AI technologies and their applications meet ethical requirements in order to truly achieve public safety.

Because the realization of the goals of AI technology is influenced by the technology’s initial settings, it is necessary to ensure that the design goals of AI are consistent with the interests, ethics, and morals of most humans, so that even when faced with different environments in the decision-making process, AI algorithms can make relatively safe decisions.

From the perspective of AI’s technical applications, it is necessary to fully consider questions of liability and fault in the process of AI development and deployment. Setting out the specific rights and obligations of AI technology developers, product manufacturers or service providers, and end users will achieve the goal of implementing security assurance requirements.

In addition, regulations on AI management in the various countries of the world are not yet uniform, and relevant standards remain blank; participants and players in the same AI technology may come from different countries, and these countries have not yet signed a shared contract for AI. To this end, China should strengthen international cooperation and promote the formulation of a set of universal regulatory principles and standards to ensure the safety of AI technology.

3.3.2 Ethical Issues in Artificial Intelligence

AI is an extension of human intelligence, and it is also an extension of the human value system. In its development, AI should properly take into account the ethical values of human beings. Setting the ethical requirements of AI technology relies on deep thinking and broad consensus among the community and the public on AI ethics, as well as abiding by some consensus principles:

First is the principle of human interests: the ultimate goal of AI should be to benefit human welfare. This principle reflects respect for human rights, maximizing benefits to humankind and the natural environment, and reducing technological risks and negative impacts on society. Under this principle, policies and laws should devote themselves to constructing the external social environment for the development of AI, promote awareness education on AI ethics and safety for individuals in society, and guard society against the risk of abuse of AI technologies. In addition, we should also be wary of AI systems making ethically biased decisions. For example, if universities use machine learning algorithms to assess admissions, and the historical admissions data used for training (intentionally or not) reflect some bias from previous admissions procedures (such as gender discrimination), then machine learning may exacerbate these biases during repeated calculations, creating a vicious cycle. If not corrected, biases will persist in society in this way.
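
To make the admissions example concrete, here is a minimal synthetic simulation of the feedback loop just described: a rule learned from biased historical decisions is applied to new applicants, and those decisions become the next round’s training data. This sketch is not from the white paper; all distributions and thresholds are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_applicants(n=5000):
    # Two groups with identical ability distributions.
    group = rng.integers(0, 2, size=n)
    ability = rng.normal(size=n)
    return group, ability

# Biased history: group 1 was held to a stricter admission bar than group 0.
group, ability = sample_applicants()
admitted = ability > np.where(group == 1, 0.5, 0.0)

for rnd in range(3):
    # "Training": recover each group's apparent admission bar from the labeled
    # examples (a stand-in for any classifier fit to past decisions).
    learned_bar = {g: ability[(group == g) & admitted].min() for g in (0, 1)}

    # Score fresh applicants with the learned rule; these decisions then
    # become the next round's training data -- the vicious cycle.
    group, ability = sample_applicants()
    admitted = ability > np.array([learned_bar[g] for g in group])

    rates = {g: admitted[group == g].mean() for g in (0, 1)}
    print(f"round {rnd}: admit rate group 0 = {rates[0]:.2f}, group 1 = {rates[1]:.2f}")
```

Without an explicit correction, the learned bars reproduce the historical gap in every round, even though the two groups are identically able.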

The second is the principle of liability: a clear system of liability should be established with respect to both technology development and application, so as to be able to hold AI developers or departments accountable at the level of technology, and to establish a reasonable system of liability and compensation at the application level. Under the principle of liability, technology development should follow the principle of transparency, and technology applications should follow the principle of consistency of rights and responsibilities.

Among these principles, the principle of transparency requires an understanding of the system’s operating principles in order to predict future developments. That is, human beings should know how and why AI makes a specific decision, which is crucial for the allocation of liability. For example, for an artificial neural network, an important topic in AI, one needs to know why it produces a specific output result. In addition, transparency about data sources is equally important: even when working with a data set that has no obvious problems, it is still possible to encounter bias hidden in the data. The principle of transparency also requires attention, when developing technologies, to the hazards associated with the collaboration of multiple AI systems.
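
As a tiny illustration of what “knowing why a specific output was produced” can mean in practice, the sketch below probes a black-box function by finite differences to measure how much each input moves the output. This is only a schematic stand-in; real explainability methods for neural networks are far more sophisticated.

```python
def attribute(model, x, eps=1e-4):
    """Per-feature sensitivity of model(x): how much does each input
    move the output? A crude, illustrative stand-in for explainability."""
    base = model(x)
    sensitivities = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        sensitivities.append((model(bumped) - base) / eps)
    return sensitivities

# A toy "opaque" model standing in for a neural network.
model = lambda x: 3.0 * x[0] - 2.0 * x[1] + x[0] * x[1]
print(attribute(model, [1.0, 2.0]))  # ~[5.0, -1.0]
```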

Third is the principle of “consistency of rights and responsibilities”: future policies and laws should work to clearly define the following. On the one hand, necessary business data should be properly recorded, the corresponding algorithms should be subject to supervision, and commercial applications should be subject to reasonable review; on the other hand, commercial entities can still use reasonable intellectual property rights or trade secrets to protect the core parameters of the enterprise. In the field of AI applications, the principle of “consistency of rights and responsibilities” has not yet been fully implemented by the business community and the government in the practice of ethics. This is mainly because engineers and design teams tend to ignore ethical issues in the development and production of AI products and services. In addition, the entire AI industry is not yet accustomed to a workflow that takes the needs of various stakeholders into consideration, and in AI-related industries the protection of trade secrets is not yet balanced with transparency.

3.3.3 Artificial Intelligence Privacy Issues

Recent developments in AI are based on the application of large amounts of data in information technology, which inevitably raises the question of the reasonable use of personal information (PI). There should therefore be a clear and operable definition of privacy. The development of AI techniques also makes it easier to violate personal privacy, so the relevant laws and standards should provide more powerful protections for it. Existing controls on private information cover two types of processes: collection of PI without the user’s explicit consent, and collection under terms to which the user has clearly consented. The development of AI technology has posed new challenges to this regulatory framework, as the scope of PI collection agreed to by users is no longer clearly defined. Using AI technology, it is easy to deduce aspects of privacy that citizens are unwilling to disclose, such as deriving private information from public data, or deriving from one person’s PI (such as online behavior, relationships, etc.) information about other people (such as friends, relatives, and colleagues). Such information goes beyond the scope of the PI that the individual initially agreed to disclose.
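
A minimal sketch of the inference risk just described, on synthetic data: a model trained on users who chose to disclose an attribute can predict that attribute for users who never disclosed it, from their public behavior alone. All data and correlations here are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# A sensitive attribute the individual may not wish to reveal.
private_attr = rng.integers(0, 2, size=n)

# "Public" signals (e.g., pages followed, posting times) correlated with it.
public_behavior = rng.normal(size=(n, 5)) + 0.8 * private_attr[:, None]

# Only a minority of users disclosed the attribute; they form the training set.
disclosed = rng.random(n) < 0.3
model = LogisticRegression().fit(public_behavior[disclosed],
                                 private_attr[disclosed])

# The attribute of the non-disclosing majority is now largely inferable.
accuracy = model.score(public_behavior[~disclosed], private_attr[~disclosed])
print(f"inferred undisclosed attribute with accuracy {accuracy:.2f}")
```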

In addition, the development of AI technology makes it more convenient for the government to collect and use citizens’ personal data. Large amounts of personal data can help government departments better understand the status of the people they serve and guarantee the availability and quality of personalized services. However, it follows that the risks and potential harms of improper use of personal data by government departments and government workers should be given sufficient attention.

The acquisition of personal data, and informed consent to its use, should be redefined in the context of AI. First, the relevant policies, laws, and standards should directly regulate the collection and use of data; merely obtaining the consent of the data owner is not enough. Second, designers and developers should be provided with standard procedures, practical and implementable and adaptable to different use cases, for protecting the privacy of data sources. Third, we should begin regulating uses of AI that could derive information exceeding what citizens initially consented to disclose. Finally, policies, laws, and standards should extend the protection of personal data management, encourage the development of relevant technologies, and explore the use of algorithmic tools as agents of individuals in the digital and real worlds. This type of approach allows control and use to coexist, because the “algorithmic agent” can, depending on the situation, establish different use permissions while managing the individual’s consent and refusal to share information.
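
To illustrate the “algorithmic agent” idea, here is a minimal sketch of an object that holds an individual’s per-context sharing rules and answers data requests on their behalf. The contexts and field names are hypothetical, and a real agent would need authentication, auditing, and far richer policies.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentAgent:
    owner: str
    # Map each context to the set of fields the owner permits there.
    permissions: dict[str, set[str]] = field(default_factory=dict)

    def grant(self, context: str, fields: set[str]) -> None:
        self.permissions.setdefault(context, set()).update(fields)

    def revoke(self, context: str) -> None:
        self.permissions.pop(context, None)

    def request(self, context: str, fields: set[str]) -> dict[str, bool]:
        """Answer a data request: which fields may be shared in this context?"""
        allowed = self.permissions.get(context, set())
        return {f: f in allowed for f in sorted(fields)}

# Usage: different permissions coexist for different situations.
agent = ConsentAgent("alice")
agent.grant("medical", {"age", "conditions"})
agent.grant("retail", {"age"})
print(agent.request("retail", {"age", "conditions"}))
# {'age': True, 'conditions': False}
```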

The issues of safety, ethics, and privacy covered in this section are challenges to the development of AI. Safety is a prerequisite for sustainable technology, yet the development of technology also poses risks to social trust. How to increase social trust, ensure that the development of technology follows ethical requirements, and, especially, guarantee that privacy will not be violated is an urgent problem to be solved. To this end, there is a need to develop sound policies, laws, and standards and to cooperate within the international community. When formulating policies, laws, and standards, we should cast off superficial press speculation and advertising-style hype. Such policies must also promote a deeper understanding of AI products and focus on the great benefits this new technology brings to society as well as its great challenges. As an important member of the international community, China should take on the great responsibility of ensuring that the application of AI stays on the right path and that its healthy development rests on sound foundations.

3.4 The Important Role of Artificial Intelligence Standardization

Currently, economic globalization and market internationalization have further deepened. Standards, as the main technical basis for economic and social activities, have become an important indicator for measuring a country’s or region’s level of technological development; they are the basic guidelines for products entering the market and a concrete manifestation of enterprises’ market competitiveness. Standardization plays a fundamental, supportive, and leading role in AI and its industrial development. It is both a key starting point for promoting industrial innovation and development and a focal point of industrial competition. The advancement and perfection of AI standards bear on the healthy development of the industry and the competitiveness of its products in the international market.

Developed countries and regions such as the United States, the European Union, and Japan attach great importance to the standardization of AI. The “National Artificial Intelligence Research and Development Strategic Plan” released by the United States, the “Human Brain Project” released by the European Union, and the “Artificial Intelligence/Big Data/Internet of Things/Cybersecurity Integrated Project” [AIP Project] implemented by Japan all put forward a series of proposals focusing on core technologies, top talent, standards and specifications, and other means of strengthening deployment [of AI], seeking to seize the initiative in a new round of scientific and technological development.

China attaches great importance to the standardization of AI. The State Council’s “New Generation Artificial Intelligence Development Plan” (AIDP) identifies AI standardization as an important supporting guarantee and proposes “strengthening the AI standards framework system. Adhere to the principles of security, availability, interoperability, and traceability; gradually establish and improve technical standards for AI covering basic commonalities, interoperability, industrial applications, cybersecurity, and privacy protection. Speed up the promotion of relevant standards development by industry associations in application sectors such as autonomous driving and service robots.” In its “Three-Year Action Plan for Promoting Development of a New Generation Artificial Intelligence Industry (2018-2020),” the Ministry of Industry and Information Technology pointed out the need to establish a standards and specifications system for the AI industry, to establish and improve technical standards for common foundations, interoperability, safety and privacy, and industrial applications, and at the same time to build evaluation systems for AI products.

Although China has a good foundation in the field of AI, with breakthroughs in core technologies such as speech recognition, visual recognition, and Chinese-language information processing, and a huge market environment for applications, the overall level of development still lags behind that of developed countries. In terms of core algorithms, key equipment, high-end chips, and major products and systems, the gap is relatively large. The infrastructure, policies, regulations, and standards systems suited to the development of AI are in urgent need of improvement.

To sum up, more attention should be paid to the important leading role that AI standardization plays in promoting technological innovation and supporting industrial development:

(1) Standardization work is conducive to speeding up AI technology innovation and the commercialization of research findings. At this stage, AI technology is developing rapidly, and products and applications that can be scaled up and commercialized are successively appearing on the market; standardized methods are needed to concretize technological achievements and promote rapid innovation.

(2) Standardization work helps to improve the quality of AI products and services. For facial recognition systems, smart speakers, service robots, and other products appearing on the market, uneven product quality calls for unified standards and specifications, and product and service quality can then be improved through conformity testing and assessment.

(3) Standardization helps to effectively protect the safety of users. For example, in the field of automated driving, difficult ethical questions such as “the trolley problem,” and problems such as the compromise of user privacy through Apple mobile phone fingerprint data, have aroused widespread public concern. How to protect the rights and interests of users is a difficult and important issue. This requires establishing people-oriented principles and developing relevant security standards and norms to ensure that intelligent systems comply with and serve human ethics and to ensure information security.

(4) Standardization helps create a fair and open AI industry environment. Currently, industry giants use methods such as open-source algorithms and platform interface binding to create their own deep learning frameworks and other ecosystems, which makes user data more difficult to transfer. Unified standards are required to achieve interoperability and coordination between manufacturers, prevent industry monopolies and user lock-in, and foster a virtuous industrial environment.
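
As a schematic of what a unified interoperability standard buys, the sketch below serializes a model to an invented vendor-neutral format that any conforming framework could read back; the format name and functions are hypothetical, though real efforts in this direction include exchange formats such as ONNX.

```python
import json

def export_linear_model(weights, bias):
    """Serialize a linear model to a hypothetical vendor-neutral format."""
    return json.dumps({"format": "open-model/1.0", "op": "linear",
                       "weights": list(weights), "bias": bias})

def import_linear_model(blob):
    """Any conforming framework can reconstruct the model from the blob."""
    spec = json.loads(blob)
    assert spec["format"] == "open-model/1.0" and spec["op"] == "linear"
    return lambda x: sum(w * xi for w, xi in zip(spec["weights"], x)) + spec["bias"]

# A model trained in vendor A's framework runs unchanged in vendor B's.
blob = export_linear_model([0.5, -1.2], bias=0.1)
predict = import_linear_model(blob)
print(predict([1.0, 2.0]))  # ~ -1.8
```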