Top Scholar Zhou Hanhua Illuminates 15+ Years of History Behind China’s Personal Information Protection Law


June 8, 2021


Professor Zhou Hanhua, vice president of the Institute of Law at the Chinese Academy of Social Sciences, is a prominent legal scholar with long experience developing proposals for Chinese legislation on personal information protection, open government information, and other issues. During the formulation of the Personal Information Protection Law (PIPL), currently in draft form, and the evolution of China’s privacy regulatory regime, Zhou has been a leading voice in public and scholarly debates. His experience with the PIPL, in fact, stretches back more than 15 years, to a prior effort to develop a proposal for the law. DigiChina spoke with Zhou about the process and history of developing personal information protection legislation, as well as the prospects for the new law, which analysts expect the National People’s Congress may pass in the coming months.

This interview was conducted in Mandarin Chinese and translated to English. It has been edited for length and clarity.

The history and origins of the Personal Information Protection Law

DigiChina: This is not the first time a Personal Information Protection Law has been drafted in China, and indeed you were part of the last effort. How did this legislative effort come together back in the first years of this century?

Professor Zhou Hanhua: China actually began the legislative process for personal information protection very early. In 2001, the National Informatization Leading Group was established at the highest decision-making level, led by then-Premier Zhu Rongji, along with the establishment of the Informatization Office and an Expert Advisory Committee under the State Council. The effort to improve the legal system for informatization was embodied in the passage of several related laws. For example, in 2004, the Electronic Signature Law was passed to address the efficacy of electronic signatures. In 2008, realizing that a large amount of information resides within the government and that the transparency and sharing of government information is therefore critical to promoting informatization, we drafted the Open Government Information Regulations. 

Personal information protection was deemed another critical part of the informatization process. Accordingly, in 2003, the team from the Chinese Academy of Social Sciences was commissioned by the State Council Informatization Office to form a research group focused on personal information protection. We conducted extensive research based on the legislative experiences of various other countries and China’s existing conditions, finished an experts’ suggested draft of the PIPL in 2005, and published a three-volume book series in 2006 presenting our research results. 

Protecting personal information is essential. Domestically, during the process of opening up government information, it was crucial to effectively protect personal information and curb the infringement of personal rights. Internationally, the Data Protection Directive enacted by the European Union in 1995 had been influential, adding to the importance of bringing personal information protection law to China’s legislative agenda. 

The 2005 scholars’ draft, as it happened, did not immediately advance in the legislative process. What took place between that time and today that led to the draft currently under consideration at the National People’s Congress (NPC)? 

I think there are two reasons it took such a long time before China released the official draft of the PIPL last year. First, during China’s government restructuring in 2008, the informatization task was handed over from the State Council Informatization Office to a bureau under the Ministry of Industry and Information Technology, resulting in less impetus than before. Second, the imperative for personal information legislation is closely linked to the development of information technology. Before 2010, technologies like cloud computing, big data, and the Internet of Things had only just emerged and did not yet raise as many personal information issues as today, when these technologies have penetrated people’s daily lives and caused significant privacy intrusions.

In reality, China’s regulation of personal information protection did not stop progressing over the past several years. For example, China passed a rough equivalent of the U.S. Fair Credit Reporting Act, the Regulation on Credit Industry Administration, to protect personal credit information. In 2009, the Criminal Law was amended to include crimes relating to personal information. The NPC Standing Committee made a specific decision to improve cyber information protection in 2012. The Law on the Protection of Consumer Rights and Interests was amended in 2013 to regulate the protection of consumers’ personal information rights. Many other laws, like the Cybersecurity Law and the Civil Code, also touched upon personal information. However, these laws were either confined to specific sectors or too general to implement. The PIPL will be the first law in China that addresses personal information protection in a comprehensive and systematic way and covers the full lifecycle of personal information. 

As an expert involved in the legislative process both this time and 15 years ago, how do you view the similarities and differences between the two efforts?

The legislative process this time shares many similarities with that of 15 years ago. First, it’s the same top-down style of Chinese legislation, in which the high-level central government leads and pushes progress. Second, the legislation directly responds to real problems. For example, several years ago a girl named Xu Yuyu was admitted to college but then lost all her tuition savings to fraudsters who abused her personal information. The girl, who came from a poor family, could not bear the loss and died from the shock, stunning the entire society. Socially provocative cases like this gave rise to growing public awareness of the importance of personal information protection. Third, government efforts together with the expert team continue to play an important role in this rapidly developing field. 

Of course, there are many differences as well. The most prominent, I believe, is the enormous challenge driven by the fast-growing information technology industry. On one hand, the development of big data has brought a profound revolution to the industry, and many services provided by Internet companies like Alibaba, Tencent, and Baidu have now become infrastructure. On the other hand, the misuse of personal information has grown to an outrageous level, urgently demanding legislation. The stride into the digital era brings unprecedented risks and challenges to individual rights and personal information protection.

Setting incentives compatible with data protection

In a paper you published in 2018, you argue that the “principle of incentive compatibility” (激励相容) should be the core of an effective system on personal information protection. How should we understand this principle, and would you say it is embodied in the current draft PIPL?

A successful privacy law should protect personal rights without hindering the informatization of industry. One of the most important lessons from China’s 40 years of reform and opening-up is to maintain a balance between development and regulation. Since the misuse of personal information can cause irreversible damage, just like environmental pollution, it is necessary to synchronize development and regulation through the “principle of incentive compatibility.” External deterrence imposed by law is certainly crucial, but companies should also have internal incentives to protect personal information proactively and to tie this matter to their own development and reputation. Otherwise, if the law is too rigid, companies will either circumvent it or miss the growth opportunity. 

The draft PIPL’s provisions embody these ideas. For example, the draft law requires compliance auditing but gives flexibility in terms of compliance mechanisms. Companies can decide on the timing and focus of compliance audits on their own, according to their own circumstances. Only when regulators deem certain activities inappropriate or highly risky will more stringent audit requirements be imposed, for example, audits conducted by outside institutions. Another example is privacy risk assessment. The law does not specify how the assessment should be done; it only lists the applicable situations and required components, such as cross-border data transfers or the processing of sensitive personal information. In addition, the second draft PIPL adds a “gatekeeper” rule that requires gatekeeper companies [i.e., those providing platforms that enable other actors to set up their operations] to establish independent supervision bodies and publish regular privacy reports, but it also allows a great deal of discretion. The purpose of all these designs is to give companies incentives and discretion and to mobilize them to act proactively.

The principle of incentive compatibility tends to work well for Chinese companies in practice. Companies with great financial and technological strength, like Alibaba, are able to develop a comprehensive set of practices for handling personal information appropriately. Many of their experiences and procedures have been incorporated into national standards. Now, more and more Chinese companies recognize personal information protection as a core area of competitiveness and aspire to make their practices part of their brand’s offering. 

Data localization and a ‘China model’?

In your scholarship from the earlier PIPL effort, you argued that China’s personal information protection model should be different from a European or U.S. model. Many in the international community have concerns about a “China model” they see as tied to data localization requirements. Do you think there is a “China model” in terms of personal information legislation? And what are your thoughts on data localization?

This is a complicated question, as the international community has varied perspectives on problems such as data localization and the legal model of personal information protection. For example, some experts think the “European Union model” was developed based on the Fair Information Practice Principles that originated in the United States. I can’t say for sure that China has established a “China model” on personal information regulation, but China’s legislation in this area definitely possesses Chinese characteristics and accommodates China’s conditions. 

For example, in the draft PIPL, the design of the law enforcement agencies is different from that of the United States and the EU, and the rules on cross-border data transfers differ from the EU’s adequacy-decision approach and have Chinese characteristics as well. On the other hand, the draft PIPL is also in line with global trends in many respects. Some provisions resemble those in the GDPR, just as the California Consumer Privacy Act resembles the GDPR in terms of data subject rights. As every country copes with the common challenges of the information era, it is reasonable for these regulatory frameworks to show convergence. I think it is impossible for a “China model” to be completely distinguished from the models of other countries. 

As for data localization, I believe the fundamental problem now is the lack of international mutual trust. Many countries have established rules based on the logic of data localization. For example, the EU invalidated the EU–U.S. “Privacy Shield” last year out of privacy concerns. The United States has also accused Chinese companies like Huawei, TikTok, and DJI of misusing U.S. citizens’ personal information. Despite the lack of evidence, these companies were required to store data within the United States to avoid being sanctioned by the U.S. government. I believe the unfair treatment of Chinese companies is the most conspicuous globally, and the underlying issue amounts to a U.S. version of data localization. Therefore, a fundamental solution requires building mutual trust globally and avoiding the balkanization of digital products through an international arrangement on the flow of information, just as the world has done for other factors of production such as labor and capital. 

Transparency and fairness in automated decision-making

The draft PIPL has a provision requiring transparency and fairness in automated decision-making. This is not a simple task, of course, and the EU recently published a lengthy AI regulation proposal. Do you think China will also enact more detailed regulations on AI? 

Over the past several years, there have been numerous cases in which AI algorithms provoked controversy, for example, price discrimination and the algorithm-imposed predicaments of delivery workers. The rules in this draft respond to these social problems and are consistent with general international criteria for evaluating algorithms: interpretability and accountability. They need to be modest, in a way, to regulate without impeding new technologies, since the entire new economy is built on big data and algorithms. 

Personalized service has been the core secret of tech giants’ success, breaking the limitations of inflexible traditional business models. However, we also recognize the difficulty of setting out rules to regulate AI. Most legal researchers lack technical expertise in these fields, so it is hard to decide in detail how to evaluate the interpretability, accountability, and fairness of algorithms like deep learning and reinforcement learning. 

Law can’t be too rigid in these advanced fields. Therefore, the draft PIPL only sets principles, and its implementation will be at the discretion of the relevant supervisory departments, as well as data processors* themselves, to explore the benefits of algorithmic personalization without crossing the bottom line of the law. There surely should be more detailed AI rules in the future, but it is hard to say whether they should first be shaped by practical exploration or directly escalated to the level of an AI “law,” as the EU is attempting now. All options should be explored. 

Enforcement of the PIPL and potential conflicts with other Chinese laws

The new draft PIPL would pose significant challenges for authorities charged with enforcing it. Do you think the legislation recognizes this difficulty, and how are government offices preparing?

We need to consider this question in a broader context. Law enforcement has always been a great challenge for China. It took China only 40 years to build a relatively complete socialist legal system, with law enforcement playing an important role. Although we aim to make law enforcement strict, standardized, fair, and civilized, many problems still exist, including insufficient resources, a lack of enforcement capability, and brutality and violence in enforcement. Both inaction and abuse of power exist in law enforcement. 

Returning to personal information protection: China’s law enforcement departments have in fact achieved good results and accumulated considerable experience in recent years. For example, the public security departments have been carrying out dedicated operations every year, such as actions against telecommunications fraud that exploits personal information. The State Administration for Market Regulation has also been increasingly active in protecting consumer data, as has the Cyberspace Administration of China, the lead agency for personal information protection. In recent years, China has also been seriously cracking down on the mishandling of personal information by mobile apps. 

Of course, the adoption of this new overarching privacy law will bring a new level of difficulties and challenges to law enforcement. It is important to boost the resources, capabilities, and professionalism of law enforcement and standardize their procedures to ensure law enforcement itself is compliant with law. 

Many other laws, such as the Cybersecurity Law and the Civil Code, also have quite a few rules regarding personal information. Do you think that may cause potential conflicts or confusion in compliance by personal information processors* and law enforcement?

The problem of potential conflicts among different laws is not unique to personal information protection. The more laws are passed, the more likely conflicts become. As for the Civil Code, its enactment preceded that of the PIPL, and its legislative process already took many other relevant laws, including the Cybersecurity Law, into account to avoid potential conflicts. For example, the Civil Code changed the concept of “personal information controller” in its original draft to “personal information processor”* in the final legal text, consistent with the current draft PIPL. However, it is hard to guarantee the elimination of all potential conflicts. For example, the Civil Code places personal information protection under the umbrella of “personality rights,” but the draft PIPL does not seem to treat individual rights in personal information exactly the same as the “personality rights” in the Civil Code. Also, the Civil Code protects “private information,” a concept not included in the draft PIPL, which instead addresses a related but different concept, “sensitive personal information.” Harmonizing these inconsistencies will bring many challenges in the future. Whether they will definitely cause conflicts is hard to say for now, but they certainly provide rich material for legal researchers like me. 

* Editor’s note: DigiChina generally translates the term 个人信息处理者 gèrén xìnxī chǔlǐzhě as “personal information handler,” a relatively literal translation chosen to emphasize the unique definition of the term in the context of Chinese policy. “Personal information processor” is a widely used and more conventional translation for the same term, but the PIPL defines it significantly differently from the “processor” concepts in comparative privacy law like the GDPR. We adopt “processor” here to match Prof. Zhou’s English usage.

Yehan Huang is a student editor of DigiChina and a master’s student in Data Science at Stanford’s Department of Statistics. Mingli Shi is a regular contributor to DigiChina and a privacy lawyer.