The Cyberspace Administration of China on Friday released pioneering new draft regulations on algorithmic recommendation systems in online services. DigiChina has translated these draft "Internet Information Service Algorithmic Recommendation Management Provisions" in full.
To better understand the significance of these draft rules, we asked three specialists to contribute their views of the Provisions as drafted and their broader context in Chinese and international technology regulation. –Ed.
Director of Strategy at the Center for Security and Emerging Technology, Georgetown University
Recommendation algorithms are simultaneously one of the most widespread types of AI in use today, and one of the least-discussed in policy circles. By issuing regulations specifically for recommender systems, China is breaking new regulatory ground.
Much of CAC’s draft is common sense stuff: Providers of recommender algorithms should comply with the law, keep user data secure, monitor their systems, and so on. Other elements are more specific to China’s controlled information environment, such as directing providers to “uphold mainstream value orientations” and disseminate “positive energy.”
In my view, the most interesting sections are those that introduce brand-new measures specifically for recommender algorithms. In many cases, it’s unclear how these will apply in practice. Take, for instance, the short video app Douyin. Like its international counterpart, TikTok, Douyin is famous for its uniquely powerful recommendation algorithm, which serves video after video to the user, optimizing to keep them on the app. How does this fit with Article 8 of the new draft, which prohibits apps from engrossing or addicting their users?
And what about the provisions that aim to give users more control over what is recommended to them? Article 15 says that consumers must be able to see, edit, and delete user tags, but it’s not clear what that includes: Explicit tags are only a small part of how the Douyin algorithm decides what to show a given user. More complex factors—such as which exact videos they watch and for how long, which accounts they follow, which links they click, and so on—are at least as important. It’s unclear how Douyin could display that data concisely to a user, even if it wanted to. (TikTok already allows users to download all data the app has on them, which in practice means a giant spreadsheet with thousands of rows of video IDs, watch times, and so on.)
The draft provisions do state that the user has the right to an explanation of algorithms that seriously affect them, but this will be challenging to enforce in practice, given the immature state of algorithmic explainability (as explored in the long, unsettled debate over a “right to explanation” in the context of Europe’s GDPR). Most likely, the only option a user would have on a service like Douyin is to turn off personalization entirely. It’s hard to imagine many consumers going for this, given how central the recommendation function is to enjoying the app in the first place.
All of which is to say: it will be fascinating to watch how these first-of-their-kind regulations affect recommender algorithms in practice. If elements of CAC’s approach are successful, they are likely to be adopted more widely.
Senior Editor, DigiChina, and University Lecturer, Modern China Studies, University of Leiden
These new draft regulations on algorithms are noteworthy in several respects. First, they apply only to “Internet information services,” an umbrella term that covers everything from social media and e-commerce to ride-hailing services. They are not, however, intended to address the use of algorithms by government bodies. As such, they echo the trend in recent legislation, including the Personal Information Protection Law, toward a greater focus on consumer and employee rights in the civil law sphere, while making sure not to create strong enforceable rights for citizens in their relationship with the state. They also respond to growing concerns about privacy and the possible abuse of algorithmic decisions to disadvantage consumers (Art. 18). The PIPL similarly contains provisions banning unreasonable differential treatment through algorithmic pricing decisions. A final concern is employee rights. In recent months, the plight of food delivery drivers has gained prominence in Chinese media, and these regulations address the dispatching algorithms structuring their activities (Art. 17).
Second, they constitute the first comprehensive effort by the CAC to deal with an aspect of content control that has, thus far, not been addressed explicitly. In terms of online content control in China, we are well aware of censorship (the removal of information from public view) and propaganda (officially controlled content). These regulations, however, go further, addressing the presentation and prioritization of content. This is, of course, necessitated by the fact that the Internet has made information so abundant that selection now plays a far greater role. Political content is, however, not the only issue at stake, even if the regulations include special provisions on “algorithms with public opinion properties or social mobilization capabilities.” More broadly, the rules are a reflection of the CCP’s paternalistic stance with regard to the morality of online services, which is largely absent in Europe or the United States.
Third, these regulations attempt to pierce the corporate veil and impose a regulatory system for algorithms, classified by type of application and level of impact, similar to the Multi-Level Protection System already in place for critical networks and infrastructure and the “categorized and graded protection” approach embedded in the Data Security Law, which takes effect next week. The importance of algorithms would be measured by the extent to which they may have political consequences, as well as the number of users, the sensitivity of data used, and the degree to which they impact users’ activities. Public opinion-related algorithms would need to be registered with the CAC or its provincial-level subordinates.
Lastly, under these regulations, the maximum punishment for ordinary (i.e., non-criminal) infringements would be a fine of 30,000 RMB ($4,600 USD), which seems a minimal amount for online giants with billions in turnover. This is one area where Chinese legislation has lagged far behind the country’s rapid development: Since 1996, Chinese law has mandated that ministries cannot impose fines greater than this amount for business-related infringements of ministry-level rules (as opposed to NPC-passed legislation and State Council regulations). Obviously, 30,000 RMB was worth a lot more back then. However, the Administrative Punishment Law has recently been revised, and it is likely that these amounts will be adjusted in the foreseeable future.
Senior Fellow, the Paulson Institute
The new draft regulations on recommendation algorithms fit with the pattern seen over the past year of putting in place either new regulatory frameworks around large technology platforms and specific business practices, or finally fleshing out rules around data collection, storage, and use that have been in the works for nearly five years. Clearly, Beijing now desires to set the rules of the digital economy in China, and perhaps eventually beyond. Chinese regulators are now feeling their oats across the breadth of the digital economy. They are venturing into areas, such as recommendation algorithms, that were not previously considered when the initial pillars of the current regulatory castle were conceived in 2016 during deliberations around the Cybersecurity Law.
But since 2016, several trends have accelerated, accounting in large part for the orgy of regulation we have seen over the past six months. First, the large platforms, particularly, but not limited to, the big two, Alibaba and Tencent, dominated the digital economy and spread their tentacles, largely unregulated, into multiple areas of the economy outside their original business models. In doing so, they engaged in such intense competition and predatory business practices that they helped generate a major regulatory backlash designed in part to protect consumers. Hence the new regulations, which are designed to protect consumers from unwanted intrusions into their online life via AI algorithms optimized to tempt them and push products on them they might not otherwise want. Another theme of regulation directed at big tech platforms is to empower consumers, give them choices, and allow them to opt out of or into particular uses of their data. Second, over the past several years, Chinese leader Xi Jinping began signalling in speeches and public statements that science and technology in general, including private sector technology companies, would not be able to focus only on profits, but needed to retool business models to provide benefits to society.
In keeping with these two trends, there is clearly an element in the new document of regulators wanting to ensure that online experiences mediated by AI algorithms do not produce negative effects, like what some have called the “spiritual opium” of the gaming industry. Hence the rules require algorithmic recommendation operators to adhere to “mainstream value orientations” and “actively promote positive energy” (Art. 6), in particular when it comes to presenting information on “home pages, hot topics, top recommendations and lists, and pop-up windows” (Art. 11); the rules prohibit serving up information to minors that could “encourage internet addiction” or lead them to adopt unsafe behaviors or bad habits (Art. 16); they also require algorithmic recommendation providers to periodically review, evaluate, and validate their algorithms, barring them from leading users to “become addicted” or engage in “high value consumption” (Art. 8).
In addition, regulators are reacting to complaints about so-called “differential” treatment, and the new rules explicitly prohibit providers from using algorithms to carry out “unreasonable differential treatment” of consumers based on characteristics such as consumer preferences and transaction habits (Art. 18). This refers to practices China’s online platforms have long employed via a distinct selling tactic called “big data-enabled pricing schemes,” in which companies differentiate commercial offerings based on users’ price tolerance, as inferred from pricing models built with personal data, without customers’ knowledge or consent. This information asymmetry and the lack of algorithmic transparency have resulted in higher prices for customers who have previously purchased the same products and services, those who use high-end mobile devices, or individuals whose transaction history suggests access to a substantial budget.
In setting out quite detailed regulations on one small but important corner of the world of commercial deployments of AI and big data–driven algorithms, the Cyberspace Administration of China and other Chinese authorities are signalling that there is no corner of the digital economy they will not examine in depth to uncover business practices that are unpopular, exploit user data, and could lead to bad social outcomes. Along with putting in place the world’s strongest data protection regime, at least on paper, Chinese regulators are regaining their footing and sending a by-now not-too-subtle message: just as in the digital asset and cryptocurrency space, there will be no regulatory lag when it comes to data and AI algorithms in China.