China moves to further regulate artificial intelligence – what businesses should know
On 11 April 2023, the Cyberspace Administration of China (CAC) published a consultation draft of the Administrative Measures for Generative Artificial Intelligence Services (the GenAI Measures) to regulate the rapidly developing field of generative artificial intelligence (AI). This article highlights some key questions for businesses involved in the use or development of AI within or outside China.
Key issues
- China is developing a set of rules to regulate the rapidly developing field of generative AI.
- Businesses located outside China may also be affected by the GenAI Measures. Rules on responsibilities, generated content, data used to train AI, and governance and transparency should be carefully considered before rolling out generative AI products.
- The applicability of the PIPL should also be properly considered by AI developers.
Key questions
If my business is based entirely outside China, will the GenAI Measures still be relevant?
The GenAI Measures apply to any person that provides services to the public in China by utilising the generative AI products it develops and/or uses.
As long as the generative AI product is not publicly available or publicly marketed to users in China, the better view is that the GenAI Measures will not apply. Providing a generative AI product to a selected number of companies in China or a few pre-identified trial users should not necessarily trigger the GenAI Measures.
However, even if a business takes all measures to ensure that its generative AI product will not be used by any individual in China, it should still carefully consider the source of the data used to train the AI. In this respect, the extra-territorial effect of the PRC Personal Information Protection Law (the PIPL) is particularly relevant, because applying algorithms to process data in order to produce something similar to what human processing would do might arguably involve "analysing and assessing the behaviour of individuals in China" and thus bring the AI developer under the purview of the PIPL.
My business might have some China clients, but is my business using "generative AI"?
CAC proposes to define "generative AI" as "the technology that generates texts, pictures, audio, video, codes and other contents based on algorithms, models and rules".
Admittedly, the element of "intelligence" is not very apparent in CAC's definition of generative AI. Notably, this definition looks very similar to the definition of "deep synthesis", which, based on the deep synthesis regulation released in January 2023 (the Deep Synthesis Rules), means "the technology that generates texts, pictures, audio, video, virtual scenes and other internet information by employing deep learning, virtual reality and other synthetic algorithms". The primary regulatory aim of the Deep Synthesis Rules was generally perceived to be to rein in the use of "deep fakes" in the context of internet information services, which are themselves already a regulated business in China.
It should be noted that there may be substantial overlap between different regulations and concepts as the law tries to keep up with the development of new technologies. The legal landscape for AI and related technologies may well remain fragmented in the near future, with each regulation targeting a specific area or use case of a particular technology. In contrast to the EU approach of developing a unified AI law, the Chinese approach so far has been one of agile governance, promulgating multiple regulations that address different areas of AI development.
Given generative AI is defined so broadly, what should I be aware of if my business has used it to service China clients?
The first issue to consider is whether your business is an internet information service provider or engaged in a related business in China. If so, you should be mindful of the use of recommendation algorithms and of providing services that may relate to public opinion or be capable of mobilising or influencing social viewpoints, as a CAC filing or security review may be triggered in these scenarios.
Although the GenAI Measures are in draft form only, they indicate the regulatory approach that CAC wants to take.
Firstly, on a Provider's responsibilities:
- a Provider means any person using generative AI to provide content generation services, including providing access through, for example, programmable interfaces (i.e., APIs), so that others can use generative AI to produce content
- a Provider is responsible for (a) the content produced using its service, (b) the legitimacy of the data used to train the generative AI and (c) the fulfilment of the relevant governance requirements for such generative AI.
Secondly, on content:
- the content generated may not contain what is prohibited from circulation in China, with the scope of such prohibited information being largely consistent with the existing telecommunications laws and regulations
- the content generated must be true and accurate, with effective measures taken to prevent false information, and must not be discriminatory
- the Provider must properly guide its users to use generative AI for legitimate purposes
- upon identifying that any of the prohibited content mentioned above has been generated, the Provider must implement filtering measures and take steps to prevent such content from being generated again within three months.
Thirdly, on the data used to train AI:
- such data may not infringe on others' intellectual property rights
- consent from the data subject is required if personal data is included, unless otherwise permitted by laws and regulations
- the data must be true, accurate, objective and diversified; and
- the above requirements remain applicable even if the data is sourced from the public domain.
Fourthly, on governance and transparency:
- the Provider must require users to provide their true identity information
- the Provider must establish a complaints handling procedure
- the Provider must, in accordance with the relevant rules, disclose necessary information that could affect users' trust and choices, including the source, size, type and quality of the data used to train the AI, the rules, size and type of tagging, and the underlying algorithms and technological systems.
The GenAI Measures also address other matters such as suspension or termination of services upon certain events, tagging, addiction prevention mechanisms and protection of user information, all of which would affect how a Provider operates its generative AI business.
Final observations
Several days before the GenAI Measures were released for consultation, the Ministry of Science and Technology also issued a consultation draft of the Review Measures for Science and Technology Ethics (the Ethics Review) for public comment. This is a good demonstration that regulating something novel and potentially revolutionary requires concerted efforts from different Chinese authorities, each focusing on different aspects.
However, at the global level, while the regulatory approaches taken in different jurisdictions are subtly different (with some jurisdictions being first movers and others being prudent observers), all jurisdictions are largely moving in the same direction. By way of example, the Italian Data Protection Authority halted ChatGPT's data processing operations at the end of March 2023. This trend means that any company that is incorporated in one jurisdiction, trains its AI using data collected from the open internet, and provides services to users around the world will need a global solution.