Clifford Chance

Talking Tech

Tech Policy Unit Horizon Scanner

October 2024

Artificial Intelligence | Data Privacy | Cyber Security | 4 November 2024

The 2024 election super cycle ends with arguably its most important vote, in the United States on 5 November. The result will have significant repercussions for tech policy around the world, with both candidates seeking to cement US global tech leadership over the coming years, albeit with very different approaches.

One challenge the winner will face is the progress and regulation of AI. Recent weeks have seen the states continue to grapple with the topic. In California, Governor Gavin Newsom vetoed SB 1047, a controversial AI safety bill, citing concerns over the bill's focus on developers of large AI models. With Trump pledging to repeal the Biden-Harris AI Executive Order, the future role and shape of the federal government in regulating AI is a point of contention between the two presidential candidates.

Elsewhere, bolstering cybersecurity continues to be high on the tech policy agenda this month. In Australia, the nation's first piece of standalone cybersecurity legislation, the Cyber Security Bill 2024, was introduced in parliament. China's State Council issued its Administrative Regulations on Cyber Data Security, creating new obligations on network data processors, including the requirement to conduct national security reviews. The EU Council adopted the Cyber Resilience Act, a new law on security requirements for the design, development, production and marketing of hardware and software products. A National Cybersecurity Policy and Strategy was launched in Ghana to tackle cybersecurity threats arising from the nation's rapid digital transformation over recent years.

Meanwhile, the UK government has introduced the Data (Use and Access) Bill to parliament, carrying over many of the proposals in the previous government’s Data Protection and Digital Information Bill while seeking to reassure the EU over the UK's adequacy status ahead of next year's renewal.

Finally, in a sign of how AI is changing legal work, Singapore has permitted court users to draft affidavits and statements using generative AI on the condition that such content is verified, edited, and fact-checked.

APAC (excluding China)

Australian government introduces Cyber Security Bill 2024 to parliament

On 9 October 2024, the Australian government introduced the Cyber Security Bill 2024 to parliament. The bill is Australia's first piece of standalone cybersecurity legislation. Among other things, it proposes to mandate minimum cyber security standards for smart devices and to require certain businesses to report ransomware payments.

The bill will also progress and implement reforms under the Security of Critical Infrastructure Act 2018. This includes establishing a power for the government to direct entities to address serious deficiencies within their risk management programs and improving government assistance measures to better handle the impacts of incidents on critical infrastructure.

Singapore's Supreme Court issues a circular on use of generative AI in courts

Effective from 1 October 2024, Registrar's Circular No. 9 of 2024 from the Supreme Court of Singapore allows court users to employ generative AI, subject to requirements for relevance, accuracy, and observance of intellectual property rights.

The circular forbids using AI to create any evidence that will be relied on in court. However, it allows for drafting affidavits and statements on the condition that AI-generated content is verified, edited, and fact-checked to ensure accuracy.

Court users must also ensure that there is no unauthorised disclosure of confidential or sensitive information through the use of AI tools. They must be prepared to identify the specific sections of court documents which use AI-generated content.

China

China's State Council publishes Administrative Regulations on Cyber Data Security

On 30 September 2024, the State Council issued the Administrative Regulations on Cyber Data Security to standardize data processing activities and ensure data security. The regulations impose new obligations on network data processors, including the requirement to conduct national security reviews for data processing activities that impact or could impact national security.

The regulations also emphasise the protection of personal information, important data and cross-border data, and provide clear compliance guidance for market participants regarding data subjects' rights.

Read our new article, which provides an in-depth analysis of the key new requirements introduced by the Cyber Data Security Regulations and their impact.

Chinese authorities release consultation papers on public data resource policies

On 12 October 2024, the National Development and Reform Commission released the Interim Administrative Measures on the Registration of Public Data Resources (Consultation Paper). The draft specifies the procedures for, and rules governing, the registration of public data resources and clarifies the responsibilities of registration entities, with the aim of establishing a national system for public data registration.

The National Data Administration simultaneously issued the Implementation Standards for the Authorised Operation of Public Data Resources (Consultation Paper). The consultation paper seeks to regulate the authorized operation of public data, aiming to create a unified data market.

It outlines the responsibilities of local governments and industry departments in authorising data operations, including establishing implementation plans and agreements with operational organisations, while ensuring lawfulness, fairness, and transparency during the process.

Europe

EU Council adopts new law on security requirements for digital products

On 10 October 2024, the Council of the European Union adopted the Cyber Resilience Act. The new legislation will apply to manufacturers, distributors and importers of hardware and software and aims to improve the security of digital products in Europe throughout their supply chain and lifecycle.

The act sets binding cybersecurity requirements for digital products sold in the EU, including software, webcams, smart TVs and other Internet of Things (IoT) devices. It also sets cybersecurity requirements for the design, development, production and marketing of hardware and software products.

Entities that fail to comply with the act could incur fines up to EUR 15,000,000 or, for businesses, up to 2.5% of their total global revenue from the preceding financial year, whichever amount is greater.
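
For illustration only, the act's "whichever amount is greater" cap works out as a simple calculation. Below is a minimal Python sketch based on the figures described above; the function name and the example revenue figure are hypothetical and are not taken from the act.

# Illustrative sketch of the Cyber Resilience Act's headline fine cap:
# the greater of EUR 15,000,000 or 2.5% of total global revenue from the
# preceding financial year (function name and example are hypothetical).
def cra_maximum_fine(global_annual_revenue_eur: float) -> float:
    fixed_cap = 15_000_000                            # EUR 15 million
    revenue_cap = 0.025 * global_annual_revenue_eur   # 2.5% of global revenue
    return max(fixed_cap, revenue_cap)

# Example: a business with EUR 2 billion in global revenue faces a cap of EUR 50 million.
print(cra_maximum_fine(2_000_000_000))  # 50000000.0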

The act is expected to be published in the Official Journal in the coming weeks. It will enter into force twenty days after publication and will apply 36 months after its entry into force, with some provisions applying at an earlier stage.

Legitimate interest: New CJEU case-law and EDPB's guidelines

On 4 October 2024, the Court of Justice of the European Union (CJEU) issued a decision concerning the disclosure of an association's members' personal data to sponsors without those members' consent. In C‑621/22 Koninklijke Nederlandse Lawn Tennisbond v Autoriteit Persoonsgegevens, a Dutch tennis association argued that disclosing its members' data to sponsors was in its legitimate interest, aiming to strengthen member engagement and provide added value through discounts and offers from partners.

The CJEU examined whether the disclosure of personal data for consideration to sponsors for direct marketing purposes, to satisfy the controller's commercial interests, may qualify as 'legitimate interest' under the EU's GDPR. The CJEU ruled that such processing can be considered lawful only if strictly necessary to the legitimate interests in question and if the interests or rights of the data subjects do not override this interest, in light of the relevant circumstances. The CJEU also clarified that, while legitimate interests do not have to be determined by law, they must always be lawful.

Following this, on 9 October 2024, the European Data Protection Board (EDPB) released guidelines 1/2024 on the processing of personal data based on legitimate interest. The EDPB elaborates on several use cases, with a notable focus on children. It emphasises that processing children's data requires heightened protection, noting that their interests often outweigh controllers' legitimate interests.

The EDPB also clarifies that legitimate interest is generally insufficient for profiling or targeted advertising involving children. For marketing or the creation of personality or user profiles, the EDPB stresses that controllers must be able to demonstrate that the children's interests are not negatively affected.

The EDPB indicates that it is unlikely that legitimate interest would be appropriate for direct marketing in a number of cases. For instance, in the case of a platform entirely funded by personalised advertising, which creates users' profiles using data collected both on and off its service for that purpose, the EDPB considers that users are unlikely to reasonably expect their data to be processed for personalised ads without their consent.

The guidelines are open for consultation until 20 November 2024.

UK government introduces Data (Use and Access) Bill to parliament

On 23 October 2024, the Data (Use and Access) Bill (DUA Bill) was introduced to the House of Lords. The DUA Bill carries over many of the proposals in the previous government’s Data Protection and Digital Information Bill.

Provisions which some saw as presenting a high risk of the EU deciding the UK is not an “adequate” jurisdiction for transfers of personal data have been dropped. The adequacy finding is set to expire in June 2025 and junior digital economy minister Margaret Jones has stated that the government is "at great pains to reassure" EU officials that the DUA Bill "would not do anything to put" the renewal of the adequacy status at risk.

The DUA Bill supports digital ID verification and introduces a Digital Verification Service. It would also establish a new ‘Information Commission’ to take over all regulatory responsibilities previously held by the ICO. The Commission will have broader powers, including to issue and enforce codes of practice for the processing of personal data.

There will be increased government oversight of the regulator; the Secretary of State for Science, Technology and Innovation will be able to appoint the members of the Commission, including the Commissioner, and issue policy directions.

Read our new article where we explore the similarities and differences between the UK government's new Data (Use and Access) Bill and the most recent iteration of the Data Protection and Digital Information Bill.

Ofcom tells platforms "the time for talk is over"

On 17 October 2024, the UK's regulator for communications services, Ofcom, provided a progress update on implementing the UK’s Online Safety Act. From December 2024, platforms will need to act to comply with their duties under the act.

In December 2024, Ofcom will publish the first edition of its illegal harms codes and guidance. Platforms will then have three months to complete their illegal harms risk assessments.

In January 2025, Ofcom will finalise children’s access assessment guidance and guidance for pornography providers on age assurance. Platforms will have three months to assess whether their service is likely to be accessed by children.

Melanie Dawes, Ofcom's chief executive, stated that "the time for talk is over. From December, tech firms will be legally required to start taking action, meaning 2025 will be a pivotal year in creating a safer life online. We’ve already engaged constructively with some platforms and seen positive changes ahead of time, but our expectations are going to be high, and we’ll be coming down hard on those who fall short."

Americas

AI regulation battle continues at the state level

On 29 September 2024, California’s Governor Gavin Newsom vetoed SB 1047, a highly controversial AI safety bill. The bill would have been the first in the nation to regulate AI on a large scale with a particular focus on developers of frontier models. For weeks prior to Newsom’s veto, lobbyists, politicians and tech companies fervently debated the bill, with proponents arguing that it would protect consumers against AI harms, and opponents claiming that it would stifle innovation.

Newsom stated that the bill's scope was too broad: "…the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." Newsom did, however, sign into law more than a dozen separate AI laws, covering issues ranging from deepfake pornography to AI-generated clones of Hollywood actors.

Meanwhile, New York forges ahead in AI regulation at the state agency level. On 18 October 2024, the New York State Department of Financial Services (NYDFS) proposed new AI guidance to supplement its landmark cybersecurity regulation (23 NYCRR Part 500) of the financial entities it oversees.

The guidance highlights cyber risks inherent in using AI-enabled tools, especially with respect to multifactor authentication, non-public information, supply chains and third-party vendors. NYDFS-regulated entities are advised to appropriately assess and control AI risk through monitoring and training programs.

Middle East

Saudi Arabia publishes guide on data breaches

On 21 October 2024, the Saudi Data & Artificial Intelligence Authority (SDAIA) published a Personal Data Breach Incidents Procedural Guide. It sets out the steps that data controllers need to follow in the case of a personal data breach, in accordance with the Saudi Personal Data Protection Law (PDPL).

The guide outlines three steps that data controllers must follow in the case of a breach: (i) notifying SDAIA within 72 hours of becoming aware of the breach, particularly if it affects the data subjects' rights or interests; (ii) containing the breach and limiting its impact; and (iii) documenting the incident and the corrective actions taken.

It is worth noting that the guide applies to all data controllers who are subject to the provisions of the PDPL and its Implementing Regulations.

UAE introduces international policy on AI

On 11 October 2024, the UAE cabinet approved an international policy on AI aiming to prevent the misuse of the technology.

The policy is formulated around six guiding principles: progress, collaboration, society, ethics, sustainability and safety.

According to the policy, the UAE will participate in international forums to improve the use of AI, advocate for transparency and support the establishment of international alliances for governing AI systems. The policy would also support the implementation of international regulations that hold countries accountable for the development of AI.

Africa

East African Community develops Data Governance Policy Framework

On 25 October 2024, the East African Community (EAC)* announced it had validated a regional Data Governance Policy Framework following a three-day workshop in Kigali, Rwanda. The framework is aimed at establishing common standards for data protection, privacy, and security across the eight Partner States in order to harmonise the region's approach to data management and promote economic growth.

The EAC's Principal Information Technology Officer, Eng. Daniel Murenzi stated that by developing the framework the EAC is "creating a unified regional approach to data governance that aligns with continental priorities and ensures the safety and efficiency of digital services in our region".

*The EAC consists of eight Partner States, comprising the Republic of Burundi, the Democratic Republic of Congo, the Republic of Kenya, the Republic of Rwanda, the Federal Republic of Somalia, the Republic of South Sudan, the Republic of Uganda and the United Republic of Tanzania.

Ghana launches National Cybersecurity Policy and Strategy

On 3 October 2024, Ghana's government launched its National Cybersecurity Policy and Strategy (NCPS). The policy has five pillars: Legal Measures, Technical Measures, Organisational Measures, Capacity Building, and Cooperation. It aims to tackle cybersecurity threats arising from Ghana's rapid digital transformation over recent years.

The policy aligns with the International Telecommunication Union's Global Cybersecurity Agenda guideline for cybersecurity development which seeks to improve confidence, trust, and security in the ICT architecture of ITU member countries, including Ghana.

Established under the Cybersecurity Act 2020 (Act 1038), Ghana's Cyber Security Authority will lead the implementation of the policy. 

Additional Information

This publication does not necessarily deal with every important topic nor cover every aspect of the topics with which it deals. It is not designed to provide legal or other advice. Clifford Chance is not responsible for third party content. Please note that English language translations may not be available for some content.

The content above relating to the PRC is based on our experience as international counsel representing clients in business activities in the PRC and should not be construed as constituting a legal opinion on the application of PRC law. As is the case for all international law firms with offices in the PRC, whilst we are authorised to provide information concerning the effect of the Chinese legal environment, we are not permitted to engage in Chinese legal affairs. Our employees who have PRC legal professional qualification certificates are currently not PRC practising lawyers.