Clifford Chance

Talking Tech

Tech Policy Unit Horizon Scanner

November 2024

Artificial Intelligence | Data Privacy | Cyber Security | 6 December 2024

Governments and businesses around the world continue to assess the impact of Donald Trump's election victory. From plans to repeal the Biden-Harris AI Executive Order to potentially cancelling consumer tax credits for electric vehicles and reducing the protection afforded to social media companies by Section 230 of the Communications Decency Act of 1996, the tech policy landscape will see a significant shift over the coming four years – we released a deep dive on what those shifts might look like, which you can read here.

One area of particular focus will be President-elect Trump's ambitions for space travel. On the campaign trail, Trump said, “Elon, get those rocket ships going because we want to reach Mars before the end of my term.” Elon Musk has stated that SpaceX aims to send five uncrewed Starships to Mars in 2026 and, if these land safely, people in 2028. For Trump and Musk's shared vision to be realised, the federal government will have to continue to be a key SpaceX client.

Australia has become the first country in the world to ban social media for under 16-year-olds. Under the ban, which will not take effect for at least 12 months from the bill's Royal Assent, tech companies could be fined up to A$50m (approximately £25.7m) if they fail to comply. This is to counteract what Prime Minister Anthony Albanese says is a “clear, causal link between the rise of social media and the harm [to] the mental health of young Australians.” However, there are concerns about the speed at which the bill was passed and its practical implications, including how companies will verify the ages of their users.

In Africa, Botswana's Data Protection Act 2024 was published in the Official Gazette, the Senate of Cameroon announced the introduction of a draft bill on data protection, and Kenya's Ministry of Health opened a public consultation on draft regulations for the Digital Health Act.

India's competition authority fined Meta 2.13 billion rupees (approx. £20.08 million) for WhatsApp's 2021 privacy policy update, finding that the company abused its dominant position. Meta has stated it plans to appeal the fine.

Unsurprisingly, AI continues to be at the forefront of regulators' minds. The UK's ICO released recommendations for AI developers and providers in the recruitment sector, following an audit of AI tools used in recruitment. The European Commission released the first draft of the General-Purpose AI (GPAI) Code of Practice written by independent experts, while Turkey's DPA published an information note on chatbots.

Finally, to mark International Day of Persons with Disabilities (IDPD) 2024 on 3 December, we released a briefing on how AI is empowering inclusivity and improving accessibility for disabled individuals and an overview of the global legal landscape looking at the U.S., Europe and APAC – you can read it here.

APAC (excluding China)

Singapore collaborates with UK and EU on AI safety

On 6 November 2024, the Singaporean Ministry of Digital Development and Information (MDDI) announced that Singapore and the United Kingdom have signed a new Memorandum of Cooperation to enhance the safety and reliability of AI technologies in their development and use.

This is part of a wider collaboration between the two countries in the tech space under the UK-Singapore Digital Economy Agreement (UKSDEA), which took effect in 2022. The new memorandum will strengthen cooperation between the AI Safety Institutes of both countries. Key areas of collaboration include AI safety research, global safety standards and protocols, information sharing, and comprehensive AI testing.

On 20 November 2024, the MDDI announced the establishment of a new Administrative Arrangement (AA) on AI with the EU. The AA sets out closer cooperation between the Singapore Infocomm Media Development Authority (IMDA) and the EU AI Office in six key areas: information exchange, joint testing and evaluation, development of tools and benchmarks, standardisation activities, AI safety research, and insights on emerging trends. This is part of the wider EU-Singapore Digital Trade Agreement negotiated in July 2024.

Australia approves social media ban on under-16s

On 29 November 2024, the Online Safety Amendment (Social Media Minimum Age) Bill 2024 passed both Houses of Parliament following the House of Representatives agreeing to the Senate's amendments to the bill.

This bill modifies the Online Safety Act 2021, mandating social media platforms to prevent children under a certain age from creating accounts on platforms with age restrictions.

The bill targets social media platforms primarily designed for online interaction among users, allowing them to connect and share content. However, platforms are not considered age-restricted if their content is inaccessible to Australian users or if they are exempt under legislative rules. The bill defines age-restricted users as Australian children under 16.

Social media providers must take reasonable measures to stop these users from opening accounts. The bill also includes privacy safeguards, requiring platforms to destroy any personal data collected to verify a user's age once it has served its purpose. Under the ban, which will not take effect for at least 12 months from Royal Assent, tech companies could be fined up to A$50m (approximately £25.7m) if they fail to comply.

The bill will become law once it receives Royal Assent.

India's competition authority fines Meta 2.13 billion rupees (approx. £20.08 million) 

On 18 November 2024, the Competition Commission of India (CCI) imposed a fine of 2.13 billion rupees (approximately £20.08m) on Meta Platforms, Inc. for breaching the Competition Act through WhatsApp's 2021 privacy policy update. The update required users to accept expanded data collection and mandatory data sharing with Meta companies to continue using WhatsApp, eliminating the previous opt-out option. The CCI determined that this 'take-it-or-leave-it' approach imposed unfair conditions by abusing Meta's dominant market position.

Additionally, the CCI found that sharing WhatsApp users' data with Meta companies for non-service purposes created entry barriers for competitors by denying market access in the display advertisement sector. Meta was also found to be leveraging its dominance in over-the-top messaging to protect its online advertising market position.

As a result, the CCI imposed a penalty of 2.13 billion rupees (approximately £20.08 million) on Meta, issued cease-and-desist orders, and mandated behavioural changes. WhatsApp must refrain from sharing user data with Meta for advertising for five years. For non-advertising purposes, WhatsApp must clearly explain data sharing practices and allow users to opt out. Users must be able to manage data sharing preferences through in-app notifications and settings. Future policy updates must adhere to these guidelines.

Meta has stated it plans to appeal the fine.

China

China's National People's Congress publishes amendment to Anti-Money Laundering Law

On 8 November 2024, the Standing Committee of the National People's Congress issued the amended Anti-Money Laundering Law of the People's Republic of China. The law stipulates that financial institutions must report to the relevant authority before providing Anti-Money Laundering (AML) data, such as customer identification data, transaction information, and other compliance information, to foreign authorities or organizations.

Additionally, if the AML data involves important data or personal information, its cross-border transfer must comply with the relevant regulations on data export and personal information protection.

China's Cyberspace Administration publishes initiative for cross-border data flow cooperation

On 20 November 2024, the Cyberspace Administration of China released the Global Cross-border Data Flow Cooperation Initiative. Aiming to promote cross-border data flow, the initiative emphasises the principles of openness, inclusiveness, security, cooperation, and non-discrimination. China declares its readiness to carry out and deepen exchanges and cooperation in the field of cross-border data flows with all parties based on the initiative.

China's Ministry of Industry and Information Technology publishes contingency plan for data security incidents

On 31 October 2024, the Ministry of Industry and Information Technology issued the Contingency Plan for Data Security Incidents in the Industrial and Information Technology Sectors (for Trial Implementation).

The contingency plan stipulates that data processors shall immediately report incidents to local regulatory authorities and take appropriate remedial measures in the event of a data security incident. The plan also adopts a tiered approach, imposing different levels of requirements on data processors based on the severity of data security incidents.

Europe

EDPB publishes first report under the EU-US Data Privacy Framework

On 5 November 2024, the EDPB published its first report on the review of the EU-US Data Privacy Framework (DPF). The report acknowledges the efforts by US authorities and the Commission to implement the DPF, noting the steps taken by the US Department of Commerce and the establishment of a redress mechanism for EU individuals.

The report also identified several points requiring additional clarification, attention or concern (e.g., on the effective operation of the redress mechanism, as no complaint had been filed under the DPF during the review period).

The EDPB's review focused notably on access by U.S. public authorities to personal data transferred from the EU to DPF-certified organisations, calling for further clarification (e.g., on the concept of 'HR data') and for the codification of safeguards under U.S. law. The EDPB also welcomed the Commission's suggestion to conduct the next periodic review within three years to take stock of potential changes under U.S. law.

Commission releases first draft of GPAI Code of Practice

On 14 November 2024, the Commission released the first draft of the General-Purpose AI (GPAI) Code of Practice written by independent experts. This draft, developed as part of the iterative drafting process initiated in September 2024, marks the conclusion of the first of four drafting rounds scheduled to end by April 2025.

The Code targets providers of GPAI models, emphasising transparency, copyright rules, and systemic risk management for advanced models. Stakeholders, including nearly 1,000 representatives from EU Member States and international observers, will discuss the draft during working group meetings. Feedback was collected via a dedicated platform until 28 November 2024, with adjustments to follow. The final version is expected to include clear objectives, proportional measures, and key performance indicators while addressing exemptions for open-source providers.

Cyber Resilience Act (CRA) published in EU Official Journal

On 20 November 2024, the Cyber Resilience Act (CRA) was published in the Official Journal of the EU, marking a significant step toward enhancing cybersecurity for Internet of Things (IoT) products. The CRA introduces mandatory cybersecurity requirements for products with digital elements (PDEs) in the EU, aiming to harmonise cybersecurity requirements across their design, development, and production.

Key obligations include vulnerability management, notification processes, transparency measures, and the preparation of technical documentation and EU declarations of conformity. Non-compliance could result in fines of up to EUR 15 million or 2.5% of global turnover. While most provisions take effect in 2027, organisations are advised to begin their compliance efforts promptly to meet the forthcoming requirements.

ICO and CMA publish joint position paper on harmful design in digital markets

On 4 November 2024, the Information Commissioner's Office (ICO) announced that it had collaborated with the Competition & Markets Authority (CMA) to release a joint position paper titled 'Harmful design in digital markets: How Online Choice Architecture practices can undermine consumer choice and control over personal information.'

This paper is aimed at companies using design strategies in digital markets, including websites and online services, as well as product and UX designers responsible for creating these interfaces. It outlines how certain online design choices can cause issues related to data protection, consumer rights, and competition, potentially breaching laws overseen by the ICO and CMA.

The paper includes examples of what it characterises as harmful design practices, such as 'harmful nudges and sludge,' 'confirmshaming,' 'biased framing,' 'bundled consent,' and 'default settings,' which can negatively impact personal data processing choices. It offers guidance to firms and designers in relation to these practices.

ICO issues AI recruitment tool guidelines

On 6 November 2024, the ICO released recommendations for AI developers and providers in the recruitment sector, following an audit of AI tools used in recruitment. The audit revealed issues such as unfair processing of personal data, including filtering candidates based on protected characteristics and inferring personal traits like gender and ethnicity from names. Additionally, some AI tools were found to collect excessive personal information and retain it indefinitely without candidates' knowledge.

The ICO's report outlines several key recommendations for AI providers and recruiters. These include ensuring fair processing of personal data by addressing issues of fairness, accuracy, and bias in AI outputs. It also advises clearly informing candidates about data processing practices, collecting only necessary personal information, and conducting regular Data Protection Impact Assessments (DPIAs). Furthermore, the report stresses the importance of defining roles in data processing, providing clear instructions to AI providers, and establishing a lawful basis for data processing activities.

The ICO also published questions for organisations to consider when procuring AI recruitment tools, such as completing DPIAs, ensuring transparency, and mitigating bias, to promote responsible use of AI in recruitment.

Americas

New CCPA sensitive personal information: neural data

SB 1223, signed into law by Governor Gavin Newsom on 28 September 2024, amends the California Consumer Privacy Act (CCPA) to include neural data as "sensitive personal information." The bill addresses the growing concern about the lack of regulatory oversight of emerging consumer neurotechnologies, such as neuromonitoring devices, cognitive training applications, and mental health apps, and the implications for neural privacy. California is the second state, following the amendment of the Colorado Privacy Act, to expand privacy law to protect neural data.

Under SB 1223, neural data—defined as "information that is generated by measuring the activity of a consumer's central or peripheral nervous system"—will be granted the same level of protection as biometric data, genetic data, and precise location data. Consumer neurotechnology companies must disclose what neural data they collect and provide consumers with an option to opt out of businesses selling their brain data. The bill also guarantees consumers' rights to correct, delete, and limit the use of their neural data gathered by commercial neurotechnology companies.

SpaceX to increase Starship launches during Trump's administration

On 15 November 2024, SpaceX President and COO Gwynne Shotwell revealed at the 2024 Baron Investment Conference in New York that she anticipates 400 Starship rocket launches during the next four years of the second Trump Administration. SpaceX has conducted six test launches of the rocket so far, making progress on engineering goals with each launch and catching the Super Heavy booster during the fifth test flight.

While at the conference, Shotwell also discussed her hopes for the incoming administration and the new "Department of Government Efficiency" (DOGE)—to be co-headed by Musk—to improve existing regulations across industries that restrain SpaceX's ability to conduct launches at a higher pace. SpaceX is a key government contractor across numerous projects involving NASA, the military, and telecommunications. SpaceX leaders have publicly lamented the current regulatory environment the company faces, particularly restrictions imposed by the Federal Aviation Administration, for their impact on innovation.

Elon Musk amends lawsuit against OpenAI to include Microsoft and California AG as defendants

In August 2024, Musk brought a new lawsuit against OpenAI for fraud and breach of contract. Musk's team filed the amended complaint with the U.S. District Court for the Northern District of California on 14 November 2024, adding new federal and state antitrust claims and bringing the total to 26 claims. The complaint also names additional defendants, including Microsoft, California Attorney General Rob Bonta, and LinkedIn founder-turned-venture capitalist Reid Hoffman.

Additionally, the team added two plaintiffs: Musk's xAI startup (a competitor of OpenAI) and former OpenAI board member Shivon Zilis. This move comes after the defendants sought a motion to dismiss the complaint in October.

Middle East

Israel's State Comptroller releases report on cybersecurity

On 12 November 2024, the State Comptroller's Office of Israel published its annual audit report on cyber protection and information technologies. The report includes a chapter on national preparedness in the field of AI, which is considered a national priority.

The report discusses information security at the National Insurance Institute, the information systems at the Israel Post and the Post Bank, and government risk management in the field of information, communication, and technology.

Turkey's DPA publishes an information note on chatbots

On 8 November 2024, the Personal Data Protection Authority (KVKK) published an information note on chatbots. This note covers what AI chatbots do, what personal data is processed through these bots, and what should be considered when developing such applications.

Among the other matters covered in the note are the legal obligations of the developers and service providers of chatbots such as ChatGPT. They are obliged to inform users about how their data is used, shared and stored. They must also take proactive measures to ensure data security and comply with the principles set out in Law No. 6698 on the Protection of Personal Data (KVKK).

Dubai's Financial Services Authority unveils 2024 Cyber Thematic Review

On 19 November 2024, the Dubai Financial Services Authority (DFSA), the independent regulator of financial services in the Dubai International Financial Centre (DIFC), held its Annual Outreach session. At this session, the 2024 Cyber Thematic Review was released. It evaluates the cybersecurity maturity and resilience of DIFC firms.

The Review identified 10 key findings, which represent the DFSA's focus areas.

Firms operating in and from the DIFC are expected to review the report and its findings, assess their relevance, and implement appropriate measures where required.

New law regulating the Sharjah Digital Department

In November 2024, Sharjah, one of the Emirates in the United Arab Emirates, introduced a law regulating the Sharjah Digital Department (SDD) to advance its position as a smart digital city.

The law aims to enhance the effectiveness and efficiency of government performance through the provision of smart digital services based on global standards.

It also aims to raise awareness among government departments of the importance of digital transformation and governance in advancing institutional work and, ultimately, to enhance stakeholder satisfaction.

Africa

Botswana's Data Protection Act 2024 published in Official Gazette

On 29 October 2024, the Data Protection Act 2024 was published in Botswana's Official Gazette. The Act was passed by the National Assembly on 19 August 2024.

The Act establishes a number of changes. It raises the maximum fine to BWP 50 million (approx. £2.9 million) or 4% of total worldwide annual turnover, whichever is higher. It sets out conditions for the appointment of data protection officers, along with their qualifications, duties, and a code of conduct. It establishes requirements for processing sensitive personal data, including data revealing racial or ethnic origin, sexual orientation, political opinions, religious or philosophical beliefs, or trade union membership. The Act also establishes additional conditions for the lawfulness of processing, including data minimisation, accuracy, storage limitation, and accountability.

Senate of Cameroon announces introduction of draft bill on data protection

On 14 November 2024, the Senate of Cameroon announced that draft bill No. 236/PJL/SEN/3L relating to the protection of personal data in Cameroon was one of six bills introduced to the Senate by the President.

The Coordinator of the Digital Transformation Acceleration Project in Cameroon (PATNUC) at the Ministry of Posts and Telecommunications ran a public consultation on the draft bill last year.

The draft law defines key terms such as 'personal data,' 'biometric data,' and 'sensitive data.' The latter includes information on religion, politics, ethnicity, sexuality, health, and legal matters. The draft bill establishes a National Authority for the Protection of Personal Data (Data Protection Authority). The draft bill regulates data transfers, allowing them if adequate protection and guarantees are in place, such as binding corporate rules or standard contractual clauses approved by the Data Protection Authority.

Kenya's Ministry of Health opens public consultation on draft regulations for Digital Health Act

On 21 November 2024, the Ministry of Health launched a public consultation on three draft regulations under the Digital Health Act.

Public comments are invited until 19 December 2024 via email.

Additional information

This publication does not necessarily deal with every important topic nor cover every aspect of the topics with which it deals. It is not designed to provide legal or other advice. Clifford Chance is not responsible for third party content. Please note that English language translations may not be available for some content.

The content above relating to the PRC is based on our experience as international counsel representing clients in business activities in the PRC and should not be construed as constituting a legal opinion on the application of PRC law. As is the case for all international law firms with offices in the PRC, whilst we are authorised to provide information concerning the effect of the Chinese legal environment, we are not permitted to engage in Chinese legal affairs. Our employees who have PRC legal professional qualification certificates are currently not PRC practising lawyers.