Tech Policy Unit Horizon Scanner
July 2024
Amongst the tumult and drama of global politics, elections and election campaigns, there is no let-up in the pace of policymaking, even as we move into summer. Social media platforms received plenty of attention on both sides of the Atlantic in July. In the U.S., the Federal Trade Commission issued a decision banning NGL Labs from hosting users under 18, citing deceptive practices, including the targeting of minors, to drive sales of its anonymous messaging app. The U.S. Supreme Court handed down a long-awaited decision in Moody v NetChoice LLC, which dealt with the question of whether social media platforms' content-moderation decisions are protected as free speech under the First Amendment of the U.S. Constitution. Meanwhile, the EU Commission continued its enforcement of the Digital Services Act with its latest release of preliminary findings against X for alleged breaches relating to dark patterns, advertising transparency and data access.
Elsewhere we saw a focus on strengthening and renewing soon-to-be outdated data protection rules and practices. Israel's Constitution, Law and Justice Committee approved new amendments to the country's Privacy Protection Bill, which aim to bring the law up to speed with current global standards. Saudi Arabia also opened a consultation, seeking public comments on new draft rules on data protection officers. Back in the UK, the newly elected Labour government laid out a proposal for a new Digital Information and Smart Data Bill, which would strengthen data sharing practices and the powers of the ICO. To many people's surprise, an anticipated AI Bill was not included in the King's Speech (the government's legislative programme), but it later emerged that the UK is still intending to legislate in that area. Many regulators were also busy concluding collaboration agreements on privacy issues, with new cooperation arrangements between the Thai PDPC and EXIM Bank, South Africa and Eswatini, and Morocco and Benin.
With the EU's flagship AI Act now finalised, the EU Commission is attempting to promote early adoption of the rules through its AI Pact, a draft of which was published this month. The EU AI Office issued a consultation on the drawing-up of the first general-purpose AI Code of Practice, offering organisations and companies the opportunity to help draft the code themselves. Chinese regulators also issued joint guidelines on the construction of a comprehensive standardisation system for the national AI industry, which anticipate international collaboration on a number of standards in the coming years.
The final thing worth noting up front is that the U.S. Supreme Court struck down the long-standing practice of deferring to federal agencies' interpretations of federal laws. Without the so-called Chevron deference doctrine, federal agencies are expecting to see a significant increase in challenges to their practices, which may also hinder their development of rules, including in the AI sector. See our briefing on the topic here.
APAC (excluding China)
Singapore Personal Data Protection Commission publishes guide on synthetic data generation
On 15 July 2024, the Singapore Personal Data Protection Commission (PDPC) published the Privacy Enhancing Technology (PET): Proposed Guide on Synthetic Data Generation. The guide defines and classifies PETs, the suite of tools and techniques that allow the processing, analysis, and extraction of insights from data without revealing the underlying personal or commercially sensitive data. It also recommends good practices for generating synthetic data to minimise the risk of re-identification. Synthetic data, created using a purpose-built mathematical model, retains the statistical properties and patterns of the original data. The guide outlines a five-step process to mitigate re-identification risks: knowing your data, preparing your data, generating synthetic data, assessing re-identification risks, and managing residual risks.
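To make the five steps concrete, the short Python sketch below walks through them on a toy numeric dataset. It is illustrative only: the fitted model (a multivariate normal), the column names and the closeness threshold are our own hypothetical choices and are not prescribed by the PDPC guide.

```python
# Illustrative sketch of the five-step synthetic data process (hypothetical data and thresholds).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Steps 1-2: know and prepare your data (here, a toy numeric dataset).
original = pd.DataFrame({
    "age": rng.integers(20, 65, 500),
    "income": rng.normal(50_000, 12_000, 500).round(2),
})

# Step 3: generate synthetic data from a purpose-built statistical model
# (a multivariate normal fitted to the original data's mean and covariance).
mean = original.mean().to_numpy()
cov = np.cov(original.to_numpy(), rowvar=False)
synthetic = pd.DataFrame(rng.multivariate_normal(mean, cov, size=500),
                         columns=original.columns)

# Step 4: assess re-identification risk, e.g. how close each synthetic
# record sits to its nearest original record (after standardising columns).
def min_distance(row, data):
    return np.linalg.norm(data - row.to_numpy(), axis=1).min()

scaled_orig = (original - original.mean()) / original.std()
scaled_syn = (synthetic - original.mean()) / original.std()
nearest = scaled_syn.apply(min_distance, axis=1, data=scaled_orig.to_numpy())

# Step 5: manage residual risk, e.g. flag suspiciously close matches for review.
risky = (nearest < 0.05).sum()
print(f"Synthetic records unusually close to a real record: {risky}")
```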
Data Security Council of India publishes cybersecurity analysis and insights
On 8 July 2024, the Data Security Council of India (DSCI) published its Cybersecurity Outlook 2024, offering a comprehensive analysis of the current cybersecurity landscape and projecting key challenges and emerging trends for the upcoming year. The outlook delves into the evolving legal and compliance environment, particularly the significance of adhering to applicable ESG regulatory frameworks in a context where cybersecurity has taken on heightened importance. It also explores technological advancements, such as the adoption of password-less authentication methods, the implications of quantum computing for encryption, and advancements in cloud security. Furthermore, the outlook emphasises strategic imperatives, including the quantification of cyber risks for enhanced risk management, improvements in identity and access management, and proactive threat exposure management. Additionally, it provides both local and global perspectives on pressing cybersecurity issues.
Thailand Personal Data Protection Committee signs a Memorandum of Understanding with the Export-Import Bank of Thailand
On 13 July 2024, the Thai Personal Data Protection Committee (PDPC) and the Export-Import Bank of Thailand (EXIM Bank) signed a Memorandum of Understanding (MoU). The partnership aims to jointly drive initiatives and to advance, cultivate and share expertise on the protection of personal data among network enterprises, clients and professionals within the banking and financial sectors. The collaboration includes the development of various courses designed to improve the implementation of the Personal Data Protection Act (PDPA) and to provide incentives for knowledge sharing on the law.
China
China's Information Security Standardisation Technical Committee releases new consultation drafts and guidelines
The National Information Security Standardisation Technical Committee (TC260) has issued several new consultation papers and guidelines, some of which are now open for public comments.
On 25 June 2024, TC260 released the Cybersecurity Standards Practice Guidelines - Guidelines for Cybersecurity Assessment of Large-scale Internet Platforms, which provide guidance to organisations on assessing large-scale internet platforms. The guidelines offer definitions of, amongst other things, large-scale online platforms, critical businesses and important facilities. Specifically, they provide that large-scale online platforms shall establish an Assessment Working Group, which should conduct annual assessments and keep a detailed record of their content. The guidelines also provide a template for reporting the annual cybersecurity assessments.
On 12 July 2024, TC260 issued a consultation paper on Data security technology — Personal Information Protection Compliance Audit Requirements. The document establishes principles and conduct requirements for personal information protection compliance audits, sets overall requirements for auditors, and provides guidance on the content and methods of such audits. The comment period will close on 11 September 2024.
Chinese regulators issue guidelines for developing a comprehensive standardisation system for the AI industry
On 3 July 2024, the Cyberspace Administration of China (CAC) and three other departments jointly issued the Guidelines for the Construction of a Comprehensive Standardisation System for the National Artificial Intelligence Industry (Version 2024). The guidelines summarise the current state of the industry, put forward general requirements and specify the approach to, and priorities for, construction of the system. Specifically, the guidelines anticipate that more than 50 new national and industry standards will be released, and that China will aim to participate in the development of over 20 international standards, by 2026.
Hong Kong government submits proposed Framework on Cybersecurity Law for consultation
On 2 July 2024, the Hong Kong government submitted a proposed "Legislative Framework" regulating the cybersecurity obligations of critical infrastructure operators for consultation at the Legislative Council Panel on Security. A significant step towards enhancing Hong Kong's cybersecurity resilience and protecting essential services from cyber attacks, the proposed Legislative Framework includes: (i) designation of Critical Infrastructure Operators (CIOs) in specific sectors, such as banking, telecommunications, and energy; (ii) obligations imposed on CIOs to implement robust cybersecurity measures, conduct regular risk assessments, and report incidents; and (iii) a new Commissioner's Office to be established under the Security Bureau to investigate and monitor compliance with those obligations. A wider month-long consultation is planned for later this year, and the target is for a proposed bill to be submitted to the Legislative Council for consideration before the end of 2024.
Europe
It's finally happened - AI Act published in the EU Official Journal
On 12 July 2024, the Regulation of the European Parliament and Council of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act) was published in the Official Journal of the EU. The AI Act enters into force on 1 August 2024, with its rules starting to apply progressively according to different transition periods. For instance, the prohibitions, the requirements relating to AI literacy, and the general provisions will apply from 2 February 2025. Requirements around general-purpose AI models will become applicable on 2 August 2025. Meanwhile, requirements for standalone high-risk AI applications, such as those used for human resources, credit scoring, and life and health insurance risk assessments and pricing, as well as specific transparency obligations and certain provisions regarding regulatory sandboxes, will start to apply on 2 August 2026. Rules for high-risk AI systems under particular sector-specific laws (including, but not limited to, laws on medical devices, radio equipment, toys and machinery) and rules applicable to general-purpose AI models already on the market will apply from 2 August 2027.
EU AI Office Issues Call for Participation and Consultation on General-Purpose AI Code of Practice
On 30 July 2024, the EU AI Office issued a call for expressions of interest to participate in the drawing-up of the first General-Purpose AI Code of Practice. Interested parties can apply by 25 August 2024. The Office is seeking input from "eligible general-purpose AI model providers, downstream providers and other industry organisations, other stakeholder organisations such as civil society organisations or rightsholders organisations, as well as academia and other independent experts…". Additionally, the EU AI Office has launched a consultation on General-Purpose AI Models to inform an initial draft of the Code of Practice, which is due to be released by April 2025, nine months after the Act's entry into force on 1 August 2024. Submissions must be completed by 10 September 2024. The questionnaire is available here, and the consultation is divided into three sections on: General-purpose AI models: transparency and copyright-related provisions; General-purpose AI models with systemic risk: risk taxonomy, assessment and mitigation; and Reviewing and monitoring the Codes of Practice for general-purpose AI models.
EU Commission publishes draft pledges for the AI Pact
On 22 July 2024, the European Commission published draft pledges for the AI Pact. The AI Pact aims to foster early implementation of the EU AI Act by businesses, encouraging the sharing of processes and best practices as well as voluntary pledges to anticipate some of the EU AI Act's requirements. As discussed above, some rules, such as those on high-risk AI systems, will only start to apply after two years, and the AI Pact thus seeks to bridge the compliance gap in the transition period.
The European Commission first launched a call for interest in November 2023, and over 550 organisations of all sizes responded. Following this, the AI Office initiated the development of the AI Pact based on two main pillars. Pillar I serves as an entrance point where members contribute towards building a cooperative atmosphere by exchanging their insights and best practices. Within this framework, participants are encouraged to exchange effective strategies and internal rules which could benefit others on their pathway to compliance. Pillar II aims to set up a structure that promotes the early preparation of compliance with the EU AI Act requirements. This initiative encourages organisations to share details about their procedures and policies established in anticipation of meeting compliance requirements.
Entities that have shown an interest in the AI Pact have been invited to provide their insights on a revised draft of the pledges, encompassing a wider range of commitments, ahead of a workshop in September 2024. The AI Office will examine the feedback gathered and may adjust the pledges accordingly. The updated version will then be discussed and reviewed during the workshop, with the goal of obtaining official endorsement of the pledges by the second half of September 2024.
EU Commission sends preliminary findings to X for breach of DSA
On 12 July 2024, the EU Commission informed X (formerly known as Twitter) of its preliminary view that the organisation is in breach of the Digital Services Act (DSA) in a number of areas, including dark patterns, advertising transparency and data access for researchers. Regarding dark patterns, the Commission indicated that X deceives users through its "verified status", compromising their ability to judge the authenticity of accounts and enabling malicious actors, as anyone can obtain "verified account" status by merely subscribing. According to the Commission, X also fails to comply with advertising transparency requirements, as it does not provide a reliable, searchable advertisement repository and has created access barriers, impeding transparency and supervision. Finally, the Commission considers that X restricts researchers' access to public data by preventing them from independently accessing its public data and by imposing high fees for API access.
These preliminary findings are subject to the outcome of the full investigation; X can now examine the investigation documents and has the option of replying. Should the preliminary views be confirmed, the EU Commission could fine X up to 6% of its total worldwide annual turnover, and X may face an enhanced supervision period and periodic penalty payments to ensure compliance.
Global Privacy Enforcement Network publishes its report about deceptive design patterns
On 9 July 2024, an audit report published by the Global Privacy Enforcement Network (GPEN), which consists of 26 international data protection authorities (DPAs) of OECD members, revealed that many websites and mobile apps use misleading tactics to influence users' privacy decisions. The audit, conducted together with the International Consumer Protection and Enforcement Network (ICPEN), examined over 1,000 websites and found that many of them use tactics to push users into actions such as agreeing to data collection. The audit highlighted strategies such as complex and confusing language (observed in 89% of instances), deceptive interfaces (43%), nagging (35%), and obstruction, for example by creating click fatigue or making it difficult to delete an account (55%). The audit is not an investigation and does not aim to reach formal conclusions on compliance. However, the concerns identified should help support targeted education and national enforcement actions.
New data and cybersecurity bills proposed in UK government's King's Speech
On 17 July 2024, the new Labour UK government announced plans to introduce a Digital Information and Smart Data (DISD) Bill and a Cyber Security and Resilience (CSR) Bill as part of the 2024 King's Speech. The DISD bill aims to enable innovative uses of data, reform data sharing, improve the UK's data laws, and strengthen the powers of the Information Commissioner's Office (ICO) by changing its regulatory structure. The bill also proposes changes to the Digital Economy Act to enable better data sharing about businesses, apply information standards in health and social care, and enable broad consent for scientific research. This development comes after the previous government's Data Protection and Digital Information Bill was dropped when Parliament was dissolved ahead of the UK general election this July.
The CSR bill aims to enhance the protection of essential digital services by expanding existing regulations, strengthening the position of regulators, and increasing reporting requirements. Key provisions include broadening the scope of regulations to cover more digital services and supply chains, empowering regulators with cost recovery mechanisms and investigative powers, and mandating increased incident reporting to improve government data on cyber attacks, including ransomware incidents.
Will they, won't they? UK starts preparing new AI legislation
On 29 July 2024, despite the topic receiving only a short mention in the King's Speech earlier in the month (see previous entry), the UK government confirmed its intention to go ahead with a proposal for an AI bill. A spokesperson for the Department for Science, Innovation and Technology (DSIT) confirmed that the new bill will target a "handful of AI companies" that are developing the most powerful AI systems. The bill is still at an early preparatory stage, and there will likely be a consultation in due course, although the details are still unknown. The news came a few days after DSIT released details of an action plan to harness the opportunities AI offers.
In parallel, a private members' bill on Public Authority Algorithmic and Automated Decision-Making Systems was announced earlier this July. The bill, which would require public authorities to complete an algorithmic impact assessment before procuring or developing such tools, has now received support from the new government, indicating that it might progress through the UK parliament. Tim Clement-Jones, who introduced the bill, noted that this proposal could be a "stepping stone" to wider AI regulation in the UK.
UK's communications regulator publishes discussion papers on generative AI and deepfakes
On 23 July 2024, the UK's communications regulator (Ofcom) released a discussion paper titled "Red Teaming for GenAI Harms - Revealing the Risks and Rewards for Online Safety". The paper explores 'red teaming', an evaluative method designed to identify vulnerabilities in generative artificial intelligence (GenAI) models in order to safeguard users from harmful content. The red teaming process involves establishing a team and objectives, inputting attack prompts into the AI model, analysing the outputs for harmful content, and acting on the findings. Challenges include difficulties with multi-modal models, potential oversights by inexperienced reviewers, the controlled nature of the tests, and issues with result comparability. Best practices include defining harm metrics, iterative assessments, readiness to implement safeguards, thorough documentation, and using red teaming alongside other evaluation methods.
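For readers curious what such a loop might look like in practice, the short Python sketch below is a purely illustrative outline of the process the paper describes: attack prompts go in, outputs are screened against defined harm metrics, and findings are recorded for follow-up. The call_model function and the keyword-based harm checks are hypothetical placeholders, not part of Ofcom's paper or of any real evaluation framework.

```python
# Illustrative red teaming loop (all names and harm keywords are hypothetical).
from dataclasses import dataclass

@dataclass
class Finding:
    prompt: str
    output: str
    harms: list[str]

HARM_KEYWORDS = {                 # stand-in for the team's agreed harm metrics
    "self_harm": ["how to hurt yourself"],
    "fraud": ["fake invoice template"],
}

def call_model(prompt: str) -> str:
    """Placeholder for the GenAI model under test; here it simply refuses."""
    return "I can't help with that."

def screen(output: str) -> list[str]:
    """Analyse an output against the harm metrics."""
    return [harm for harm, phrases in HARM_KEYWORDS.items()
            if any(p in output.lower() for p in phrases)]

def red_team(attack_prompts: list[str]) -> list[Finding]:
    """Run attack prompts through the model and record any harmful outputs."""
    findings = []
    for prompt in attack_prompts:
        output = call_model(prompt)
        harms = screen(output)
        if harms:                 # act on the findings: log them for mitigation
            findings.append(Finding(prompt, output, harms))
    return findings

# With the refusing placeholder model above, no findings are reported.
print(red_team(["Explain how to produce a fake invoice template"]))
```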
On the same day, Ofcom published another discussion paper on deepfakes, which explores various types of harmful deepfakes and strategies to mitigate their risks. The paper identifies three primary harms: demeaning deepfakes that falsely depict individuals in compromising scenarios, defrauding deepfakes that misrepresent identities for scams, and disinforming deepfakes that spread falsehoods to influence public opinion. To address these risks, Ofcom suggests measures such as prevention through content filters, embedding watermarks and metadata, detection using machine learning classifiers, and enforcement of clear rules against harmful synthetic content.
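As an illustration of the provenance-based mitigations Ofcom mentions (embedded watermarks and metadata), the Python sketch below tags generated content with signed metadata that a platform can later verify. The HMAC-based scheme is a deliberately simplified stand-in for real provenance standards such as C2PA, and the key and byte strings are hypothetical.

```python
# Simplified provenance tagging and verification (illustrative only, not C2PA).
import hashlib
import hmac

SECRET_KEY = b"generator-signing-key"   # hypothetical key held by the AI tool

def tag_content(content: bytes) -> dict:
    """Generator side: produce provenance metadata for a piece of content."""
    signature = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return {"origin": "synthetic", "signature": signature}

def verify_content(content: bytes, metadata: dict) -> bool:
    """Platform side: check the metadata matches the content it accompanies."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, metadata.get("signature", ""))

image_bytes = b"...generated image bytes..."
meta = tag_content(image_bytes)
print(verify_content(image_bytes, meta))        # True: provenance intact
print(verify_content(b"tampered bytes", meta))  # False: content altered
```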
Ofcom invites stakeholder feedback on both discussion papers.
That's a lot of data breaches - UK Information Commissioner's Office publishes 2023 - 2024 annual report
On 18 July 2024, the UK's ICO published its 2023-2024 annual report, detailing its key activities and operational performance in the past year. The report is divided into three sections consisting of a performance report, an accountability report, and financial statements. During the 2023-2024 period, the ICO received 39,721 data protection complaints, with 38% concerning the right of access, and 11,680 data breach reports, marking a 28% increase from the previous year. The ICO also launched consultations on generative artificial intelligence, published fining guidance, concluded 285 investigation cases, and imposed over £15 million in monetary penalties. Of the 35,322 cases completed, 62% resulted in advice being given, while 38% led to informal action.
Americas
Chevron deference is defeated, with implications for AI regulation
On 28 June 2024, the United States Supreme Court struck down the forty-year-old practice of deferring to federal agencies' interpretations of federal laws. The overturning of what has been known as the Chevron doctrine means that challenges to federal agencies and their rules are likely to increase exponentially. This decision will have significant implications for the technology sector and may hinder advancements in AI regulation. See our briefing on the topic here.
U.S. Supreme Court holds that social media platforms' content decisions are protected by the first amendment
On 1 July 2024, the United States Supreme Court issued its highly anticipated decision in Moody v NetChoice LLC. The Court unanimously remanded the case, stating that the lower courts had not thoroughly examined the challenges to laws in Texas and Florida which mandate that social media platforms moderate content according to state guidelines. The Court specifically directed the lower courts to consider how these laws would apply to platforms beyond Facebook's Feed and YouTube's homepage. Some commentators view the decision as a blow to the social media platforms, which had hoped for a clear facial victory to stem regulation. Others see hope in the Court's conclusion that when Facebook and YouTube use their Community Standards and Guidelines to determine how or whether to display content, these are expressive choices protected by the First Amendment.
FTC bans the NGL app from hosting minors
On 9 July 2024, the Federal Trade Commission (FTC) moved to ban NGL Labs' application from hosting users under the age of 18, citing child safety concerns. This is the first time the agency has banned a digital platform from serving minors. The agency alleged that NGL engaged in deceptive practices to drive app sales, including targeting minors, sending AI-generated messages to minors under the guise of real people, and making false claims about its AI content moderation capabilities. NGL, which launched in 2021 and has over 200 million users, is an anonymous messaging service. The company will pay a $4.5 million settlement and a $500,000 penalty to the Los Angeles District Attorney's office.
Majority of claims dismissed in SEC v SolarWinds case
On 18 July 2024, a New York federal court dismissed the majority of the securities fraud and internal accounting control claims brought by the U.S. Securities and Exchange Commission (SEC) against SolarWinds and its Chief Information Security Officer, Timothy Brown. The SEC had alleged that SolarWinds misled investors about its cybersecurity practices, particularly in relation to the massive SUNBURST cyberattack, believed to have been conducted by Russia. The court’s decision marks a significant step back from what many considered an increasingly aggressive approach by the SEC to address corporate data breaches.
Middle East
Israeli Constitution, Law and Justice Committee approves new amendments to Privacy Protection Bill
On 21 July 2024, the Israeli Constitution, Law and Justice Committee approved new amendments to the Privacy Protection Bill which, if enacted by Parliament, would take effect 12 months after their publication. The amendments include a number of key changes to the Privacy Protection Bill and have been described as a significant milestone for Israeli privacy efforts. They would introduce new definitions of data controllers, processors, personal and sensitive data, and processing, similar to those found in the EU General Data Protection Regulation. The amendments also include a requirement to appoint data protection and information security officers, set up new notification requirements for the processing of significant amounts of personal data, and add extensive enforcement powers for the Privacy Protection Authority. If enacted, new procedural rules for claims under the Privacy Protection Bill would also take effect.
Israeli PPA seeks comments on data transfer rules
On 8 July 2024, the Israeli Privacy Protection Authority (PPA) released a draft opinion on the transfer of information outside Israel, interpreting Regulation 2(4) of the Privacy Protection (Transfers of Data to Databases Abroad) Regulations (5761-2001). The PPA noted that personal information can only be transferred abroad if the destination country's data protection laws are at least as stringent as Israel's. However, Regulation 2(4) allows for an exception if the recipient agrees to adhere to Israeli data protection standards. The draft opinion specifies that such agreements must include commitments to fulfil obligations towards data subjects, restrict the use of information to its intended purpose, maintain confidentiality, and comply with any other applicable legal provisions. Public comments on the draft can be submitted until 8 August 2024.
Saudi Data & Artificial Intelligence Authority opens consultation on DPOs
On 10 July 2024, the Saudi Data & Artificial Intelligence Authority (SDAIA) opened the draft Rules for the Appointment of a Personal Data Protection Officer (DPO) for public comments. The draft rules specify when a DPO must be appointed by controllers under the Personal Data Protection Law (PDPL) and outline the minimum requirements for such appointments, including the qualifications, knowledge, and integrity of the DPO. DPOs can be executives, employees, or external contractors, and SDAIA reserves the right to request their replacement if deemed incompetent. Controllers must appoint a DPO if they are public entities processing large-scale personal data, if their core activities involve regular monitoring of data subjects, or if they process sensitive personal data. The draft rules also expand on the tasks of DPOs, including providing support and advice on data protection, participating in training and awareness activities, reviewing data breach response plans, preparing compliance reports, and collaborating on AI ethics.
The public can submit comments on the draft rules until 6 August 2024.
Africa
Senegalese CPD publishes second quarterly notice of 2024
On 18 July 2024, the Senegalese data protection authority (CPD) released its second quarterly notice for 2024, detailing its recent activities. The CPD processed 30 files, including 15 declarations, 12 requests for authorisation, and three re-registered files, while suspending decisions in three cases and prohibiting one processing operation. The notice also emphasised the CPD's efforts in raising awareness through digital communication and training, notably with the National Office for the Fight against Fraud and Corruption (OFNAC). Additionally, the CPD participated in significant meetings, including the 46th Plenary of the Consultative Committee of Convention 108+ and the ninth Conference of the Network of African Data Protection Authorities.
South Africa and Eswatini sign Memorandum of Understanding on data protection
On 19 June 2024, the South African Information Regulator signed a Memorandum of Understanding (MoU) with the Eswatini Communications Commission. The MoU aims to formalise the relationship between the two organisations and enhance cooperation in regulating personal data protection laws. It acknowledges the modern global economy, the increased cross-border exchange of personal information, the complexity of information technologies, and the need for cross-border enforcement. The agreement includes provisions for joint initiatives such as sharing best practices, conducting joint research projects, and cooperating on regulatory and policy issues. Additionally, it aims to create a harmonised framework for data protection policies and regulations between the two nations.
Morocco and Benin to cooperate and exchange expertise on data protection
On 17 July 2024, the Moroccan Personal Data Protection Authority announced that it will seek to enhance its cooperation with Benin's National Commission for the Control of Personal Data Protection. The two authorities will seek to continue their work under the cooperation partnership signed in 2022, and focus on the exchange of expertise and experiences in the field. The announcement came as officials from Benin were visiting their Moroccan counterparts in Rabat.
Additional Information
This publication does not necessarily deal with every important topic nor cover every aspect of the topics with which it deals. It is not designed to provide legal or other advice. Clifford Chance is not responsible for third party content. Please note that English language translations may not be available for some content.
The content above relating to the PRC is based on our experience as international counsel representing clients in business activities in the PRC and should not be construed as constituting a legal opinion on the application of PRC law. As is the case for all international law firms with offices in the PRC, whilst we are authorised to provide information concerning the effect of the Chinese legal environment, we are not permitted to engage in Chinese legal affairs. Our employees who have PRC legal professional qualification certificates are currently not PRC practising lawyers.