Tech Policy Horizon Scanner
September 2022
Open the pod bay doors, HAL…
It is reported that Stanley Kubrick, the mastermind behind HAL, the AI supercomputer in 2001: A Space Odyssey, was so convinced of the imminent discovery of sentient (and sinister) artificial intelligence that he sought an insurance policy from Lloyd's of London to protect himself against film revenue losses should AI be discovered before the release of his magnum opus.
Whilst the chances of an artificial intelligence wreaking the same eerie havoc as HAL with impunity are remote, it does raise the question of how far AI can, and should, be liable for its own actions. This month, the EU proposed a new AI Liability Directive to deal with just this issue. The proposals present new rules relating to AI-caused breaches of privacy and lay out an easier route for victims of AI-led discrimination to claim compensation. The rules also aim to lower the benchmark for causality and provide greater access to evidence, giving litigants a fair shot in cases where complex AI systems go wrong. There are, however, no plans to extend the Directive to cover rogue spaceship AIs.
Sticking with the theme of sinister tech, this month the US FTC released a report exploring the rise of "dark patterns": sophisticated design practices that manipulate consumers into buying products they don't want and giving up data they would rather keep. The FTC report emphasised the Commission's strong enforcement capabilities against these tricks, citing its consumer-protection mandate and its deep understanding of the methods involved, including disguised ads and convoluted subscription cancellation pathways on websites.
Moving from dark patterns to bright futures, this month saw the Monetary Authority of Singapore shake hands with India's International Financial Services Centres Authority to forge a cross-border collaboration in the world of fintech. The resulting FinTech Co-operation Agreement will allow companies to utilise regulatory sandboxes across the two countries to promote innovation and growth. On a similarly positive note, this month the UAE Telecommunications and Digital Government Regulatory Authority launched the Digital Society Initiative, aimed at empowering consumers through information and social initiatives to make them more technologically confident and safe.
China
'Measure for Measure': CAC issue compliance guidance for new data export security measures
On 31 August 2022, the Cyberspace Administration of China ("CAC") published the Application Guidelines for Security Assessment of Data Export (1st Edition) (the "Guidelines"), aimed at helping relevant entities prepare for compliance with the security assessment requirements set out in the Measures of Security Assessment for Data Export (the "Measures"), which took effect on 1 September 2022.
The newly released Guidelines shed detailed light on the scope of the Measures as well as the practical procedural aspects required for compliance, including stipulating the required materials and providing guidance on the contents of the application itself. An interesting aspect of the Guidelines is the provision for 'miscellaneous' data transfers, which covers "other cross-border data transfer activities prescribed by the CAC", essentially signalling the leeway the CAC retains for further regulatory interpretation and intervention in particularly complicated or novel cases.
The Guidelines also provide a set of application forms and templates, together with detailed guidance on the contents of the self-assessment report itself. This includes information ranging from timeline expectations for the application to a comprehensive insight into the 'Legal Documents' required for the assessment itself.
The release of the Guidelines serves to emphasise how certain data-processing companies handling large volumes of data will be particularly affected by the Measures, and will likely prompt entities to reconsider their data export strategies, data localisation and storage arrangements, especially as they begin preparing for the security assessment.
'Top Grades in Class': China's TC260 issue data classification and grading guidelines
On 14 September 2022, China's National Information Security Standardization Technical Committee ("TC260") published an extensive draft document highlighting the principles and methodology of data classification that authorities and companies may use once it is finalised.
The draft document, titled Information Security Technology – Network Data Classification and Grading Requirements (the "Requirements"), provides a deep dive into the contextual, technical, and legal metrics used to grade data. The Requirements highlight three broad data types:
- important data, defined as data relating to specific groups, people or areas which, if leaked, could endanger national security and social/public health and stability;
- core data, defined as data with a wide yet precise reach which, if used illegally, could impact political security; and
- personal data, defined as un-anonymised data relating to identifiable natural persons.
The Requirements also provide practical and contextual guidance as to how data can be classified, noting in the draft that data which only affects the organisation itself or individual citizens is not to be defined as 'important'. They also suggest that organisations should select "common and stable" factors when classifying data types.
The draft Requirements mark a clear step towards China formulating a cohesive data classification and grading system, as required under the PRC Data Security Law. The draft Requirements are open for public comment until 13 November 2022.
APAC (Excluding China)
Joint Parliamentary Committee push India one step closer towards major personal data law
Following a lengthy consideration period, the Indian Joint Parliamentary Committee (JPC) has issued its recommendations on the Personal Data Protection Bill (PDPB), thrusting the country closer to signing this far-reaching bill into law.
Introduced to the Indian Parliament in December 2019, the PDPB is a major bill targeted at ensuring organisations and public bodies protect the personal data of Indian citizens through a comprehensive and structured privacy regime. The JPC, an influential committee within the Indian Parliament, has provided its reading and recommendations. Most notably, the JPC recommends that organisations be required to have an in-house data fiduciary to i) develop organisation-wide privacy strategies, ii) shift internal approaches and practices once the PDPB comes into effect and iii) ensure continued compliance and accountability.
The JPC also reaffirmed the PDPB's current provisions for tough penalties for non-compliance, which will make data privacy a more pervasive concern for Indian organisations, companies and public bodies, rather than one reserved for a single department or siloed team.
The Indian Government have confirmed that they are currently redrafting the PDPB to incorporate the JPC's recommendations. The date for re-tabling the PDPB in the Indian Parliament is yet to be announced.
'A Line in the Sand(Box)': India and Singapore agree to share regulatory sandboxes
The Monetary Authority of Singapore (MAS) and India's International Financial Services Centres Authority (IFSCA) have signed a FinTech Co-operation Agreement which aims to facilitate and strengthen fintech-related regulatory collaboration and partnership between the two countries.
The MAS, the central bank of Singapore, and the IFSCA, a unified developmental and regulatory authority in India, have agreed a set of joint principles that leverage their domestic regulatory capabilities and unite their efforts in promoting beneficial innovation. A key aspect of the Agreement will be to allow mutual referrals of companies to the two countries' regulatory 'sandboxes', providing support for "experimentation of technolog[ical] innovations" in both countries. The Co-operation Agreement also aims to bolster cross-border use-case testing by inviting companies to enter a Global Regulatory Sandbox, allowing them to test how their products work across jurisdictions.
The Agreement goes further by building in information-sharing provisions which will permit the exchange of non-supervisory information and developments relating to innovation in Fintech products and services. The key intention is to encourage cross-border conversation and discussion on Fintech issues, leading to collaboration on joint innovation projects.
Joseph Joshy, Chief Technology Officer at the IFSCA, noted: "This agreement is a watershed moment that ushers in a Fintech Bridge to serve as a launch pad for Indian Fintechs to Singapore and landing pad for Singapore Fintechs to India."
EU
European Commission seek to secure new Cyber Resilience Act
The European Commission has released a proposal for the EU Cyber Resilience Act (CRA). The proposed legislation builds on the 2020 EU Cybersecurity Strategy and the EU Security Union Strategy and is intended to (a) raise the standards for cybersecurity rules and promote more secure hardware and software; (b) address cybersecurity vulnerabilities within those products and better inform users about which products enhance security; and (c) require software and hardware manufacturers to improve the security of their products from the design phase and throughout their life cycle. The CRA is not the first EU legislative file to address cybersecurity; rather, it aims to complement preceding texts such as the AI Act, the Cybersecurity Act and the Network and Information Security 2 (NIS2) Directive.
The proposed CRA will apply to all products that are connected either directly or indirectly to another device or network. There are some exceptions for products for which cybersecurity requirements are already set out in existing EU rules, for example medical devices, aviation and cars. The draft CRA will now be examined by the European Parliament and the Council. Once adopted, economic operators and member states will have two years to adapt to the new requirements. An exception to this rule is the reporting obligation on manufacturers for actively exploited vulnerabilities and incidents, which would apply one year from the date of entry into force, since it requires fewer organisational adjustments than the other new obligations.
"iRobot…do solemnly swear": EU proposal on liability for AI
The purpose of the AI Liability Directive is to lay down uniform rules for access to information and alleviation of the burden of proof in relation to damage caused by AI systems, establishing broader protection for victims (be they individuals or businesses) and fostering the AI sector by increasing guarantees. It will harmonise certain rules for claims outside the scope of the Product Liability Directive, in cases in which damage is caused by wrongful behaviour. This covers, for example, breaches of privacy or damage caused by safety issues. The new rules will, for instance, make it easier to obtain compensation for a person discriminated against in a recruitment process involving AI technology.
The Directive simplifies the legal process for victims when it comes to proving that someone's fault led to damage, by introducing two main features:
- Presumption of causality: in circumstances where a relevant fault has been established and a causal link to the AI system's performance seems reasonably likely, the so-called 'presumption of causality' will address the difficulties victims face in having to explain in detail how harm was caused by a specific fault or omission, which can be particularly hard when trying to understand and navigate complex AI systems;
- Right of access to evidence: victims will have more tools to seek legal redress, thanks to a right of access to evidence from companies and suppliers in cases in which high-risk AI is involved.
UK
'Making sure the networks work': UK Government lays out new telecoms security regulations in Parliament
On 5 September 2022, the UK Government laid the new Electronic Communications (Security Measures) Regulations 2022 (Regulations) in Parliament for scrutiny, along with a draft Telecommunications Security Code of Practice (Draft Code). Both the Regulations and the Draft Code form part of a new telecoms security framework created by the Telecommunications (Security) Act 2021 (which became law in November 2021) and are intended to address risks to the security of the UK's public telecoms networks and services. They have been developed with the National Cyber Security Centre (NCSC), the UK's national technical authority for cyber security, and Ofcom, the telecoms regulator.
The Regulations, which come into force on 1 October 2022, set out specific security measures that public telecoms providers need to take in addition to the overarching legal duties in sections 105A and 105C of the Communications Act 2003. These measures are designed to ensure that public networks and services follow appropriate and proportionate security practices. Public telecoms providers that fail to comply with the Regulations could face fines of up to 10% of turnover or, in the case of a continuing contravention, £100,000 per day. Ofcom will monitor and enforce public telecoms providers' compliance with the Regulations.
The Draft Code helpfully contains detailed guidance on how telecoms providers can comply with the Regulations. It sets out what good telecoms security looks like, explaining key concepts underpinning the Regulations and specific technical measures that providers can take to demonstrate compliance with their legal obligations. Specifically, it sets out how providers may determine whether the guidance applies to them via a three-tier system, with tier thresholds determined by a provider's annual relevant turnover. The Draft Code has been laid in Parliament under the requirement in section 105F of the Communications Act 2003. It will remain in draft for Parliamentary scrutiny for 40 sitting days, after which the Government plans to issue and publish the final Code.
'Thinking about getting a PET?': ICO publishes draft guidance on privacy-enhancing technologies
On 7 September 2022, the UK Information Commissioner’s Office (ICO) published draft guidance on privacy-enhancing technologies (PETs) to help organisations put a data protection by design approach into practice.
PETs are technologies that can help organisations share and use people’s data responsibly, lawfully, and securely, including by minimising the amount of data used and by encrypting or anonymising personal information. They are already used by financial organisations when investigating money laundering, for example, and by the healthcare sector to provide better health outcomes and services to the public.
The draft PETs guidance explains the benefits and different types of PETs currently available, as well as how they can help organisations comply with data protection law. Examples of the suggested PETs include homomorphic encryption (HE), secure multi-party computation (SMPC) and federated learning (used in AI). There is also a reference table with links to common standards for each PET. This guidance is part of the ICO's draft guidance on anonymisation and pseudonymisation, and the ICO is seeking feedback to help refine and improve the final guidance.
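To make the data-minimisation idea behind federated learning concrete, the short sketch below is a purely illustrative toy example (not drawn from the ICO guidance, and the model, datasets and update rule are hypothetical simplifications): each participant trains on its own private data and shares only model weights with a coordinator, which averages them, so personal data never leaves the participant's environment.

```python
import numpy as np

# Purely illustrative: a toy federated learning round for a linear model.
# The model, data and update rule are hypothetical simplifications chosen to
# show the data-minimisation idea, not the ICO's guidance or any real system.

def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step on a participant's private data."""
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def federated_average(weight_list):
    """The coordinator averages locally trained weights; raw data is never shared."""
    return np.mean(weight_list, axis=0)

rng = np.random.default_rng(0)
global_weights = np.zeros(3)
# Two participants, each holding a private dataset that never leaves their environment.
private_datasets = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]

for _ in range(10):  # federated training rounds
    local_weights = [local_update(global_weights, X, y) for X, y in private_datasets]
    global_weights = federated_average(local_weights)  # only weights cross the boundary
```

In practice, deployments often combine this pattern with other PETs, such as secure aggregation or differential privacy, to protect the shared model updates themselves.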
The PETs draft guidance was published ahead of the 2022 roundtable of G7 data protection and privacy authorities, which took place in Bonn, Germany on 7-8 September 2022 and at which the ICO presented its work on PETs to its G7 counterparts. As per the communication published on 8 September 2022, the ICO called on governments and industry to continue to invest in the research, development and use of PETs.
Americas
"Fool me once…": Federal Trade Commission Releases Report Showing Rise in Dark Patterns to "Trick and Trap" Consumers
On September 15, 2022, the U.S. Federal Trade Commission ("FTC") released a staff report describing the increasing use of digital "dark patterns," defined as sophisticated practices designed to "trick or manipulate consumers into buying products or services or giving up their privacy." The report, Bringing Dark Patterns to Light, focused on four common dark pattern tactics: 1) "misleading consumers and disguising ads," including disguising a company's own content as third-party independent content; 2) "making it difficult to cancel subscriptions or charges," such as through a difficult cancellation path on a company's website; 3) "burying key terms and junk fees" using tooltip buttons and text formatting; and 4) "tricking consumers into sharing data" by presenting consumers with choices about privacy settings that "intentionally steer consumers toward the option that gives away the most personal information." The report also highlighted the enforcement actions the FTC has taken against dark patterns as part of its consumer-protection mandate, including against companies that have surreptitiously added items to consumers' online shopping carts and those that have used deceptive marketing designs. This report follows two FTC policy statements from October 2021, Deceptively Formatted Advertisements and Negative Option Marketing.
'Breaking News': Senate Judiciary Committee Advances Journalism Competition and Preservation Act
On September 22, 2022, the Senate Judiciary Committee passed the Journalism Competition and Preservation Act by a vote of 15-7. The Senate bill was first introduced by Sen. Amy Klobuchar (D-MN), Chair of the Subcommittee on Competition Policy, Antitrust, and Consumer Rights, in March 2021. A companion House bill was introduced the same month by Rep. David Cicilline (D-RI).
According to the press release, the bipartisan, bicameral bill, which Senator Klobuchar and Sen. John Kennedy (R-LA) introduced with Rep. Cicilline and Rep. Ken Buck (R-CO), would:
- "Empower eligible digital journalism providers—that is, news publishers with fewer than 1,500 exclusive full-time employees and non-network news broadcasters that engage in standard newsgathering practices—to form joint negotiation entities to collectively negotiate with a covered platform over the terms and conditions of the covered platform’s access to digital news content.
- Require covered platforms—which are online platforms that have at least 50 million U.S.-based users or subscribers and are owned or controlled by a person that has either net annual sales or market capitalization greater than $550 billion or at least 1 billion worldwide monthly active users—to negotiate in good faith with the eligible news organizations.
- Enable non-broadcaster news publishers to demand final-offer arbitration if their joint negotiation with a covered platform fails to result in an agreement after six months.
- Create a limited safe harbor from federal and state antitrust laws for eligible digital journalism providers that allows them to participate in joint negotiations and arbitration and, as part of those negotiations, to jointly withhold their content from a covered platform.
- Prohibit discrimination by a joint negotiation entity or a covered platform against an eligible digital journalism provider based on its size or the view expressed in its content and provide a private right of action for violations of this prohibition.
- Prohibit retaliation by a covered platform against eligible digital journalism providers for participating in joint negotiations or arbitration and provide a private right of action for violations of this prohibition.
- Sunset within six years."
Discussing the bill, Sen. Klobuchar said, "The Senate Judiciary Committee has once again stood up to monopoly tech companies on a bipartisan basis. . . . But local news is facing an existential crisis, with ad revenues plummeting, newspapers closing, and many rural communities becoming 'news deserts' without access to local reporting. To preserve strong, independent journalism, we have to make sure news organizations are able to negotiate on a level playing field with the online platforms that have come to dominate news distribution and digital advertising. Our bipartisan legislation ensures media outlets will be able to band together and negotiate for fair compensation from the Big Tech companies that profit from their news content, allowing journalists to continue their critical work of keeping communities informed."
The bill also had its skeptics, with Sen. Mike Lee (R-UT), Ranking Member of the Subcommittee on Competition Policy, Antitrust, and Consumer Rights, voicing concerns. On Twitter, he posted a video discussing issues with the bill and the caption, "The Journalism Competition and Preservation Act makes publishers more dependent on Big Tech, not less. Unfortunately, the bill's lofty intentions do not outweigh its unintended consequences."
US States Begin Strengthening Children's Privacy Protection
On September 15, 2022, California enacted the California Age-Appropriate Design Code Act, following in the footsteps of the UK's Age Appropriate Design Code, which came into force in 2020. The Act applies to any business that provides an "online service, product, or feature likely to be accessed by children" (under age 18). The Act supplements the state's comprehensive privacy law (the California Consumer Privacy Act) and requires online platforms to "consider the best interests of children when designing, developing, and providing that online service, product, or feature." Businesses also must prioritize children's safety and well-being wherever there is a conflict between their commercial interests and the interests of children who access these platforms. New obligations imposed by the law include data protection impact assessments for new services accessed by children, default privacy settings, and notice of monitoring (even if only by a parent).
Just as the CCPA inspired several other states to pass their own comprehensive privacy legislation, this law may be the start of a similar wave, with New York introducing legislation modeled after the bill on September 23. The Act is scheduled to come into force on July 1, 2024, although some obligations, such as data protection impact assessments, may need to be commenced earlier.
Middle East
'DLTs meet PoAs': Ministry of Justice launches digital power of attorney issuance service
The UAE Ministry of Justice has launched an online power of attorney issuance service platform, which will allow users to generate legal documents via the blockchain.
The introduction of this legal service will permit parties to generate and monitor digitally ratified power of attorney documents in less than 10 minutes, without needing to visit a notary in person. The underlying aim of the initiative is to strengthen the country's remote litigation system.
To ensure data integrity and protection against fraud and alteration, the documents will be stored on the blockchain, and users will be issued digital wallets which they can share with competent authorities and other third parties during proceedings.
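For readers curious how ledger-anchored documents resist tampering in general terms, the sketch below is a generic, simplified illustration under assumed names and data (it is not a description of the Ministry of Justice platform's actual design): a hash of the ratified document is recorded at issuance, and any party can later recompute the hash of a presented copy to confirm it has not been altered.

```python
import hashlib
import json

# Generic illustration only: anchoring a document fingerprint so later
# alterations can be detected. This is a simplified sketch, not a description
# of the UAE Ministry of Justice platform's actual design.

def fingerprint(document_bytes: bytes) -> str:
    """Return a SHA-256 hash uniquely identifying the document's contents."""
    return hashlib.sha256(document_bytes).hexdigest()

ledger = []  # stand-in for a blockchain ledger

# At issuance, the ratified power of attorney is hashed and the hash
# (not the document itself) is recorded on the ledger.
poa = json.dumps({"grantor": "A", "attorney": "B", "scope": "property"}).encode()
ledger.append({"doc_id": "POA-001", "hash": fingerprint(poa)})

def verify(doc_id: str, presented_bytes: bytes) -> bool:
    """A third party recomputes the hash of a presented copy and compares it."""
    record = next(r for r in ledger if r["doc_id"] == doc_id)
    return record["hash"] == fingerprint(presented_bytes)

assert verify("POA-001", poa)                    # untampered copy passes
assert not verify("POA-001", poa + b" altered")  # any alteration changes the hash
```

Anchoring only the fingerprint, rather than the document itself, also limits the personal data exposed on the ledger.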
The UAE believe that this new platform will not only reduce paper transactions, save time and protect the environment, but also aid it in its plan to develop an integrated legal and judicial services system capable of reaching far beyond its national borders.
UAE Ministry launches digital platform for ICV Programme certification
The National Committee for the National In-Country Value (ICV) Programme, headed by the Ministry of Industry and Advanced Technology (MoIAT), has announced the launch of a new digital platform enabling companies to obtain the national ICV certificate.
Supported by artificial intelligence (AI) technology, the new platform will automate the certification process, which can save companies up to 40 percent in time and costs. Moreover, the platform utilises blockchain technology to check the validity of certificates, in addition to offering a 'bidding process' feature, which enables users to select the certifying body.
This platform is in line with the ministry's strategy to further develop the industrial sector and future industries in the UAE by supporting the national industrial strategy, Operation 300bn, which aims to create a business environment that attracts local and international investors and stimulates innovation and the adoption of advanced technology.
'Education, Education, Education': TDRA launches Digital Society Initiative
The Telecommunications and Digital Government Regulatory Authority (TDRA) has launched the Digital Society Initiative, which features a series of awareness field activities designed to promote the use of digital tools and improve the digital wellbeing of various segments of society.
This includes promoting TDRA social initiatives, with real-life examples of how to leverage them to make daily life easier, thereby accelerating the pace of digital transformation in the UAE.
The Digital Society Initiative aims to raise awareness of the Digital Government’s initiatives, services and efforts to enhance the quality of digital life of the various members of the UAE community. It also aims to increase the confidence of customers in the Digital Government and its services, and encourage the country's residents to adopt digital enablers and use digital services.
Africa
'Taxi!': Kenya takes steps to cap commission for ride-hailing companies
Kenya's National Transport and Safety Authority (NTSA) is introducing an 18% commission cap for all ride-hailing companies in the country. The cap will impact major international players such as Bolt and Uber.
Uber, which recently left neighbouring Tanzania following the enactment of a similar cap limiting commission to 15%, is reportedly taking a challenge to the Kenyan Supreme Court to nullify the law.
Tanzania data protection law receives first reading in Parliament
The Personal Information Protection Bill has received its first reading in the Tanzanian Parliament. The Bill borrows heavily from other data protection regimes and focuses on the protection of personal information as well as data security.
'Make sure to Like and Subscribe': Uganda passes law to regulate "misuse" of social media
The Ugandan Parliament has passed the Computer Misuse (Amendment) Bill 2022. The Bill, which amends the Computer Misuse Act 2011, introduces, amongst other things, specific provisions on hate speech and disinformation. The Bill was introduced by a private Member of Parliament and has proven divisive amongst other Ugandan Parliamentarians and regional civil liberty organisations, with critics suggesting that the Bill, and in particular the social media offence, would reduce freedom of expression.
The controversial new offence introduces a penalty for any "person who uses social media to publish, distribute or share information, prohibited under the laws of Uganda or using disguised or false identity".
The Bill's definition of social media is broad and, unusually for Internet governance legislation, mentions specific platforms by name. Under the Bill, social media is taken to mean "a set of technologies, sites, and practices which are used to share opinions, experiences and perspectives, and includes YouTube, WhatsApp, Facebook, Instagram, Twitter, WeChat, TikTok, Sina Weibo, QQ, Telegram, Snapchat, Kuaishou, Qzone, Reddit, Quora, Skype, Microsoft Team and Linkedin".
The Bill will now be passed on to the Ugandan President for assent before becoming law.
Digital Ministry to create 'Gambia Cyber Security Policy'
The Gambian Minister of Communications and Digital Economy has reportedly noted that his Ministry is working with stakeholders on the country's proposed cyber security policy, aimed at helping The Gambia respond to future cyber-attacks.
This forms part of The Gambia's wider ICT policy strategy for 2022-2024, which seeks to deploy infrastructure that will ensure reliable, high-quality internet (e.g. via a second submarine cable landing station).