Tech Policy Unit Horizon Scanner - August 2021
The Tokyo Olympic Games will draw to a close this weekend, after a fortnight which captivated the world. Fine, but was it digital enough? From an opening ceremony which featured a medley of video game soundtracks, some had hoped this would be the Games where esports take the Olympic stage. But while gaming could help draw young audiences, and has reached official sport status in countries like South Korea, the International Olympic Committee still wants to separate the tennis from the Tetris.
Speaking of things piling up, this month has seen a stack of new announcements in the central bank digital currency (CBDC) space. The big one comes from the Eurozone, which has launched a 24-month investigation into a digital euro. It falls short of a confirmation that a retail CBDC will be created and piloted, but as European Central Bank President Christine Lagarde has emphasised, these things take time. Meanwhile, the race is on in West Africa between Nigeria and Ghana to be the first to pilot a CBDC: the Ghanaians have the edge, with a September launch date in mind.
In AI news, the EU is taking aim at Big Brother. Members of the European Parliament are mulling over amendments to the AI Act to ban intelligent recognition of face, gait and voice in all public spaces. That would extend the ban in the draft bill to private actors, as well as the state, following widespread advocacy on the issue.
While the Europeans keep their AI under close watch, in Australia they're affirming its powers of imagination. A landmark case in intellectual property law has seen an AI system credited with the 'inventive step' in a patent application, likely a global first. That might spell a new recipe for creative genius in the digital age: 1% inspiration, 99% iteration.
Finally, it's been another big month in tech antitrust. In the U.S., President Biden is challenging the consumer welfare paradigm of antitrust law as he calls upon the Federal Trade Commission to double down on Big Tech. The UK is giving wide powers of pro-competitive intervention to its new tech competition regulator, the latest instalment in the proactive-not-reactive movement in antitrust law. And the Australian competition regulator has taken a wide-angle lens to the online marketplace industry. Judging by their inventive approach to media bargaining on social networks earlier this year, it's anyone's guess what will happen next…
Africa
Digital currency pilots in Nigeria and Ghana
Further to our previous edition, the Central Bank of Nigeria (CBN) has now issued a statement that it will pilot its central bank digital currency (CBDC) from 1 October 2021, which will run on the open source Hyperledger Fabric blockchain.
The CBN has moved swiftly, partly on the basis that, by its own figures, 80% of central banks are now considering issuing a CBDC. The announcement from the CBN emphasised these competitive pressures, taking the view that Nigeria should not be left behind in the race.
It is hoped that a number of advantages will flow from the project, including stronger macro-economic management, cross-border trade support and financial inclusion. It may also bring benefits ranging from more efficient payments and remittances to improved tax revenue collection and the facilitation of targeted social policies. However, from the CBN's perspective, the key goal will be to stem the flow of money to decentralised cryptocurrencies, which has been considered both a symptom and a cause of the weakening naira in recent years.
Nigeria is not the only West African state with CBDC aspirations: the Bank of Ghana hopes to beat the CBN to the punch with a pilot launch of its digital cedi in September. In a statement, Ghana's first deputy governor of the Bank noted that the pandemic has quickened the drive towards a cashless economy and is likely to shape monetary policy going forward. The length of the pilot will depend on operational performance and public acceptance.
Privacy in South Africa
As reported in our previous edition, South Africa's rollout of the Protection of Personal Information Act (POPIA) continues apace. The Information Regulator has begun issuing guidance, particularly in relation to what constitutes special personal information, such as race, sexual orientation, biometrics and criminal convictions. Processing of any such information is prohibited under POPIA, subject to exceptions such as data subject consent and the requirements of national defence.
The Information Regulator has also published guidance on exemptions from POPIA's conditions for lawful processing of personal information. In particular, processing is exempt where it is in the public interest, or where it involves a clear benefit to the data subject that substantially outweighs the interference with their privacy.
For more information on POPIA, see our round-up in the April edition.
Americas
Colorado joins California and Virginia in enacting privacy law
We have previously covered data protection legislation emerging in California and Virginia. In July, Colorado joined the club with its own Privacy Act (CPA), closely resembling developments on the East and West Coasts.
The CPA applies to data controllers, defined as persons conducting business in Colorado or targeting Colorado residents as customers. That mirrors Virginia's 'nexus' requirement, and shares the same drawback: nowhere does either state define what it means to conduct business. There are further threshold requirements to keep small businesses out of the scope of the law, though the thresholds are lower if a business derives any revenue from sale of personal data, a stricter position than in California or Virginia.
Meanwhile, the set of consumer rights is closely aligned with the Californian approach. That means rights to opt out of personal data processing, to access collected data, to correct inaccuracies, to have data deleted, and to have data provided in a format that can be transmitted to another party. There are also duties on controllers, including providing a clear privacy notice to consumers, minimising data collection, and taking measures to secure data gathered.
One notable departure from the earlier privacy laws is Colorado's definition of sensitive data (covered by additional protections). While it includes information such as race, religious beliefs, health conditions and biometrics, it is narrower than the Virginia definition insofar as it does not classify precise geolocation data as sensitive. It is narrower still than the Californian definition, which includes social security numbers and union membership, among other things. Insofar as questions of data sensitivity are closely aligned with community values and identity, it is perhaps unsurprising that this is where variations are beginning to emerge in the American privacy landscape.
For more information, consult our briefing on the new law here.
Antitrust in the USA – the return of FDR?
In July, President Biden signed the Executive Order on Promoting Competition in the American Economy. The Order expressly harks back to the Franklin D. Roosevelt trustbusting era in calling for a 'whole-of-government approach' to antitrust policy.
President Biden refers to the Department of Justice (DOJ) and the Federal Trade Commission (FTC) as the 'first line of defense' in antitrust policy, and calls for more active enforcement by both bodies. In addition, he highlights the FTC's authority to issue competition-enhancing rules. The Order goes on to give particular treatment to certain key sectors, including technology. For instance, it establishes a policy for greater scrutiny of mergers by dominant tech platforms, and asks the FTC to develop rules to curb unfair competition practices by online marketplaces.
In a striking move, the President also criticised the established consumer welfare standard in antitrust policy in a speech prior to signing the Order. In particular, he emphasised that we should also consider the welfare of other market participants, like workers, farmers, small businesses and start-ups.
The impact of executive orders is debatable. They may be reversed by a sitting or future president, and are subject to judicial review. Nonetheless, coming hot on the heels of a round of tech antitrust bills in June (see our previous edition), this Order is a reminder that the current administration sees antitrust as one of the defining issues of the day.
To learn more about the executive order, you can review our briefing here.
In other antitrust news, President Biden nominated Jonathan Kanter to head the Antitrust Division of the DOJ. Mr Kanter is known for his advocacy work against large tech companies (notably Google and Apple) on behalf of third parties. His nomination remains subject to Senate confirmation.
Asia Pacific
Antitrust in Australia – after Facebook and Google, it's now online retail
After making headlines earlier this year for imposing the News Media Bargaining Code on Facebook and Google, the Australian Competition & Consumer Commission (ACCC) has a new target in its sights: a press release issued on 22 July announces a wide-ranging investigation into online retail marketplaces.
The focus on online marketplaces is explained in terms of their soaring popularity Down Under. Covid-19 and lockdowns accelerated an existing trend toward online shopping, with Australians spending a record $50 billion online in 2020, almost double their 2018 spend. As a result, the ACCC will take a closer look at key issues in the market, including pricing practices, use of data, the terms and conditions imposed on third-party sellers, and instances where the marketplace sells directly.
The final report will not be completed until early 2022. Given the ACCC's record of novel market interventions, this will be one to watch.
Australian court finds AI is the inventor
Can an AI system be credited as the inventor of a patent? In a ground-breaking decision, an Australian court has decided it can. Stephen Thaler, creator of an AI system called DABUS, credits the system with two engineering designs. He has sought to have these designs protected in his own name (but listing DABUS as the inventor) in a dozen countries globally.
So far, courts in the UK, U.S. and EU have rejected the applications. The key stumbling block in patent law is the requirement referred to variously as 'non-obviousness' or the 'inventive step'. Traditionally, this step has been attributed to human inventors only, both by statutory definition, and due to scepticism over the potential for creativity in computational processes. Yet there are criminal penalties in the U.S., for instance, against attributing the non-obvious step to the incorrect party. That has left some companies between a rock and a hard place: Siemens was unable to patent an AI-developed car suspension system in 2019 on these grounds, as it could not attribute the invention to either the AI system or the system's developers.
The Australian Federal Court decision is therefore significant for an emerging field of AI-invention. Justice Beach's judgment notes that Australian patent legislation does not define the term 'inventor', unlike other statutes globally. He was therefore willing to accept that the ordinary meaning of 'inventor', as an agent noun, is wide enough to embrace non-human inventors. He also dismissed concerns about derivation of title from a computer system.
It is likely that the decision will encourage Mr Thaler to renew his patent fights globally. However, he emphasises that this is a philosophical battle more than a legal one, with his goal to show the world that 'the system "walks and talks" just like a conscious human brain'.
Cybersecurity in China
China is ramping up its cybersecurity requirements. In early July, the Cyberspace Administration of China (CAC) opened a consultation on amendments which would impact critical information infrastructure operators (CIIOs) with personal information on over 1 million users. In particular, under the new rules, these CIIOs would need to obtain state clearance before listing on an overseas exchange. State review will involve assessing the risk that the relevant CIIO, or data that it holds, will be impacted or controlled by a foreign government.
Meanwhile, the CAC has been getting more active. It has initiated cybersecurity reviews of a number of network operators, conducted on-site cybersecurity investigations together with other authorities, and forced removal of relevant apps from app stores. Tech companies operating in China would be well advised to prepare for enhanced supervision on cybersecurity compliance and data export issues in the coming months.
Europe
Ban on facial recognition in the EU?
If mask-wearing during the pandemic didn't put paid to facial recognition technology, the EU might be about to deliver the final blow. As the AI Act now moves to the European Parliament and the Council for review, support for an outright ban on the technology in public spaces is growing.
The AI Act in its current draft restricts the use of real-time facial recognition for the purpose of law enforcement, with carveouts for serious crime. However, both NGOs and EU bodies have pushed for further-reaching restrictions. In June, hundreds of NGOs signed a letter promoting a ban on all facial recognition technologies in publicly accessible spaces (including by private actors), along with other remote technologies in relation to biometrics like gait and voice. Later that month, the European Data Protection Board Opinion on the AI Act proposed a general ban on AI systems for large-scale remote identification.
The message has now filtered through to some of the key MEPs. The AI Act rapporteur, Brando Benifei, stated in an interview in July that he supports a ban on use of facial recognition in public. The position is also supported by several of Mr Benifei's colleagues on the Internal Market and Consumer Protection (IMCO) Committee. Christian Kastrop, a senior official in the German ministry for justice and consumer protection, has also signalled his support for the ban in a strong sign for its prospects at the Council stage.
See our full briefing on the Commission's AI Act proposal here.
Algorithmic transparency in Spain
On 12 August, a new Spanish law will come into effect giving workers the right to understand the algorithms that govern their jobs. The rule is the first of its kind in Europe, and has turned heads since being signed into law in May. The adverse impacts of AI on workers have been borne out across the continent, from the UK, where the Post Office fired workers based on an AI system's mistaken findings of theft, to Italy, where Deliveroo's algorithm failed to account for legally protected reasons for withholding labour (e.g. sickness) when ranking drivers. Spain's new law addresses injustices like these by shining a torch on the algorithms themselves.
Or at least, that is the hope. Detractors of the law argue that it is too vaguely drafted to achieve that purpose. It remains unclear whether employers will need to provide the source code of the algorithm, offer an explanation of how it works, or do something else entirely. Another point of debate concerns exercise of the right itself, which is held by a company's works council, rather than by the workers directly. Companies with fewer than 50 workers need not have such councils, leaving their workers unprotected. The right is also subject to a duty of secrecy and confidentiality in the council's hands, limiting its effectiveness. Nonetheless, the law may prove to be an important first step toward algorithmic transparency in the workplace.
In addition to the rule on workers' algorithmic review rights, the law also creates a presumption of employee status for gig economy workers. The employment status of workers for companies like Uber and Deliveroo has been widely contested across Europe, with a UK Supreme Court decision in February concluding that Uber drivers are not self-employed.
Digital Markets Unit takes shape in the UK
The UK's proposals regarding powers held by the Digital Markets Unit (DMU) reveal some interesting developments in antitrust law.
A new branch of the Competition and Markets Authority (CMA), the DMU will regulate firms engaged in digital activities which hold strategic market status. That designation will reflect not only substantial market power, but also the extent of their entrenchment in the market.
The DMU's key roles will be to impose a Code of Conduct and to administer pro-competitive interventions in relation to such businesses. While the Code of Conduct awaits further clarification, the plans for pro-competitive interventions offer more for antitrust enthusiasts to chew over. First, the very fact that the DMU may intervene in cases of substantial and entrenched market power suggests a departure from the typical antitrust target of market dominance. Second, the interventions available to the DMU are likely to be wide in scope, including specific performance directions. Third, as the aim is to intervene quickly to anticipate further entrenchment, the DMU is seeking views on an appropriate time frame for intervention investigations – likely shorter than the 18 months allowed for CMA market investigations.
These design features speak to a broader trend in tech antitrust across Europe. Increasingly, states are empowering regulators to intervene quickly at early stages of market development to avoid the emergence of dominant players in the first place. We discussed this in relation to a German market tipping case in our previous edition – there, a court ordered a platform to withdraw a rebate which threatened to turn the market duopoly into a monopoly.
To read more about the DMU proposals, consult our client briefing here.
A step closer to a digital euro
The Eurozone has taken a major step in the direction of adopting a digital euro. On 14 July, the Governing Council of the European Central Bank (ECB), comprising the Executive Board of the ECB and the governors of national central banks of the 19 euro area countries, agreed to launch the investigation phase for the Eurozone's own CBDC. The decision comes nine months after the publication of the ECB report on the digital euro, during which time the ECB has carried out consultations with the public.
The investigation phase will last 24 months. It will cover a wide array of issues, including design features for the currency, impacts on the market including intermediaries, and the legislative changes required for implementation. The ECB press release also emphasises that there will be a focus on environmentally friendly technologies, noting that the energy consumption of the key infrastructure models under consideration is "negligible" when compared with cryptoassets like Bitcoin.
At this stage, there are no guarantees that the investigation will culminate in the issuance of a CBDC. However, it is the clearest signal we have seen so far that the ECB is serious about its commitment to a digital euro.
Middle East
Deepfakes in the UAE
In recent years there have been growing concerns around the use of deepfakes, that is, synthetic video or audio created to mimic another person's likeness or voice. In a recent example, controversy has surrounded the use of AI technology to generate lines of narration by chef Anthony Bourdain in a new documentary about his life and death.
The U.A.E.'s National Program for Artificial Intelligence has now released a guide taking aim at deepfakes. The guide identifies the core malicious purposes of deepfakes, including reputational damage, manipulation of public opinion, and fabrication of legal evidence. It also distinguishes shallow fakes – basic alterations to video to add misleading timestamps or to slow down speech – from more sophisticated AI-based fakes. It concludes by warning the public against sharing such content, and by offering advice on how to detect deepfakes, such as looking for mismatches between speech and lip movement or sudden changes in lighting.