New legislation for AI in Europe: where do things stand three months after the new Commission's appointment?
The Commission's white paper and related report on AI set the scene for developments to come
The appointment of the new European Commission in 2019 was preceded by announcements that legislative proposals for a coordinated European approach to the human and ethical implications of AI would be put forward during its first 100 days. While no specific text has been proposed so far, there have been key developments since the beginning of the year.
This article looks at some key takeaways of the AI white paper published by the European Commission on 19 February 2020 and the related AI report on the safety and liability implications of AI, the Internet of Things and robotics. Both form part of a wider, new digital strategy for Europe, which also includes a European data strategy.
The AI White Paper
The AI White Paper is structured around two main building blocks: (i) a policy framework for AI aimed at mobilising efforts and resources to achieve an "ecosystem of excellence" in Europe; and (ii) the development of a future AI regulatory framework to create an "ecosystem of trust".
Europe needs to seize – and more importantly, not miss – key opportunities. These include the next data wave, and the shift of data processing and analysis from data centres and centralised computing facilities to smart, connected objects and "edge computing".
To encourage the development and use of AI in Europe, the AI White Paper identifies various actions: closer coordination and cooperation between Member States and the Commission, further funding and investment programmes, skills development and research centres, fostering collaboration between SMEs, public-private partnerships, etc.
A new legal framework
While the legal framework is a key aspect, the question remains open and the Commission cautious. The Commission rejects the need to completely rewrite EU or national liability rules to take account of AI. That said, it has identified gaps in existing legislation as regards safety and liability, and possible adjustments to address them. Beyond those adjustments, there may be a need for new legislation specifically on AI.
The exact way forward in terms of legislative proposals, and the timing for their preparation, are not specified. Rather, at this stage the Commission is soliciting comments and opinions through an open consultation.
The notion of AI remains to be clearly defined. The right balance will need to be struck between legal certainty on the one hand, and flexibility to accommodate technical progress on the other.
A risk-based approach
The new regulatory framework for AI would be risk-based. The central distinction would be between "high-risk" AI applications and other AI applications. The AI White Paper provides a definition of high-risk AI applications:
SECTOR CRITERION + USE CRITERION = HIGH-RISK AI
- Sector criterion: the AI application is used in a sector where significant risks can be expected (e.g. healthcare, energy, transport).
- Use criterion: the AI application is used in such a manner that significant risks are likely to arise (e.g. injury, death, material or immaterial damage).
The two criteria would be cumulative. However, some uses of AI may be considered high-risk as such. The AI White Paper mentions (i) intrusive surveillance technologies, in particular remote biometric identification (e.g. facial recognition), and (ii) situations impacting workers' rights and recruitment processes. It also leaves the door open to adding other applications, for instance those affecting consumer rights.
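To illustrate how the cumulative test could operate in practice, here is a minimal sketch in Python, purely for illustration: the sector list, the per-se high-risk uses and the function itself are assumptions made for this example, not lists or mechanisms proposed in the White Paper.

# Illustrative sketch only: the White Paper proposes no such code or lists;
# the sectors and per-se high-risk uses below are assumed examples.

HIGH_RISK_SECTORS = {"healthcare", "energy", "transport"}  # sector criterion (assumed examples)

PER_SE_HIGH_RISK_USES = {  # uses deemed high-risk regardless of sector (assumed labels)
    "remote_biometric_identification",
    "recruitment",
}

def is_high_risk(sector: str, use: str, significant_risk_likely: bool) -> bool:
    """Classify an AI application under the proposed two-step logic:
    both the sector and the use criteria must be met (cumulative),
    unless the use is considered high-risk as such."""
    if use in PER_SE_HIGH_RISK_USES:
        return True
    return sector in HIGH_RISK_SECTORS and significant_risk_likely

# A medical triage tool whose errors could cause injury: both criteria met.
assert is_high_risk("healthcare", "triage", significant_risk_likely=True)
# A retail spam filter: neither criterion is met.
assert not is_high_risk("retail", "spam_filter", significant_risk_likely=False)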
Where AI is classified as high-risk, specific requirements would apply. These could concern training data, data and record keeping, transparency, robustness and accuracy, human oversight, as well as further requirements for particular applications (e.g. those used for remote biometric identification).
Transparency: People would have to be informed when they are in contact with an AI system, and clear information would have to be provided on the capabilities and limitations of AI systems.
Human involvement: The extent to which a human needs to remain involved is a major consideration, and it is at the heart of concerns when it comes to AI. The AI White Paper runs through different manifestations of human oversight. In certain situations, there may be a need for prior human review and validation, whereas in other cases it may be sufficient for the control to take place afterwards. The driverless car is used to illustrate further manifestations, such as the ability to (re)act in real time or to deactivate the technology.
The allocation of responsibilities
Future legislation will need to address the allocation of the different obligations between the different operators.
The AI White Paper raises the sensitive question of the geographic scope of the new AI regulatory framework: to be effective, the new requirements need to apply to all operators providing relevant products or services in the EU, regardless of where they are established.
A prior conformity assessment would ensure compliance with the requirements for high-risk applications (e.g. through testing, inspection and certification procedures, or checks of algorithms and data sets). One specific issue here is that AI systems may evolve and learn: assessments may therefore need to be repeated over time.
With respect to applications that are not high-risk, a voluntary labelling scheme could be implemented, under which compliant operators could obtain a quality label. Once an operator signs up, however, it would need to comply with the relevant requirements.
Product safety and liability
The AI Report accompanying the AI White Paper identifies some of the new challenges that AI, the Internet of Things and robotics give rise to as regards product safety and liability. Many reflect previous developments and discussions, in particular those of the 2019 report of the Commission's expert group on Liability and New Technologies – New Technologies Formation.
Regarding product safety, the existing legislation would need to be adapted to address gaps.
According to the AI Report, issues arising from the autonomous behaviour of AI may require specific rules. For instance, changes during the product lifecycle may warrant new risk assessment procedures, reinforced obligations for manufacturers in terms of instructions and warnings to users, and human oversight requirements.
Requirements to address the risks of faulty data, and to maintain data quality, may be needed.
The "opacity" of systems based on algorithms gives rise to risks that EU product safety legislation does not address. This may call for specific requirements in terms of transparency of algorithms, human oversight and lack of bias.
There are questions on the extent to which stand-alone software is currently covered by existing EU product safety legislation. Additional cooperation and information requirements between the different operators in the supply chain and users may be required.
Adjustments to the existing liability framework, both as regards the Product Liability Directive and national liability regimes, also need to be considered.
The notion of "product" under the Product Liability Directive may need to be revised to deal with the complexities of emerging technologies and defects resulting from software.
Like the Expert Group Report, the AI Report discusses the need to facilitate or adapt the burden of proof due to complexities regarding AI applications.
As mentioned above, the AI Report considers the changing nature of AI systems, for instance as a result of software updates or machine learning functionalities, and hence new risks that would not exist when the "product" is placed on the market.
The Commission is seeking views on two key issues also raised in the Expert Group Report, i.e. (i) the need to provide a specific, strict liability regime for the operation of AI applications with a higher risk profile, and (ii) the need for specific insurance.
What's next?
The European Commission is soliciting comments on the AI White Paper, the AI Report and the proposals they contain through an open public consultation. Comments can be provided until 31 May 2020. This is an opportunity to get involved and to help shape the future framework for AI in Europe.
The AI White Paper also calls for a debate on key societal concerns related to the use of AI, in particular as regards the circumstances in which AI could be used for remote biometric identification purposes.
In a nutshell
- Challenges and gaps are identified with respect to EU product safety legislation and liability regimes.
- Beyond adapting existing rules, new and specific AI legislation may be required.
- What exactly the future EU regulatory framework for AI will look like remains an open question. The AI White Paper and AI Report make various proposals. They are now open for comments via public consultation.
- The new regulatory framework for AI would be risk-based.
- "High-risk" AI applications would be subject to specific requirements, for instance in terms of information to be provided or human oversight.
- What counts as a high-risk application depends on both the sector concerned and the use made of the application. However, some uses may be deemed high-risk whatever the sector. Key examples are intrusive surveillance technologies and uses impacting workers' rights.
- The new AI regulatory framework would apply to all operators providing relevant products / services in the EU, wherever they are established.
- A prior conformity assessment would be implemented. It could include such things as checking algorithms and data sets.
- A voluntary labelling scheme may be set up for applications that are not high-risk.
Other recent developments and actions to come
The AI White Paper is one pillar of the European Commission's new digital strategy for Europe. Another is the European strategy for data, which was also presented on 19 February. Aimed at establishing a single market for data in the EU, it proposes policy measures and investments for the coming five years. Key aspects of this strategy include:
Establishing the appropriate regulatory framework, for instance a legislative framework for the governance of common European data spaces (2020) and a Data Act (2021). This notably aims to facilitate cross-border and cross-sector data use, support business-to-business data sharing and clarify the rules and liability for data use. Data sharing would generally be voluntary, but may in certain (limited) circumstances be made compulsory.
Developing infrastructures and platforms. The strategy includes investing in next-generation data processing infrastructures, data sharing tools, architectures and governance mechanisms. As to the envisaged funding, the Commission could aim to invest EUR 2 billion in the project, with Member States and industry also expected to co-invest. The Commission would also facilitate a cloud services marketplace for EU users from the private and public sectors (by Q4 2022).
Creating European "data spaces" in specific areas. The aim is to ensure that the necessary data, as well as the tools and infrastructures to use and exchange that data, are available in strategic sectors and domains of public interest (e.g. industrial manufacturing, the Green Deal, mobility, health and financial services).
A consultation has been launched on the European strategy for data too, ending on 31 May 2020. This provides an opportunity to contribute to defining the priorities for the European data economy in the coming years, as well as how the proposed measures fit into the current regulatory framework (e.g. GDPR, Regulation on the free flow of non-personal data).
In parallel, there have been various other publications since the beginning of the year, and several actions are ongoing in different sectors and areas in Europe. Below are examples:
ICO gathers feedback on AI and privacy guidance
On the day the AI White Paper was published, the UK data protection authority (ICO) launched a consultation on its draft guidance on the AI auditing framework. In this detailed document, the ICO issues, for the first time, wide-ranging recommendations to manage the privacy risks that uses of AI pose. It notably expects organisations to invest resources in AI risk management proportionate to their use of AI, including robust risk programmes supported by senior management, trained and diverse teams, and clear allocation of roles. Once the consultation closes on 1 April 2020, the ICO will finalise and publish its guidance, and rely on it when auditing organisations' compliance with data protection law.
EASA publishes its roadmap on AI
In February 2020, the European Union Aviation Safety Agency (EASA) issued the first version of its AI roadmap. It highlights key questions raised by AI, discusses what AI is and presents EASA's AI objectives and roadmap. It also assesses the impact of machine learning on the aviation sector, including applications in the context of: (i) aircraft design and operation, not least autonomous flight; (ii) aircraft production and maintenance, including AI-based predictive maintenance; (iii) air traffic management; (iv) drones, urban air mobility (UAM) and U-space, areas which involve high levels of automation and disruptive technologies such as AI and blockchain; (v) safety risk management; (vi) cybersecurity; and (vii) environmental issues.
According to the EASA roadmap, the current regulations provide an open framework for the use of AI and machine learning, but they will need to be adapted. The new Basic Regulation of 4 July 2018 (Regulation (EU) 2018/1139) should facilitate this. EASA will define a common policy for any domain-related regulations.
Public hearing – AI and blockchain
A hearing by the Internal Market and Consumer Protection Committee of the European Parliament, entitled "AI and Blockchain: opportunities and challenges for the Single Market", is planned for 18 March 2020. It will explore the opportunities and challenges that both technologies present for the single market.