Navigating Global AI Regulations: Implications for Insurers in a Changing Landscape
On 30 October 2023, President Biden issued a landmark and sweeping Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The Executive Order is, according to the White House Deputy Chief of Staff, the strongest set of actions any government in the world has ever taken on AI safety and security. On the same day, the G7 agreed the International Guiding Principles on Artificial Intelligence, while the global AI Safety Summit took place in the UK on 1 and 2 November. As AI continues to advance, lawmakers are adapting regulations to address the perceived shortcomings of existing regulatory frameworks in dealing with the unique challenges posed by AI. The potential scope for adaptation is therefore significant, and diverging views and approaches among lawmakers and regulators could produce very different outcomes for firms and consumers. This blog summarises the main AI-specific laws and proposals across five jurisdictions that will affect insurers.
United Kingdom
The UK government seeks to position itself as a global leader in AI regulation. It hosted the first major global summit on AI safety at Bletchley Park last week and has launched an expert taskforce to support the adoption of the next generation of AI. Together with its regulatory blueprint, described below, these initiatives are intended to show leadership while collaborating at an international level to tackle the challenges that AI presents.
The UK government set out its “pro-innovation approach to AI regulation” in its AI White Paper, published in March this year. The government envisages a deliberately flexible, principles-based framework which harmonises regulation, with responsible AI innovation at its core. Given that AI use cases can cross multiple regulatory regimes, rather than introducing significant new legislation, the UK’s approach, for now, is to regulate AI systems by leveraging existing regimes and intervening in what it describes as a "proportionate way to address regulatory uncertainty and gaps". The government is introducing five cross-sectoral principles for the development and application of AI:
1. Safety, security and robustness - The White Paper sets out that new measures may include regulators requiring regular testing of, or due diligence on, the functioning, resilience and security of an AI system. Under this principle, regulators should also consider technical standards addressing safety, robustness and security to benchmark the safe and robust performance of AI systems.
2. Appropriate transparency and 'explainability' - Parties, including regulators, should have sufficient information about an AI system and how it reaches its decisions.
3. Fairness - AI systems should not discriminate.
4. Accountability and governance - New measures may include effective oversight with clear lines of accountability across the AI life cycle. Regulators should set clear expectations for regulatory compliance by actors across the AI supply chain.
5. Contestability and redress - Affected parties should be able to contest an AI decision and seek redress.
The intention is that regulators, including the FCA and PRA, will publish guidance on how the cross-sectoral principles apply to firms within their regulatory remit by the end of March 2024. If regulators fail to implement the principles effectively using their existing powers and resources, the UK Government anticipates introducing a statutory duty on regulators to have due regard to these principles.
In light of the UK government's pro-innovation stance, the PRA, FCA, and Bank of England are currently evaluating their regulatory roles in the context of integrating AI and machine learning within the financial services sector. They have published a Feedback Statement (FS2/23) on their joint discussion paper, "Artificial Intelligence and Machine Learning" (DP22/4; DP5/22). While the Feedback Statement does not indicate the direction of policy proposals, it is notable that most respondents highlighted areas of data regulation, in particular, that are not sufficient to identify, manage, monitor, and control the risks associated with AI models. The feedback generally called for greater coordination and harmonisation among sectoral regulators, including alignment on the management of protected characteristics as well as data definitions and taxonomies.
The UK government is also seeking to reform the UK General Data Protection Regulation (GDPR). In contrast with the approach in the White Paper, these reforms would place obligations on firms themselves, including rules on the circumstances in which exclusively automated decisions, such as profiling, are permitted. Notably, compliance with the Equality Act or EU non-discrimination laws will not necessarily ensure compliance with GDPR requirements. As insurers increasingly rely on AI algorithms in underwriting and pricing, ensuring fairness, particularly for customers with vulnerable characteristics, is a critical consideration in policy offerings and claims processing decisions.
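Purely by way of illustration, the sketch below shows one way a firm might operationalise this kind of rule by routing exclusively automated underwriting decisions for human review. All names, fields and conditions are hypothetical simplifications invented for this post, not drawn from the GDPR reforms or any regulator's guidance.

```python
from dataclasses import dataclass

# Hypothetical routing logic for automated underwriting decisions.
# The conditions under which an exclusively automated decision is
# permitted are simplified placeholders, not legal advice.

@dataclass
class UnderwritingDecision:
    applicant_id: str
    premium_quote: float
    fully_automated: bool              # no human was involved in the decision
    has_explicit_consent: bool         # simplified stand-in for a lawful basis
    uses_special_category_data: bool   # e.g. health data

def requires_human_review(decision: UnderwritingDecision) -> bool:
    """Return True if the decision should be referred to a human reviewer."""
    if not decision.fully_automated:
        return False  # a human was already meaningfully involved
    if decision.uses_special_category_data:
        return True   # stricter conditions apply, so escalate by default
    # Exclusively automated decisions may be permitted only in limited
    # circumstances (reduced here to explicit consent for illustration).
    return not decision.has_explicit_consent

decision = UnderwritingDecision("A-123", 480.0, True, False, True)
if requires_human_review(decision):
    print(f"Refer applicant {decision.applicant_id} for human review")
```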
European Union
In the EU, the AI Act is due to be agreed by the end of the year, pending approval by the EU Council and Parliament. Negotiations are still ongoing between the EU institutions, with the next trilogue scheduled for 6 December 2023. In contrast with the UK White Paper, the Act introduces a comprehensive risk-based regulatory framework which will impose obligations on firms with respect to three classes of AI systems: (i) prohibited AI systems; (ii) 'high-risk' AI systems; and (iii) AI systems subject to transparency requirements. Penalties for non-compliance could range from €500,000 to €40 million, or from 1% to 7% of global annual turnover, depending on the severity of the infringement.
Prohibited AI Systems - AI applications falling under this category are, in principle, outright prohibited. It encompasses AI applications which are deemed particularly harmful and which infringe upon individuals' fundamental rights and dignity. Insurers should be aware that this category includes AI systems which provide social scoring based on customer profiles or use real-time remote biometric identification. The list of prohibited AI practices, including the exact scope of the specific prohibitions mentioned here, is amongst the critical topics in the ongoing negotiations.
High-Risk AI Systems - Firms using AI systems identified as posing significant potential risks must adhere to a comprehensive set of requirements. These include conformity assessments, robust risk and quality management, stringent data governance, thorough documentation, registration, transparency measures, human oversight, cybersecurity safeguards, and potentially 'fundamental rights impact assessments'. The two broad categories of AI systems deemed high-risk are: (i) AI systems that are products or safety components of products covered by specific sectoral legislation and subject to a third-party conformity assessment; and (ii) 'stand-alone' AI systems listed in Annex III. The latter category encompasses AI systems which use biometric identification and categorisation of natural persons or relate to the enjoyment of essential private services and public services and benefits. The vast majority of the AI Act's requirements apply to such high-risk AI systems. Notably, under Annex III, certain AI systems integrated into the process of providing life and health insurance will automatically be considered high-risk. It is anticipated that the European Commission will establish and maintain a publicly accessible database where providers will be required to register details about their high-risk AI systems.
AI Systems Subject to Transparency Requirements - Lower-risk AI applications falling within this category will nevertheless be subject to specific transparency rules. For instance, users engaging with AI systems must be made aware that they are interacting with AI, allowing them to make informed decisions about continued use.
The AI Act is far-reaching and sets out further rules relating to market monitoring, surveillance, regulatory sandboxes, governance, and registration on a dedicated database, which will impact a wide variety of sectors. Insurers and intermediary service providers who use AI, which, according to EIOPA’s thematic review on the use of Big Data Analytics in motor and health insurance, is a considerable portion of the market, will be subject to these regulations. That thematic review found that 31% of the participating European insurance firms were already using AI across the insurance value chain as early as 2018, with a further 24% at a “proof of concept” stage. The transformative role of AI in the insurance value chain was also highlighted by EIOPA’s Stakeholder Group on Digital Ethics in 2021, with example use cases identified in pricing and underwriting, claims management and distribution, amongst others. As the AI Act envisages harmonisation with the output of the European Standardisation Organisations, the standards and guidelines developed specifically in relation to the insurance sector will influence how certain requirements are implemented in practice.
The EU will also introduce the AI Liability Directive, aimed at harmonising EU Member State rules on compensation for damage caused by AI systems. The Directive applies to non-contractual, fault-based civil liability claims for damage caused by AI systems. It seeks to facilitate access to relevant information for substantiating claims and establishes a presumption of causality between the defendant's fault and the AI system's output. The Directive will go through a review process which will consider no-fault (strict) liability for claims against the operators of certain AI systems, as well as the need for specific insurance-related rules. In parallel, the EU is revising the 1985 Product Liability Directive to address technological developments, including AI.
United States
The Executive Order on Artificial Intelligence referred to above introduces a broad set of reforms divided into eight areas, with the 'safety and security' area being the most relevant to insurers at present. The Executive Order envisages the creation of an AI Safety and Security Board by the Department of Homeland Security. It builds upon the White House Blueprint for an AI Bill of Rights, which outlines five essential principles to guide the development, use, and deployment of automated systems, with a primary focus on safeguarding the rights and well-being of citizens. The principles, which share similarities with the UK proposals, include ensuring the safety and effectiveness of AI systems, protecting against algorithmic discrimination, upholding data privacy, providing adequate notice and explanations to users, and offering human alternatives, with the ability to opt out where appropriate. The Appendix to the AI Bill of Rights notes that health insurance technologies which support medical or insurance health risk assessments, health insurance costs and underwriting algorithms should consider the AI Bill of Rights when deploying AI systems.
In the US, the two other main federal-level AI regulations in the pipeline are the Algorithmic Accountability Act of 2022 and the AI Disclosure Act of 2023, the latter of which applies to the use of generative artificial intelligence.
Algorithmic Accountability Act - Similar to the UK White Paper, the proposals empower the Federal Trade Commission (FTC) to develop regulations and provide guidance on performing impact assessments of “automated decision systems” used by covered entities (which include insurers) to make “critical decisions” affecting a consumer’s life. Covered entities must meet certain size or data-volume thresholds. However, the proposals go further than the White Paper and would mandate transparency and annual reporting by covered entities, as well as obligations to attempt to eliminate or mitigate “likely material negative impacts” of such augmented decision processes.
AI Disclosure Act - The primary focus of the AI Disclosure Act is to enhance transparency. The proposed legislation mandates that deployers and developers, which could include insurance companies using generative AI, include a disclaimer stating that the output in question has been generated by artificial intelligence. The FTC would be empowered to impose penalties under the Federal Trade Commission Act for any violations of this requirement.
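In practice, the core control could be as simple as labelling generated output at the point of delivery. The snippet below is a minimal, hypothetical sketch; the precise disclaimer wording and placement would be determined by the final legislation and any FTC rules.

```python
# Hypothetical disclosure label; the statutory wording may differ.
AI_DISCLOSURE = "Disclaimer: this content was generated by artificial intelligence."

def deliver_generated_content(text: str) -> str:
    """Append an AI-generated disclosure before content reaches the consumer."""
    return f"{text}\n\n{AI_DISCLOSURE}"

print(deliver_generated_content("Your policy renewal summary ..."))
```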
Beyond the federal level, several state-level AI regulations are already in effect and applicable to insurers. For instance, Colorado's Senate Bill 21-169, known as "Protecting Consumers from Unfair Discrimination in Insurance Practices," prohibits insurers from using external consumer data or information sources in AI/predictive models in a way that results in unfair discrimination against protected classes in insurance practices. The legislation empowers regulators to collaborate with stakeholders in developing rules for testing insurers' 'big data' systems, ensuring they do not lead to unfair discrimination and establishing mechanisms for addressing any such discriminatory impact.
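To give a flavour of the kind of quantitative testing such rules contemplate, the sketch below computes a simple "four-fifths"-style adverse impact ratio across groups, a common heuristic in US discrimination testing. The data, group labels and 0.8 threshold are hypothetical placeholders; Colorado's actual testing requirements are being developed through rulemaking.

```python
from collections import defaultdict

# Hypothetical model outcomes: (protected_class_group, approved?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Tally approvals and totals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in outcomes:
    counts[group][0] += int(approved)
    counts[group][1] += 1
rates = {g: a / t for g, (a, t) in counts.items()}

# Adverse impact ratio: each group's approval rate vs the most favoured group's.
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # illustrative four-fifths threshold
    print(f"{group}: approval={rate:.0%}, impact ratio={ratio:.2f} [{flag}]")
```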
Similarly, New Jersey's Bill No. S1402, not yet enacted, would impose restrictions on the use of Automated Decision Systems (ADSs) by insurance companies with the aim of preventing discriminatory practices against protected classes. The bill covers areas like loan terms, insurance conditions, and healthcare services, seeking to rectify instances of disproportionate discrimination within these domains.
China
The most recent AI-specific regulation to be passed in China is the Provisional Administrative Measures for Generative AI Services (GAI Measures), published on 10 July 2023 and effective from 15 August 2023. The GAI Measures have a broad scope, applying to any person providing services to the public within China using generative AI. In line with the theme across jurisdictions, the GAI Measures mandate stringent steps to prevent algorithmic discrimination, prohibiting the generation of discriminatory content. Additionally, the measures clarify how privacy and intellectual property protections apply to generative AI products. Certain entities offering generative AI services are required to undergo security assessments and make filings with the cyberspace administration authorities pursuant to the relevant rules.
Singapore
Singapore has adopted a different approach to AI regulation, focusing on issuing AI-friendly best practice principles and guidance specifically tailored for financial institutions.
The Monetary Authority of Singapore (MAS) has issued comprehensive white papers outlining methodologies that can be employed when implementing certain principles for responsible AI use in Singapore’s financial sector. These principles, collectively referred to as the "FEAT Principles" (Fairness, Ethics, Accountability, and Transparency), provide guidance on various aspects of AI implementation to (i) define fairness objectives and mitigate unintentional bias in AI, (ii) enable financial institutions to quantifiably measure ethical practices and (iii) assist firms in determining the level of internal and external transparency required to explain and interpret machine learning model predictions.
These methodologies were applied to various financial scenarios, including insurance predictive underwriting and insurance fraud detection, to ensure their practical relevance and effectiveness. MAS's approach therefore signals that insurers should expect a significant compliance burden. In June 2023, MAS announced the rollout of its Veritas Toolkit 2.0, which is aimed at helping financial institutions carry out the assessment methodologies under the FEAT Principles.
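As a rough illustration of the transparency limb of FEAT, the sketch below uses scikit-learn's permutation importance to quantify which features drive a model's predictions, the sort of measure a firm might report internally to explain a machine learning model. This is a generic explainability technique applied to synthetic data, not the Veritas methodology itself; all feature names are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic underwriting-style features: age, BMI, prior claims (placeholders).
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: the drop in accuracy when each feature is shuffled,
# a model-agnostic gauge of how much each input drives predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(["age", "bmi", "prior_claims"], result.importances_mean):
    print(f"{name}: importance={imp:.3f}")
```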
On 18 July 2023, Singapore’s Personal Data Protection Commission also issued its Proposed Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems for public consultation. These guidelines, while not legally binding and lacking penalties for non-compliance, are designed to clarify how Singapore's Personal Data Protection Act applies when organisations develop or deploy AI to make decisions, recommendations or predictions.
Conclusion
Several common threads emerge across these jurisdictions. The emphasis on consumer protection, governance, prevention of discrimination, and transparency underscores the regulatory focus on responsible AI adoption. However, divergence in implementation and enforcement mechanisms, liability apportionment, penalties and definitions of high-risk AI systems is likely. While it remains to be seen how these AI regulations will evolve, it is clear that insurers and other regulated firms in the sector will need to observe legal and regulatory developments closely and assess the impact and risks of AI systems on their operations and across their value chains.