Insurtech Update: Aye-aye, Eye and AI
The Aye-aye (Daubentonia madagascariensis) is a long-fingered lemur native to, you guessed it, Madagascar – but my voice assistant told me it was a Canadian rock band (called Eye Eye).
You will likely have your own experience of the (current) limitations of artificial intelligence in similarly superficial contexts. However, when complex technology is used in the insurance value chain, deficiencies such as algorithmic bias can have discriminatory and otherwise significant consequences for policyholders, such as higher premiums, refusal of insurance cover, or rejection of claims.
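To make the bias point concrete, the minimal sketch below shows one way a firm might surface such a skew: comparing average premiums and refusal rates across applicant groups, a demographic-parity style check. The dataset, group labels and 10-point tolerance are hypothetical assumptions for illustration only; they are not drawn from any regulatory guidance.

```python
# Illustrative sketch only: the data, groups and tolerance below are
# hypothetical assumptions used to show how a simple bias check works,
# not a reproduction of any regulator's methodology.
from collections import defaultdict

# Hypothetical quote outcomes: (applicant_group, premium_offered, cover_refused)
quotes = [
    ("group_a", 420.0, False), ("group_a", 390.0, False),
    ("group_a", 450.0, False), ("group_a", 980.0, True),
    ("group_b", 610.0, False), ("group_b", 720.0, True),
    ("group_b", 650.0, True),  ("group_b", 590.0, False),
]

premiums = defaultdict(list)
refusals = defaultdict(list)
for group, premium, refused in quotes:
    premiums[group].append(premium)
    refusals[group].append(refused)

# Report average premium and refusal rate per group.
for group in sorted(premiums):
    avg_premium = sum(premiums[group]) / len(premiums[group])
    refusal_rate = sum(refusals[group]) / len(refusals[group])
    print(f"{group}: average premium {avg_premium:.2f}, refusal rate {refusal_rate:.0%}")

# Demographic-parity style check: flag refusal rates that diverge beyond
# a tolerance, which may indicate bias introduced via a proxy variable.
rate_a = sum(refusals["group_a"]) / len(refusals["group_a"])
rate_b = sum(refusals["group_b"]) / len(refusals["group_b"])
if abs(rate_a - rate_b) > 0.10:
    print("Potential disparity: refusal rates differ by more than 10 points.")
```

In practice such a check would run over production quote data and a richer set of fairness metrics, but even a comparison this crude can flag a proxy-variable skew worth investigating.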
In light of the UK government's pro-innovation policy, the PRA, FCA and Bank of England have been examining their roles as regulators in supporting the adoption of AI and machine learning in financial services. The regulators are at a crucial point in developing their respective approaches, currently driven by the (now closed) joint Artificial Intelligence and Machine Learning discussion paper (FCA DP22/4; PRA DP5/22). The discussion paper is intended to help the regulators understand how UK financial services regulation can support the "safe and responsible adoption" of AI and machine learning. The paper discusses three key stages of the AI lifecycle: (i) data (quality, privacy, infrastructure, governance); (ii) model-related risks (development, validation and review); and (iii) governance (accountability, risk mitigation and management, and the role of the Senior Managers and Certification Regime (SM&CR)). All three stages are relevant to regulated firms, but the first two are also relevant to unregulated companies that develop and train AI for deployment by financial services firms.
As the paper points out, there is a wide-ranging domestic and global debate about the regulation of AI and machine learning. A significant contribution to that debate is the open letter published by the Future of Life Institute in March 2023, calling for a six-month pause on the training of AI systems more powerful than GPT-4 and for the joint development and implementation of shared safety protocols for advanced AI, overseen by independent experts. The letter noted that AI systems are already creating and distributing convincing fake imagery and misinformation and show a tendency "toward amplifying entrenched discrimination and biases". It also called on policymakers to accelerate the development of "robust AI governance systems" and was supplemented by a list of policy recommendations, including the urgent adoption of a coherent liability framework for developers and downstream deployers of certain AI systems that cause harm.
Whatever new policies and legislation are imposed at a macro level, insurers and intermediaries will inevitably face a heavier compliance burden than companies in the AI supply chain that fall outside financial services regulation. This could create tensions between counterparties, resembling those observed during the implementation of the outsourcing and third-party risk management rules and the Consumer Duty, which insurers and intermediaries will have to navigate when negotiating commercial arrangements with AI providers.
While waiting for the regulators' response to the joint discussion paper, firms evaluating strategic projects that involve the development or adoption of AI and machine learning technology can consult a wealth of material from the UK government, Bank of England, FCA and PRA, much of it signposted in the discussion paper itself.
If you would like to speak to one of our experts here at Clifford Chance about a proposed project, please contact Ashley Prebble and Emma Eaton.