Financial services can light the way for context-specific AI regulation
Given the rapid uptake of artificial intelligence (AI) technologies, including machine learning (ML), in the financial services sector, the sector and its regulators have enjoyed a head start in exploring and seeking to navigate the issues that arise. The sector is therefore well positioned to demonstrate how a pro-innovation, context-specific and risk-based approach to regulating AI can succeed.
From the standpoint of the financial services sector
In October 2022, the Bank of England (BoE) and the Financial Conduct Authority (FCA) published the results of a survey (the Survey) on the state of AI adoption in the financial services sector and on firms' hopes and expectations for the future. The Survey findings suggest:
- Clearer regulation would help: Almost half of respondents said that certain financial regulation rules constrain ML deployment, with responses suggesting that a lack of clarity in those rules is itself a constraint.
- The sector sees a clear link between ML and its benefits: Firms consider the main benefits of ML to be enhanced data and analytics capabilities, increased operational efficiency, improved combating of fraud and money laundering, and enhanced risk management and controls.
- Adoption is picking up pace: 72% of the respondents reported using or developing ML applications, up from 67% in the 2019 survey. Banks and insurers have the highest number of ML applications.
- Insurers are particularly advanced in their ML implementation efforts: For insurance firms, the most advanced deployment of ML is within their core business – supporting either pricing or underwriting. In banking, asset management and elsewhere, notable uses of ML include data management, marketing and cross-selling, fraud prevention and anti-money laundering, trading and compliance.
- Firms don't see current use cases as overly risky: Overall, respondents consider current levels of risk to be low to medium across all risk categories (i.e. data, models, governance, consumer and regulatory). Of greatest concern to respondents were: (i) biases in data, algorithms and outcomes (52%); (ii) data quality and structure issues (43%); and (iii) lack of explainability, in terms of both the model and the outcomes (36%).
Can we build on existing financial regulation?
Unlike the EU, whose legislative proposal stratifies regulation based on harms linked to AI technologies and their applications, the UK appears likely to emphasise context-specific regulation and reliance on existing laws, including those that apply to the financial services sector. UK financial regulators have taken an interest in AI as it "may bring important benefits to consumers, financial services firms, financial markets, and the wider economy" but, conversely, "can pose novel challenges, as well as create new risks or amplify existing ones".
In October 2022, alongside the Survey results, the BoE and the FCA published Discussion Paper 5/22 (the Discussion Paper), inviting comment from stakeholders on the future regulation of AI in the financial services sector. The working assumption behind the Discussion Paper was that there may be issues unique to the financial services sector. While the adoption of AI in the sector clearly changes the risk profile (and sources of risk), the Discussion Paper focuses on the existing regulatory framework, much of which remains relevant against a backdrop of continuing AI adoption. For example:
- Consumer protection: As firms employ AI to understand consumers and their preferences at a more granular level, some may feel inclined to "identify and exploit consumer behavioural biases and characteristics of vulnerability—from exploiting inertia, to harmful price discrimination, to exploiting actual characteristics of vulnerability". 'Personalisation' may mean that consumers labelled 'high-risk' (including on the basis of protected characteristics such as age, disability and race) are unable to access loans, afford insurance premiums or qualify for certain wealth products. Existing rules and guidance include the FCA's Principles for Businesses, Policy Statement 22/9 (which introduces the so-called 'Consumer Duty'), the Consumer Protection from Unfair Trading Regulations 2008 (CPUTRs), PRIN 2A.4.25 R, the Vulnerable Customer Guidance, the Equality Act 2010 and various guidance from the UK Information Commissioner's Office (ICO) on the use of personal data in AI.
- Competition: The Discussion Paper notes that AI can improve competition by "improving consumers' ability to assess, access, and act on information". It also acknowledges that firms' use of AI may have anti-competitive effects. In particular, it points to research demonstrating that AI models can detect and respond to price changes from rival firms in a potentially collusive manner (a toy sketch of this dynamic appears after this list), and notes that once expensive AI technology becomes a baseline requirement to compete, barriers to entry will be raised "to a level that limits market entry with potentially harmful effects on competition". Existing regulation is contained within the Competition Act 1998 and the Enterprise Act 2002.
- Safety and soundness: The use of AI may amplify existing prudential risks in ways that are difficult to predict, quantify and understand. The UK's global systemically important banks are expected to adhere to the Basel Committee on Banking Supervision's (BCBS) 'Principles for effective risk data aggregation and risk reporting' (BCBS 239). The PRA Rulebook for Solvency II Firms generally stipulates that "[f]irms must ensure that the data used in the calculation of their technical provisions is appropriate, complete and accurate". Model risk management is currently addressed by PRA SS3/18 'Model risk management principles for stress testing' and the BCBS 'Corporate governance principles for banks'.
- Financial stability and the integrity of markets: AI may have both positive and negative impacts on the financial system as a whole. Better decision-making may contribute to a more efficient financial system, but AI may "amplify existing risks to financial stability through various transmission channels", such as where multiple firms employ similar models to buy and sell debt or equity in the market, leading to herding effects. This may heighten the peaks and troughs of price movements (i.e. amplify procyclical behaviour), and such movements may then be picked up again by the algorithms, creating harmful feedback loops (the second sketch after this list illustrates the mechanism). Operational resilience, outsourcing and third-party risk management have been the subject of significant supervisory attention in recent years, most recently with the release of DP3/22 'Operational resilience: Critical third parties to the UK financial sector'.
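To make the collusion point concrete, below is a minimal, purely illustrative sketch, not taken from the Discussion Paper or the research it cites, of how two pricing algorithms can ratchet prices toward a monopoly level without any agreement between the firms. The reaction rules, cost and price figures are all assumptions.

```python
# Illustrative only: assumed reaction rules, costs and prices.
COST, MONOPOLY_PRICE, STEP = 10.0, 20.0, 0.25

def respond(own_price: float, rival_price: float) -> float:
    """Match a cheaper rival at once (so undercutting never wins share
    for long); otherwise nudge the price slightly above the rival's."""
    if rival_price < own_price:
        return max(COST, rival_price)               # punish cuts instantly
    return min(MONOPOLY_PRICE, rival_price + STEP)  # follow rises upward

a, b = 12.0, 11.0  # arbitrary starting prices above cost
for _ in range(30):
    a = respond(a, rival_price=b)  # the two algorithms move alternately,
    b = respond(b, rival_price=a)  # each observing the other's last price

print(f"prices after 30 rounds: a={a:.2f}, b={b:.2f}")  # both reach 20.00
```

Because each algorithm matches cuts immediately, undercutting gains nothing, and the only stable outcome is the ceiling price: coordination emerges from the reaction rules alone, with no communication between the firms.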
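The herding and feedback-loop concern can be caricatured in the same way. The toy model below, again an illustration with assumed constants rather than anything drawn from the Discussion Paper, compares a market where most algorithms anchor to a fundamental value against one where every firm runs the same momentum rule and so trades on the price moves the group itself creates.

```python
import random

def simulate(momentum_share: float, steps: int = 250, seed: int = 7) -> float:
    """Peak-to-trough price swing in a toy market of 50 algorithmic firms.

    `momentum_share` of the firms run one identical momentum rule (buy
    after a rise, sell after a fall); the rest trade back toward an
    assumed fundamental value. All constants are illustrative.
    """
    rng = random.Random(seed)
    n_firms, impact, fundamental = 50, 0.02, 100.0
    price, last_move = fundamental, 0.0
    lo = hi = price
    for _ in range(steps):
        momentum = 1 if last_move > 0 else (-1 if last_move < 0 else 0)
        value = 1 if price < fundamental else -1
        net_orders = (momentum_share * n_firms * momentum
                      + (1 - momentum_share) * n_firms * value)
        last_move = impact * net_orders + rng.gauss(0, 0.2)  # impact + news
        price += last_move
        lo, hi = min(lo, price), max(hi, price)
    return hi - lo

print(f"mixed strategies (20% momentum):  swing = {simulate(0.2):.1f}")
print(f"identical models (100% momentum): swing = {simulate(1.0):.1f}")
```

With identical models, each price move triggers orders that extend the move, which triggers further orders: the feedback loop produces a swing far larger than in the mixed market, even though the 'news' hitting both markets is the same.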
What next?
Given the commercial demands on financial services firms, the sector may need to continue deploying ML across a range of core and non-core functions. As familiarity with the technology grows, overall levels of risk may decrease, except, as the Survey noted, in relation to third-party data, ethics and model complexity.
With the various risks in mind, firms should:
- define clear objectives when sourcing new enterprise technology, including as to how it will interface with current and future ML applications;
- recruit and develop ML-relevant skillsets and expertise within the organisation;
- ensure data governance policies and frameworks remain up-to-date and fit-for-purpose as they experiment with new applications and use cases;
- conduct data protection impact assessments on all new ML projects involving personal data, including in respect of training and testing models;
- ensure appropriate human involvement in decisions that are supported by the use of ML;
- scrutinise ML outsourcing agreements to ensure that appropriate contractual protections are in place to allocate risk; and
- continue to scan for new regulatory developments and guidance in this developing space.
This article first appeared on TechUK as part of their #AIWeek2023