
Financial services face up to deepfake risks

Fintech | Cyber Security | Banking & Finance | Artificial Intelligence | 14 October 2024

Deepfake incidents in the financial sector increased within Europe by over 780 per cent last year. Given the threats this technology presents to customer onboarding, fraud detection and systems security, we outline the regulatory landscape and suggested actions for firms.

The growing sophistication and prevalence of deepfake technology raises significant challenges for financial services firms. Alongside interest in innovative and legitimate uses for synthetic content, concern regarding potential misuse is resulting in heightened attention from policymakers and regulators in the United Kingdom and elsewhere. Bad actors are using deepfakes in the financial services context to distort identity verification, compromise monitoring systems and create targeted phishing attacks, enabling fraudulent activity on an unprecedented scale and posing new challenges for firms trying to keep pace with the development of increasingly lifelike deepfakes.

As the gap between technological developments and targeted regulation widens, the question is: how do firms control and respond to the harm caused by deepfakes within the existing UK regulatory framework?

This article explores the risks that deepfakes pose to the financial services sector, relevant UK regulations and approaches firms might take to address these risks.

WHAT ARE DEEPFAKES?

According to Ofcom, the UK's communication services regulator, a deepfake is “audio-visual content that has been generated or manipulated using AI, and that misrepresents someone or something”. Deepfakes can imitate people, objects, places, entities or events and make them falsely appear to be real. They can take the form of video, image, text or sound that is partially or fully generated using artificial intelligence.

Common forms of deepfakes include: face re-enactment, where advanced software is used to manipulate the features of a real person’s face; face generation, where advanced software is employed to create an entirely new image of a face that does not reflect a real person; and speech synthesis, where advanced software creates a model of a real person’s voice. The technology used to generate deepfakes has become increasingly sophisticated, with advancements in voice cloning and facial mapping.

Of course, synthetic content produced by generative AI can also be used for a variety of legitimate commercial purposes. For example, the entertainment and marketing industries can utilise this technology to develop realistic and customised advertising, movies and video games, with considerable saving of cost and time. Within customer support services, generative AI chatbots are already commonly used and synthetic content has the potential to improve user experience further.

However, this cutting-edge technology brings a heightened risk that it will also be exploited to perpetrate fraud, including within the financial services sector. For example, deepfake technology can be used to undermine key protocols designed to protect customers, firms and wider society in relation to the provision and use of financial services, such as Know Your Customer (KYC) and anti-money laundering (AML) processes, fraud detection systems and customer protection services.

According to Deloitte, deepfake incidents in the financial sector increased within Europe by over 780 per cent in 2023, with the UK accounting for 13.5 per cent of total cases. This demonstrates the greater accessibility of the relevant technologies and the widespread associated risk. Deepfakes can target multiple victims at the same time, using fewer resources than some other forms of fraudulent activity. For financial services firms, deepfake content poses a threat at several stages of their operations and service provision.

Onboarding new customers and counterparties

KYC procedures used to prevent financial crime are highly resource-intensive and require responsiveness to dynamic and often complex regulatory landscapes that vary across jurisdictions. To manage this burden, many financial services firms use specialist third-party KYC providers with greater expertise, technology and infrastructure to handle these processes efficiently. However, even when leveraging well-resourced third-party providers, deepfakes make it more challenging than ever to verify the true identity of customers and ensure the reliability of KYC. High-quality deepfake identification documents and false impersonation during video-based KYC procedures are affordable to produce, easily accessible and can successfully pass through various forms of KYC check. For example, 404 Media reported that it was able to get past a third-party KYC measure on a cryptocurrency exchange using a $15 deepfake service provider.
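By way of illustration only, the sketch below shows one way a firm might layer a simple randomised challenge-response step on top of an automated video-based KYC flow, so that a pre-recorded or replayed deepfake clip is less likely to pass unchallenged. The capture callable and the fields it returns are hypothetical placeholders for a firm's own or a vendor's tooling rather than any real product or API, and such a step supplements, rather than replaces, full KYC checks.

```python
import secrets
from typing import Any, Callable, Dict

# Illustrative sketch only. `capture_response` stands in for a firm's own or a
# vendor's video-capture and scoring tooling; the field names and thresholds
# below are hypothetical assumptions, not real library APIs.

CHALLENGE_PHRASES = ("blue harbour seven", "green marble forty", "silver oak twelve")

def video_kyc_challenge(
    capture_response: Callable[[str], Dict[str, Any]],
    liveness_threshold: float = 0.9,
) -> bool:
    """Ask the applicant to repeat a random phrase on camera and score the result.

    A pre-generated or replayed deepfake clip cannot know the phrase in advance,
    so a randomised challenge raises the bar for synthetic submissions; it is a
    supplementary control, not a complete defence on its own.
    """
    phrase = secrets.choice(CHALLENGE_PHRASES)
    response = capture_response(f"Please read aloud: '{phrase}'")

    phrase_ok = phrase.lower() in response.get("audio_text", "").lower()
    live_ok = response.get("liveness_score", 0.0) >= liveness_threshold

    # Fail closed: anything ambiguous should be routed to manual review,
    # never auto-approved.
    return phrase_ok and live_ok
```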

Failure to detect deepfake-generated identities can result in financial institutions inadvertently facilitating activities such as fraud, money laundering and the circumvention of international sanctions. Moreover, non-compliance with KYC and AML regulations can lead to significant penalties.

Client communications and disinformation

Additionally, firms need to be alert to potential deepfake activities in relation to their ongoing communications with clients. For example, many firms use face recognition or short videos to allow clients to approve new payees or make significant transactions. Firms may also use AI-generated communications or images when interacting with their customers (eg, in chat messaging services). Again, failure to detect deepfake-generated ‘customer’ identities, or to detect that real customers are interacting with deepfakes impersonating the firm, can result in the inadvertent facilitation of fraud and other threats to customers and to the firm.
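As a purely illustrative sketch, with hypothetical names and thresholds rather than a description of any firm's actual controls, the snippet below shows a simple policy gate that declines to rely on face or voice recognition alone for higher-risk instructions and instead requires an out-of-band confirmation, such as a code sent to a registered device.

```python
from dataclasses import dataclass

# Illustrative only: the class, fields and thresholds are hypothetical assumptions.
# The point is that a high biometric score is never treated as sufficient by itself
# for new payees or larger amounts, since deepfakes target exactly that channel.

@dataclass
class PaymentInstruction:
    customer_id: str
    amount: float
    new_payee: bool
    biometric_score: float  # confidence from face/voice verification, 0..1

def requires_out_of_band(instruction: PaymentInstruction,
                         amount_limit: float = 1_000.0,
                         biometric_floor: float = 0.95) -> bool:
    """Decide whether to demand a second, out-of-band confirmation."""
    if instruction.new_payee:
        return True                    # new payees always need a second factor
    if instruction.amount >= amount_limit:
        return True                    # larger payments always need a second factor
    return instruction.biometric_score < biometric_floor
```

The design choice here is simply that the biometric channel is treated as one signal among several, never as the sole gate for higher-risk actions.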

Firms may further need to consider whether they have an obligation to educate their customers regarding the potential risks. For example, a customer may instruct their bank to carry out a transaction based on advice that they think they have received from a trusted third party, when in fact the adviser was a deepfake. In 2023, Martin Lewis, a widely-followed UK consumer finance expert, had his image used in an online scam video that appeared to show him giving investment advice. If account holders instructed their bank to make investments based on this advice, the bank would not be able to detect the deepfake but the customers would still be at risk.

Information and system security risks

Deepfakes can also compromise the security measures that financial institutions use to safeguard their customers, systems and data. This could result in a loss of customer confidence and create financial, legal and reputational risk, as deepfakes significantly elevate the threat level of phishing attacks against organisations and their employees. Historically, phishing attempts have been sent to employees of financial services firms through emails, phone calls or websites. Deepfakes can take this a step further, creating realistic audio or video content that impersonates trusted figures, such as colleagues, advisers, company officials and CEOs, heightening the risk of successful phishing attacks.

REGULATING DEEPFAKES

A range of cyber security, data protection and prudential regulations require firms to protect themselves against attacks and to preserve the integrity and confidentiality of their systems and data. Under the UK General Data Protection Regulation (GDPR), for example, organisations are required to maintain “appropriate technical and organisational measures” to protect personal data, taking into account a number of factors (including the state of the art, thereby setting a standard that keeps pace with market development). Similarly, the Financial Conduct Authority and Prudential Regulation Authority regulate cybersecurity in line with their statutory duties, mandating that financial services firms have robust systems and controls in place to manage cyber risks. This includes participation in threat-led penetration testing regimes such as CBEST. Firms may be expected to implement enhanced security measures to protect against the elevated risks caused by deepfakes.

The legal requirements around the world that apply to deepfakes vary depending on the country and the context. Some jurisdictions have laws or regulations that address deepfakes specifically, while others rely on existing laws or general principles to govern the creation, use and control of deepfakes. In most cases, data protection, cyber and financial sector prudential regulations will be among the frameworks that already contain key obligations regarding management of risks arising from deepfakes. A number of jurisdictions are also developing an increasingly focused concept of tech literacy (including AI literacy). This will impact the internal training that firms will be expected to provide as part of managing risks arising from deepfakes and other digital technology.

Currently, the UK’s approach has been to rely on existing legal frameworks to regulate deepfakes, so as to avoid creating legislation that may become quickly outdated as AI technologies advance. In 2023, the UK implemented the Online Safety Act (OSA), which aims to protect adults and children online. The OSA addresses the deployment of deepfakes in a specific context, and its impact on financial services is minimal. Its primary aim is to require online platform providers to identify and remove illegal content, including content harmful to children, from their services. The law criminalises the sharing of deepfake intimate or sexually explicit images.

Aside from the OSA, there is no other targeted deepfake legislation in the UK. Instead, the underlying principle is that conduct which is illegal without the use of synthetic content remains illegal when this technology is used for illegitimate purposes. Looking at financial services regulation in the UK, the FCA has traditionally adopted a ‘technology-neutral’ approach to regulating concerns that may arise as a result of technology or innovation – so AI fraud would be penalised in the same way as any other fraud and a firm’s obligations to its customers would be the same regardless of how services are provided. Firms are expected to have proper governance arrangements appropriate to their business (including accountability of senior managers for products and all relevant risks, including where these arise as a result of AI or cybersecurity failings). Currently the FCA considers that it already has the tools it needs to work with regulated firms to address material risks associated with AI.

It appears, however, that the new Labour Government may take a more pro-regulatory approach to the risks that these new technologies pose. The background notes to the King’s Speech 2024 propose a new Cyber Security and Resilience Bill that will aim, among other things, to protect the digital economy from cyber criminals and attacks. The King’s Speech recognised the need for an “urgent update” to the current regulatory framework within the UK. The (as-yet unpublished) Bill is expected to expand the remit of current regulation to require protection of more digital services and supply chains, likely including in the financial services sector. It may also give regulators a stronger footing to ensure that cyber safety measures are implemented, and impose further incident reporting obligations, the details of which have still to be confirmed. If the Bill reflects the position in the European Union’s Network and Information Security Directive (NIS2), liability for such reporting obligations may extend to senior management. Such regulation may also target deepfakes and their specific harms related to financial services.

ADDRESSING THE RISK OF DEEPFAKES

Though legislation may be slow in its progress, it is important that banks and other financial services firms take a proactive approach in addressing any weaknesses they may have in responding to the threat of deepfakes. To mitigate the risks of deepfakes, firms should adopt a multi-layered approach that combines technology, governance and tech/AI literacy. Organisations can implement a range of measures to protect against risks arising from deepfakes, including technological measures to enhance identity verification protocols within their tools. This may require investment in tailored tools for deepfake detection or working with specialist suppliers.
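The sketch below illustrates, using assumed signal names, weights and thresholds, what such a multi-layered approach might look like in outline: several independent verification signals (for example document analysis, liveness scoring and device or behavioural risk) are combined, and any result falling in an uncertain middle band is escalated to a human reviewer rather than decided automatically.

```python
from typing import Dict

def layered_verification(signals: Dict[str, float],
                         weights: Dict[str, float],
                         approve_at: float = 0.85,
                         reject_at: float = 0.40) -> str:
    """Combine weighted, independent signals into approve / reject / manual_review.

    `signals` and `weights` share keys, e.g. {"document": ..., "liveness": ...};
    all names and thresholds here are illustrative assumptions.
    """
    total_weight = sum(weights.values())
    score = sum(signals[key] * weights[key] for key in weights) / total_weight

    if score >= approve_at:
        return "approve"
    if score <= reject_at:
        return "reject"
    return "manual_review"  # the uncertain middle band always goes to a human

# Example: a strong document score but a weak liveness score lands in manual review.
print(layered_verification(
    signals={"document": 0.95, "liveness": 0.45, "device": 0.80},
    weights={"document": 0.4, "liveness": 0.4, "device": 0.2},
))
```

Keeping a manual-review band, rather than a single pass/fail threshold, reflects the point that no individual detection tool is reliable enough on its own against well-crafted deepfakes.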

At an organisational level, firms should:

  • develop and regularly update incident response plans specifically addressing deepfake scenarios.  
  • ensure that these plans include clear protocols for identifying, reporting and mitigating deepfake incidents.
  • review their training programmes and consider how to incorporate regular training sessions to educate employees about the risks associated with deepfakes and how to recognise potential threats. These sessions should emphasise the importance of verifying the authenticity of communications and transactions, and should encourage the use of reporting protocols for suspicious activity.
  • consider collaboration with industry peers and regulatory bodies to share intelligence and best practices. The EU's Digital Operational Resilience Act (DORA), for example, includes provisions for harmonising information-sharing. Regular internal and supplier audits, as well as simulations of deepfake incidents, can help to ensure the effectiveness of response plans and identify areas for improvement.

The financial services sector faces a complex and evolving threat landscape due to the rise of deepfake technology. While the potential for legitimate and innovative uses of synthetic content is significant, the risks associated with its misuse command attention. As deepfake and AI technologies continue to evolve, it is crucial for firms to prioritise consumer protection. This proactive approach involves staying abreast of technological developments and ensuring they are well-equipped to safeguard their customers as well as maintain trust in an increasingly digital financial landscape. By investing in advanced detection tools, enhancing training programmes and maintaining secure communication channels, financial services firms can better protect themselves and their customers from the sophisticated threats posed by deepfakes. Moreover, potential changes to the regulatory environment mean firms should continue to stay informed and compliant, ensuring that they are well-equipped to navigate the challenges and opportunities presented by this rapidly-advancing technology.

A similar version of this article has been published in www.compliancemonitor.com and www.i-law.com