Clifford Chance

Fintech

Talking Tech

Bot, Bank & Beyond – Clifford Chance's view on AI in finance

An extract from: Citi GPS: AI IN FINANCE: Bot, Bank & Beyond

Artificial Intelligence Fintech 26 June 2024

As AI-powered agents, bots and beyond become increasingly prevalent, how will money and finance change? How will the underlying concepts and structures of finance be reshaped? In a bot-to-bot world, where machines transact with minimal human intervention, what does the world of money look like? In its latest research report, AI in Finance – Bot, Bank & Beyond, Citi Global Perspectives & Solutions (Citi GPS) includes several expert interviews, including the following on AI regulation with Devika Kornbacher, Partner and Co-Chair of the Global Tech Group, and Dessi Savova, Partner and Head of the Continental Europe Tech Group.

Devika and Dessi previously spoke to Citi GPS about US and EU AI regulation in this video interview from September 2023. 

Q: What approach is the US taking to regulate AI for financial services?

In the US, several agencies are seeking to regulate the use of AI in finance. Some are exploring the scope of existing regulations, while others are drafting new regulations and guidelines.

In 2021, five distinct financial regulatory bodies began exploring AI applications within financial institutions: the Consumer Financial Protection Bureau (CFPB), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), the Federal Trade Commission (FTC) and the Office of the Comptroller of the Currency (OCC). Despite this initial interest, few regulatory policies have been formally announced to date.

Several regulatory bodies already have some form of regulatory framework governing the use of AI in finance. For instance, FTC Chair Lina M. Khan has explained that new regulations are unnecessary for firms under the FTC's jurisdiction, as existing regulations already cover the governance of unfair practices, including those resulting from the use of AI (e.g., unauthorized use of customer data for AI model training).

Likewise, the New York Department of Financial Services (NYDFS) has underscored that prevailing regulations governing fair, equitable, transparent and resilient financial systems inherently include provisions for emerging technologies such as AI.

Meanwhile, regulatory bodies lacking explicit coverage of AI technologies are exploring potential parameters for forthcoming regulations. In April 2023, the US Securities and Exchange Commission (SEC) announced general ethical rules for financial services firms offering investment advisory services, stipulating the need for consistent and persistent training, governance, and oversight.

Q: US regulators adopted a regulation-by-enforcement approach for digital assets. Can we anticipate a similar stance for emerging technologies like AI?

Indeed, regulators are employing a regulation-by-enforcement approach for AI. For instance, FTC Chair Lina M. Khan's approach emphasizes careful handling of datasets, especially those containing personal or biometric data, to ensure they have been adequately checked for consent before use in AI tools.

The SEC recently handed down its first fines for "AI-washing" against two investment advisers that the regulator said exaggerated their use of AI to attract investors. SEC Chair Gary Gensler stated in a speech in March 2024 that "everyone may be talking about AI, but when it comes to investment advisers, broker dealers and public companies, they should make sure what they say to investors is true."

In our view, we are seeing enforcement actions that are instructive for what we should and should not be doing in this space.

Q: Can you elaborate on the EU AI Act? Does it apply to financial services?

The EU AI Act is the first comprehensive legal framework for AI, and it has extra-territorial reach, i.e., it applies not only to entities headquartered in the EU but, in many cases, to entities across the world. The framework seeks to encompass all facets of the AI value chain – development, utilization, importation and distribution – with a few exceptions. The regulation introduces a risk-based classification, with limited adjustments for entities relying on existing sectoral procedures, such as credit institutions subject to comprehensive regulatory frameworks. These adjustments are not exemptions, as they do not absolve entities from regulatory scrutiny; rather, they entail differentiated compliance requirements.

The act distinguishes between minimal and unacceptable risks, prohibiting practices such as remote or real-time biometric identification systems, manipulative practices, and those exploiting vulnerabilities. It covers a broad spectrum of applications – credit assessment, HR management and recruiting, and insurance underwriting, among others – imposing extensive obligations and restructuring mandates on entities. It also addresses specific risks associated with AI applications such as chatbots, incorporating strict transparency requirements and obligations for data usage.

Amendments introduced by the European Parliament have further expanded the regulatory scope to encompass GenAI and foundation models with specific obligations and requirements that need to be complied with. These developments underscore the imperative for businesses to adhere to evolving regulatory standards, characterized by heightened emphasis on explicability, transparency, human-centricity, ethical considerations, and privacy protection.

Q: Any other initiatives, voluntary measures, standards, guidelines applicable in the US/EU that firms should pay attention to, albeit not legally binding?

In the US, as an alternative to direct comprehensive federal AI regulation, the Biden Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, implementing guidance from agencies and various initiatives form the cornerstone of the federal AI regulatory landscape. For example, the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework 1.0 released in January 2023 remains a popular resource for those designing an AI governance program and, in response to the Biden AI EO, the United States Patent and Trademark Office has issued inventorship guidance for AI-assisted inventions.

In the EU, the AI Act's entry into force is now imminent, but its full application will still take several months. Meanwhile, the European Commission and other agencies have introduced voluntary measures to promote early adoption of key principles, such as the EU AI Pact put in place by the European Commission. Enforcement action in Europe has also commenced, leveraging existing regulations such as the General Data Protection Regulation (GDPR) to address AI-related concerns.

Q: Can you highlight key regulatory developments on AI across the world?

China has already implemented AI-related regulations, unique to its governmental structure, which has global implications like those of the EU’s proposed AI Act. While some exceptions have been made for legacy systems, the core principles regarding accountability and explainability are already in effect.

From an intellectual property rights perspective, accountability and protection are critical. In the realm of AI, however, the landscape remains uncertain. While the US has stated that AI output cannot be protected, other jurisdictions, such as South Korea, have adopted a more nuanced stance. Regulatory efforts are also underway in other countries, including India, Canada and Brazil, to formulate their own AI regulations.

This jurisdictional variation poses a challenge for global companies seeking to safeguard their AI developments, highlighting the complexities of a borderless technology landscape and the need for international cooperation to establish a common basis for global operations.

On a joint US-EU regulatory framework for AI, the Trade Council has already been commissioned to explore this space and develop regulations to make AI trustworthy, both in the EU and the US. We could see a code of conduct being adopted at the EU and US level in the context of AI.

This extract from AI in Finance – Bot, Bank & Beyond was published with the kind permission of Citi Global Perspectives and Solutions.