
How AI is regulated in UK financial services today

Published on 29th Jan 2024

As technology moves faster than ever, how can the existing regulatory framework address new risks?


The UK financial services regulators have so far taken a technology-neutral approach to the supervision and regulation of artificial intelligence (AI). The boom in generative AI brings opportunities for firms to use the technology in novel ways, creating efficiencies in processes, enhancing market analysis, and interacting with customers. Increased use of AI in the financial services sector also brings significant new risks, including potential harm to consumers, information and cyber security concerns, and new challenges for individual accountability.

As the deployment of traditional and generative AI in the sector increases, does the existing regulatory framework for UK Financial Conduct Authority (FCA) solo-regulated firms adequately address the risks?

Consumer protection

The FCA is arguably well-equipped to deal with the risks of poor customer outcomes arising from the use of AI, particularly in light of the Consumer Duty.

Poor customer outcomes could manifest in a number of ways. Since generative AI models can produce unreliable or incorrect outputs, using AI-powered communication methods without proper review could leave customers without an adequate understanding of the information they are given. The use of generative AI systems to deliver advice could result in advice which fails to address a customer's needs.

Consider an example scenario: a lending platform implements a fully automated chatbot, and customers ask it which data and documents they need to submit for a loan application based on their disclosed personal circumstances. The chatbot provides incorrect or incomplete lists of requirements, leading to delays in onboarding or even rejected applications.

The misuse of AI, leading to poor customer outcomes in such a way, could result in breaches of Principle 12, which requires firms to act to deliver good outcomes for retail customers. The FCA's non-Handbook Guidance for firms on the Consumer Duty explicitly states that using AI in a way that could lead to consumer harm would be an example of failing to meet the Consumer Duty.
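One way a firm might operationalise "proper review" in that scenario is to check the chatbot's suggested requirements against an authoritative, firm-maintained checklist before the response reaches the customer. The sketch below is purely illustrative: the requirements, function names and escalation step are assumptions, not a description of any real platform or of regulatory expectations.

```python
# Illustrative only: a hypothetical guardrail that compares a chatbot's suggested
# document list with the firm's own checklist before the reply is sent.
# All names, requirements and data here are assumptions.

AUTHORITATIVE_REQUIREMENTS = {
    "employed": {"photo ID", "proof of address", "last 3 payslips"},
    "self-employed": {"photo ID", "proof of address", "2 years of tax returns"},
}

def review_chatbot_response(applicant_type: str, suggested_docs: set[str]) -> set[str]:
    """Flag any mismatch between what the chatbot asked for and the firm's checklist."""
    required = AUTHORITATIVE_REQUIREMENTS[applicant_type]
    missing = required - suggested_docs    # items the chatbot failed to request
    spurious = suggested_docs - required   # items the chatbot invented
    if missing or spurious:
        # Escalate to a human reviewer rather than sending an incorrect list.
        raise ValueError(f"Escalate to human review: missing={missing}, spurious={spurious}")
    return suggested_docs

# Example: the chatbot omits the payslips for an employed applicant, so the
# response would be intercepted before it reaches the customer.
# review_chatbot_response("employed", {"photo ID", "proof of address"})  # raises ValueError
```

The point of such a control is that an incorrect or incomplete response is intercepted and escalated rather than delivered to the applicant, supporting the consumer understanding outcome under the Consumer Duty.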

Bias and vulnerability

AI can perpetuate biases and inaccuracies which exist in the data used for training the system. Generative AI creates new complexities, as the technology can be used to produce content which is more subtle and qualitative in form, and thus harder to monitor for discrimination or bias.

Example scenario

A lending platform decides to implement a generative AI system for automating a loan approval process.

The generative AI model has been trained by a third-party developer on data which includes information on past applicants' financial history, employment status and other relevant factors. However, unknown to the third-party developer, the historical data contained inherent biases, such as racial or gender disparities in approvals, or disparities in approvals for vulnerable applicants with physical or mental health conditions. This may have been due to any one of a number of reasons, such as previous erroneous manual decision-making, a failure to fine-tune the model, or a failure to use a mitigation measure (such as retrieval-augmented generation).

Learning from these biases, the generative AI system perpetuates discriminatory practices, favouring applications with certain characteristics while rejecting others. The developers may try to rectify the underlying bias through data cleaning, but may find that the model's complex architecture makes it difficult to pinpoint how it generates certain outcomes, hindering efforts to eliminate the bias.
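To illustrate how such a disparity might be detected before it causes harm, the minimal sketch below compares approval rates across groups in a sample of the model's decisions and flags large gaps. The data, group labels and 0.8 threshold are hypothetical assumptions for illustration only; they are not drawn from FCA rules or guidance, and real fairness monitoring would need to be considerably broader.

```python
# Illustrative only: monitor approval-rate disparities across groups in a
# sample of the model's decisions. All data and thresholds are made up.

from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, approved) pairs sampled from the model's outputs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {group: approved / total for group, (approved, total) in counts.items()}

def disparity_flags(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose approval rate falls below `threshold` times the highest rate."""
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * best]

# Example with made-up data: 80% approvals for one group, 55% for another.
sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 55 + [("group_b", False)] * 45)
rates = approval_rates(sample)   # {'group_a': 0.8, 'group_b': 0.55}
print(disparity_flags(rates))    # ['group_b'] -> investigate before continuing deployment
```

A flagged group would prompt investigation of the training data and decision logic, and potentially suspension of the system, before further decisions are made.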

The lending platform would almost certainly not be able to continue deploying the AI system, because discriminatory decisions made using AI systems could result in a breach of the Equality Act 2010. A costly remediation exercise would be required, both to deal with the reputational impact and to unwind the changes made to implement the AI system.

Overlapping rules

There is overlap between the Equality Act 2010 and both the FCA's Vulnerable Customer Guidance and the Consumer Duty. For example, some characteristics of vulnerability set out in the Vulnerable Customer Guidance overlap with the protected characteristics under the Equality Act.

Discriminatory decisions made by AI may lead to a breach of not only the Equality Act but also FCA rules, including the Consumer Duty which requires firms to consider the diverse needs of their customers and ensure the fair treatment of vulnerable customers and those with protected characteristics. Firms are required to take action if they identify particular groups of customers who are receiving poor outcomes.

The issue of bias should also be considered in the context of the Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017. The use of AI could revolutionise the way firms conduct KYC (know your customer), offering efficiencies while potentially reducing customer due diligence errors. A number of financial services firms that have adopted AI within their KYC processes have reported increased accuracy and efficiency.

However, there are also concerns that customers may be prevented from accessing financial services due to being identified erroneously as posing a money-laundering risk. The risk of exclusion will be heightened where the AI uses biased data sets, or places undue emphasis on a customer's largely historic connection with a high-risk third country.

Information security

The growing volumes of data involved in the use of AI mean that issues around security and data subject privacy are increasingly important. For example, inappropriate use and handling of personal data in AI training may lead to breaches of data protection law, data leaks or increased susceptibility to cyber-attacks. Generative AI opens up a new cyber security frontier and landscape of risks, such as model inversion attacks, in which a model's outputs are used to infer or reconstruct sensitive information about its training data.

The UK Information Commissioner's Office has created tailored guidance on AI and data protection which provides the regulator's interpretation of how data protection rules apply to AI systems that process personal data. Please see our Insight on how to assess UK data privacy risk in AI use for further details on the data protection issues which should be considered when processing personal data for AI purposes.

In the payments sector, the use of AI in the processing and handling of sensitive payment data could offer efficiencies resulting in faster settlement times, although there may be risks in relation to incorrectly executed transactions. There is still limited guidance on how AI can be used in the context of existing industry standards. For example, it is unclear at this stage whether the use of AI falls within the remit of the Payment Card Industry Data Security Standard (PCI DSS) as software that can impact the security of the cardholder data environment.

Accountability

The regulators have indicated that the Senior Managers & Certification Regime (SM&CR) is likely to be used as an avenue for ensuring firms are acting responsibly when assessing and managing AI-related risks. The FCA's Chief Data, Information and Intelligence Officer, Jessica Rusu, has emphasised that the SM&CR has "direct and important relevance to governance arrangements" in the context of AI by creating a framework that holds senior managers accountable for the activities of their firm (October 2023 speech).

While there is no dedicated senior management function (SMF) for AI, for "enhanced scope" SM&CR firms, technology systems are the responsibility of the SMF24 (Chief Operations function), and the SMF4 (Chief Risk function) has responsibility for the overall management of a firm's risk controls. These SMFs do not apply to "core" and "limited scope" SM&CR firms, in which responsibility for the development and deployment of AI is likely to sit with the SMF16 (Compliance oversight function) or ultimately the SMF3 (Executive director function) and SMF1 (Chief Executive function).

Additional considerations

Existing principles, expectations and requirements for operational resilience set out in the FCA's Senior Management Arrangements, Systems and Controls Sourcebook (SYSC) may be useful for managing risks posed by AI.

As AI models are typically built and trained by third parties, outsourcing and third-party risk management rules set out in SYSC will also be relevant to firms in this context. Firms will need to consider the extent to which the adoption of AI could constitute a critical outsourcing and how this may influence their contractual relationship with AI service providers. A firm cannot outsource its responsibility for meeting regulatory requirements.

In practice, this means that it remains the responsibility of regulated financial services firms to ensure that the use and deployment of AI complies with the regulatory regime, and that any outsourcing arrangements are robust, allow for proper oversight and intervention, and can be exited in an orderly way without causing disruption or detriment to customers.

The designation of critical third parties to the financial services sector may also prove relevant to the provision of AI. This involves a proposed new regulatory regime applicable to certain third-party service providers, such as cloud service providers, bringing material services they provide to the financial sector under the direct supervision of the FCA and the Prudential Regulation Authority. Factors which could bring a service provider within scope include aggregation risk, the substitutability of the services, and how firms could secure the continuity or recovery of the services.

Osborne Clarke comment

The existing technology-neutral regulatory framework is arguably well placed to address a number of known risks from the increased use of AI in financial services. However, it remains to be seen whether this approach will continue to be appropriate as new risks emerge and existing risks evolve.

More broadly, the landscape of AI regulation is complex and fragmented, lacking co-ordination between domestic and international regulators. Different rules may give rise to differing expectations which, in turn, create tensions. We have elsewhere contrasted the UK's approach to AI with the EU's: the EU has taken a cross-sector, application-based approach to regulating AI, including the introduction of draft primary legislation which explicitly defines AI, with supporting guidance.

Firms will be keen to see clear guidance around regulatory expectations in the context of AI. In the meantime, those considering AI deployment may find themselves reluctant to proceed if there is a lack of clarity as to how they can comply with regulation.

If you would like help around the regulation of AI and the impact on your business, please contact our experts. We will be discussing the future of AI in financial services from UK and EU perspectives as part of our Future of Financial Services week, kicking off on 5 February 2024 – please click here to register.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
