The survey finds that most firms are using AI, but many only have a partial understanding of how the technology operates.
By Becky Critchley and Gary Whitehead
On 21 November 2024, the Bank of England (BoE) and FCA published a report setting out their findings from the AI and Machine Learning Survey 2024. In the context of the rapid adoption and integration of AI technologies across financial services, the regulators are keen to understand the opportunities and challenges that market participants face, whilst also maintaining an understanding of the capabilities, development, deployment, and limitations of AI use in a financial services context.
This survey is the third on Artificial Intelligence and Machine Learning, with previous surveys conducted in 2019 and 2022. The survey’s questions were updated to recognise the growth of generative AI. In total, 118 market participants responded including banks, insurance companies, and financial market infrastructure providers.
This survey provides a useful insight into how different firms have embraced AI and assesses the perceived advantages and disadvantages. In addition, the results provide a useful framework for firms to benchmark their own policies and procedures against their peers. The results demonstrate that the BoE and FCA remain actively engaged in monitoring AI’s role and impact within the financial services sector. With more AI-focused regulatory initiatives to come in 2025 and the prospect of targeted legislation (see the July 2024 King’s Speech), this is an area where financial services firms would be well served by staying abreast of key developments.
Key Findings
AI Use and Adoption
The financial services sector has witnessed a significant shift towards the adoption of AI, as a growing number of firms are leveraging AI technologies to enhance their operations, improve customer experiences, and drive innovation. The survey found that 75% of respondents are already using AI (an increase from 58% in the 2022 survey) and an additional 10% are planning to adopt AI within the next three years. Interestingly, the survey identified that 95% of firms in the insurance sector are currently employing AI technologies, which is closely followed by international banking institutions at 94%. Conversely, financial market infrastructure firms had the lowest adoption rate of 57%.
The survey further highlights an upward trajectory in the proliferation of AI use cases, as respondents project that the median number of such use cases will more than double over the next three years, increasing from nine to 21. Notably, large UK and international banks expect medians of 39 and 49 use cases respectively. A number of different business units have been able to identify specific use cases, including optimisation of internal processes (41%), cybersecurity (37%), and fraud detection (33%). This is to be expected, as time-intensive exercises are gradually replaced by AI use.
Governance of AI
A major issue with the increased use of AI is establishing an appropriate governance framework that supports accountability and human oversight of the outcomes reached. The survey provided a list of 16 approaches to AI governance and asked firms to set out which ones they had specifically implemented. The survey found that 84% of firms chose to allocate responsibility for AI processes to named individuals, typically executive leadership (72%), developers and data science teams (64%), and business area users (57%). Human oversight of AI-driven processes is essential to catch errors, provide further context, and make more nuanced decisions that AI may not be capable of reaching. This accountability acts as an important safeguard, promoting the ethical use of AI and protecting the integrity of the underlying system. For instance, the survey found that 55% of all AI use cases involve some degree of automated decision-making, with 24% of those being semi-autonomous, meaning that there is human oversight for critical or ambiguous decisions.
The effective governance of AI in financial services helps to establish customer trust and confidence. This is something that will develop over time, but a logical first step for firms adopting AI is the introduction of fair, transparent, and accountable policies and procedures. This is reflected in the survey, with 82% of firms adopting guidelines or best practice policies, and 79% adopting a data governance framework. Equally important is the education of customers, so that they are aware of how AI is used in relation to the services they receive and the associated benefits and drawbacks.
AI Understanding
Respondents provided feedback on how they would describe their firm’s understanding of the AI technologies implemented in their operations. Interestingly, 46% of respondents reported only a “partial understanding” of AI technology, compared to 34% of firms reporting a “complete understanding”. This can be explained by firms having a better understanding when AI is developed in-house compared to when it is outsourced to third-party firms. Users of AI should understand the underlying processes, as the technology otherwise operates as a “black box” in which it is difficult to understand how an output is generated. This is a more pronounced issue for deep learning models. To improve understanding of the underlying processes, 81% of firms have employed “some kind of explainability model”, which helps to explain how a model arrives at a given decision. In the future, we expect this information may need to be made available to consumers and regulators.
Benefits and Risks of AI
The survey highlights a number of perceived benefits through the use of AI, particularly in the areas of data and analytical insights, anti-money laundering, combating fraud, and cybersecurity. These findings are broadly in line with the 2022 survey results.
Focussing a little more on the risks of AI, four of the top five risks are associated with the use of data: data privacy and protection, data quality, data security, and data bias and representativeness. The latter is particularly problematic, as algorithmic bias means that systems can inadvertently perpetuate or even exacerbate existing biases present in the data they are trained on, which may lead to unfair treatment of certain groups, such as racial minorities, women, or economically disadvantaged individuals. The survey’s horizon scanning identified emerging risks over the next three years related to third-party dependencies, model complexity, and embedded or “hidden” models. These issues are all relevant to firms from an operational resilience standpoint.
The adoption of AI in the financial services sector also faces several regulatory challenges. The heavy regulatory burden is a primary constraint, with respondents particularly noting the FCA’s Consumer Duty (23%) and other FCA regulations (20%) as problematic. Notably, the UK government plans to introduce targeted legislation on AI, but we do not foresee this taking the shape of sector-specific regulation. Additionally, firms underlined a lack of clarity in current regulations: 18% in relation to intellectual property rights, 13% in relation to the FCA’s Consumer Duty, and 11% in relation to resilience and cybersecurity rules.
Only 5% of firms see the lack of alignment between UK and international regulations as a constraint. This suggests that the lack of international harmonisation has not been a barrier to AI adoption. However, this may change as different regimes start to take distinct paths in addressing AI regulation. For instance, the EU AI Act has spearheaded a more prescriptive approach to regulation. Other non-regulatory constraints noted in the report include safety, security, and robustness; insufficient talent and access to skills; and appropriate transparency and explainability. These constraints reflect broader trends in the financial services environment, where firms must balance innovation with security, talent management, and ethical considerations. Addressing these challenges effectively can provide a competitive edge and foster sustainable growth.
This post was prepared with the assistance of Gregory Slevin in the London office of Latham & Watkins.