The FCA Consults on the Potential Use of Agentic AI … but What Is It?

Introduction

In 2024 I was commenting on an ISDA white paper that discussed the possible uses of the then-new concept of ‘GenAI’ (or Generative AI, to use its full title) in the derivatives market.

The world of AI has rapidly moved on and the FCA is now conducting a review on how AI might reshape retail financial services in the long-term. One of the areas the FCA will explore is the potential use of Agentic AI … but what is it? 

Unlike traditional Artificial Intelligence (AI), which works with pre-existing data sets to produce predictions or classifications and requires human intervention, generative AI uses a set of algorithms to create entirely new patterns and content, such as text, images and audio, from patterns and data it has learned, in response to a user’s prompt.

Agentic AI takes us one step further - it autonomously executes the decisions made by GenAI, calling on external tools, with little or no human supervision. Rather than assisting human goals, it becomes an autonomous, goal-driven agent in its own right, hence the name. In brief, Agentic agents both ‘think’ and ‘do’. Agents can search the web, call application programming interfaces (APIs) and query databases, then use this information to make decisions and take actions. For example, suppose you want to schedule an international business trip around your existing work commitments. An AI agent would not only fit the trip into your schedule but also book the flight and hotel for you, without the need for human intervention.
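For readers who want a feel for the mechanics, the ‘think and do’ loop can be sketched in a few lines of Python. This is purely illustrative: the tool names (`search_flights`, `book`) and the planning rule are hypothetical stand-ins for the web searches, API calls and database queries described above, and a real agent would ask a large language model to choose each next step.

```python
# Illustrative sketch of an agentic "think and do" loop.
# All tools and the planning rule are hypothetical stand-ins.

def search_flights(destination):
    # Stand-in for a real flight-search API call.
    return {"flight": f"LHR -> {destination}", "price": 450}

def book(item):
    # Stand-in for a real booking API call.
    return f"booked: {item}"

TOOLS = {"search_flights": search_flights, "book": book}

def plan_next_step(goal, history):
    # "Think": a real agent would ask an LLM to pick the next tool;
    # a fixed rule keeps this example self-contained.
    if not history:
        return ("search_flights", goal["destination"])
    if history[-1][0] == "search_flights":
        return ("book", history[-1][1]["flight"])
    return None  # goal achieved, stop

def run_agent(goal):
    history = []
    while (step := plan_next_step(goal, history)) is not None:
        tool_name, arg = step
        result = TOOLS[tool_name](arg)  # "do": call the external tool
        history.append((tool_name, result))
    return history

actions = run_agent({"destination": "Singapore"})
print([name for name, _ in actions])  # the sequence of tool calls made
```

The point of the sketch is the loop itself: the agent repeatedly decides on an action, executes it against an external system, and feeds the result back into its next decision, with no human in the loop.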

The benefits of Agentic AI in financial services are many and complex, and research by IBM shows that it is quickly moving from discrete pilot exercises to enterprise-wide applications that can operate 24/7 without human supervision. As an example, Anthropic has developed ‘Claude for Financial Services’, a financial analysis Agentic AI solution able to call on research agents and numerous third-party financial databases, enabling financial institutions to cut research costs and deliver solutions to clients more quickly. Visa and Mastercard introduced agentic shopping in 2025, where a customer tells an agent what they want and the agent then searches for the item, chooses it and pays for it.

How long will it be before Agentic agents will be able to analyse prices in financial markets and economic indicators to perform predictive analysis and execute trades?

An IBM survey of executives revealed that 71% anticipated that all customer service enquiries would be handled by AI agents by the end of this year, and 75% expected AI agents to execute transactional processes and workflows autonomously within the next two years. These latest developments will clearly have significant regulatory implications in terms of governance, risk management and compliance as they are adopted by the financial services sector.

Risk management across all three lines of defence will have to be rethought. Legal, Compliance, Risk Management and Internal Audit professionals, in particular, will have to develop new skills to ensure responsible deployment and effective risk controls as AI agents will come with new sets of risks including: 

· Lack of transparency

The decision-making processes of AI agents, especially those using complex algorithms and deep learning, can make it difficult to understand how a decision has been reached (the ‘black box’ problem). That leads us to:

· Senior Management accountability

With the U.K.’s Senior Managers Regime expecting individuals to be responsible and held accountable for individual categories of risk, it will be essential that an individual is identified as being accountable for AI risk. That individual will have to be proactive in ensuring that they remain fully conversant with the AI risks their firm is running, particularly where the firm is running an Agentic agent which may develop its own strategies, tactics and processes. This leads us in turn to:

· Governance Risk

With regulators clearly and unequivocally expecting boards to understand and to be able to explain the risks that their firms are running, how many firms’ boards are able to explain AI risk? And how will they oversee the transition in their firm and keep their knowledge current?

i. Malfunction risk

AI agents may malfunction, particularly where there is agent-to-agent interaction or where there is a failure in the underlying network of servers.

ii. Over-dependence

A firm may have neither the human resources nor the skills to take over in the event of an AI system malfunction.

iii. Legal and compliance risk

The rapid evolution of AI could outpace the development of legal and regulatory frameworks, leaving firms uncertain whether to invest in costly AI development and exposed to the costly consequences of having to reverse that investment if they get it wrong.

iv. Bias and Fairness

AI systems can promote or amplify bias if the underlying data is biased, leading to unfair outcomes and potential problems in employment, retail lending and law enforcement.

v.  Ethics and illegal trading risk

Many AI agents use reinforcement learning, which involves maximising a reward function. If the reward system is poorly designed, the agent might exploit loopholes to achieve ‘high scores’ in unintended ways, and may engage in unethical or illegal trades (e.g. with sanctioned counterparties) or trigger market instability.
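Reward mis-specification is easy to illustrate. In the toy sketch below (all counterparty names and figures are hypothetical), an agent that greedily maximises a reward counting only profit happily selects a trade with a sanctioned counterparty, whereas a reward that penalises illegality steers it away.

```python
# Toy illustration of reward mis-specification: a profit-only reward
# leads the agent to a sanctioned counterparty. All names and figures
# are hypothetical.

TRADES = [
    {"counterparty": "BankA", "profit": 120, "sanctioned": False},
    {"counterparty": "BankB", "profit": 150, "sanctioned": True},   # illegal
    {"counterparty": "BankC", "profit": 100, "sanctioned": False},
]

def naive_reward(trade):
    # Counts only profit - ignores legality entirely.
    return trade["profit"]

def constrained_reward(trade):
    # A better-designed reward makes illegal trades unattractive.
    return trade["profit"] - (10_000 if trade["sanctioned"] else 0)

best_naive = max(TRADES, key=naive_reward)
best_constrained = max(TRADES, key=constrained_reward)
print(best_naive["counterparty"])        # "BankB" - the sanctioned one
print(best_constrained["counterparty"])  # "BankA"
```

Real reinforcement learning systems are far more complex, but the failure mode is the same: the agent optimises exactly what it is told to optimise, not what the firm intended.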

vi. Malicious use

An AI agent may be maliciously used for cyber offences or to publish disinformation.

vii. Security risk

AI agents, especially those connected to the internet or internal networks, can be targets for cyber attacks. Hackers might manipulate inputs, extract data or cause the agent to behave in harmful ways.

viii. Systemic risks

Systemic risks can arise from the introduction of AI agents that autonomously interact with each other, where not all of those agents are programmed with the same level of risk management or ethics as the firm’s own agent.

However one looks at it, we are undoubtedly at a pivotal point in the evolution and regulation of financial services. The sector will want to take advantage of the rapid developments in AI, to invest in highly efficient decision-making and execution processes that can operate 24/7 with little or no human intervention, and lawmakers and regulators will be striving to keep up with them. It will require significant and careful management by both firms and regulators to get it right.

For further discussion and initial consultation please contact FMCR at contact@fmcr.com.

Peter Manning