Making AI Work in Markets: Governance, Sequencing and the Path to Scaled Deployment

How a structured approach to AI inventory, governance and portfolio construction creates the conditions for deployment that is both ambitious and defensible

The question most firms are not asking

There is no shortage of AI ambition in global markets businesses. Trading floors, structuring desks, risk functions and operations teams all carry lists of use cases they want to pursue. The technology landscape is genuinely compelling: reinforcement learning for execution optimisation, large language models for client coverage and deal origination, adaptive surveillance models, real-time capital attribution tools. The direction of travel is clear and the commercial logic is real.

What is less clear, in most firms, is the sequencing. The Bank of England roundtables on AI adoption in early 2026 identified a pattern that will be familiar to any Head of Markets who has tried to move an AI initiative from pilot to production: second-line risk functions are cautious, approval pipelines are slow, and the friction is not primarily technological. It is governance. The tools are ready. The frameworks that would allow them to be deployed confidently, at scale, with named accountability, are not.

The firms moving forward most effectively are not those with the most ambitious use case lists. They are those that answered a prior question: not which AI capabilities do we want, but which ones do we genuinely trust, and which are we actually prepared to own, to govern and to stand behind when a regulator, a client or an internal audit asks about them.

And, critically, in what order should they be advanced, and on what basis is that decision made?

The constraint on AI scaling in markets is not technology. It is the absence of a governance framework that gives the business the confidence to deploy, and the control functions the assurance to approve.

What is already running, and what that reveals

Before a firm can build a coherent AI strategy, it needs a clear and complete picture of where AI is already operating. In most global markets businesses, that picture is more complex than the official inventory suggests.

The formal inventory understates the reality

Approved AI deployments typically represent a fraction of what is actually running. Vendor platforms procured by technology or operations teams carry embedded ML components that never passed through model governance. Desk-level pilots from two or three years ago were never formally closed and are still in use. GenAI tools are being used daily by front office staff for client communications, deal summaries and market commentary, without a data governance decision having been made about any of them. The gap between what the firm believes it has deployed and what is actually influencing decisions is, in most institutions, substantial.

Under SM&CR, that gap has a personal dimension: an SMF role holder cannot demonstrate reasonable steps over risks they have not identified.

Not all AI carries the same governance risk

A document assistant used by a coverage banker, a configured vendor surveillance tool and a bespoke reinforcement learning execution strategy are entirely different propositions from a governance perspective.

  • The first requires clear usage boundaries, data controls and staff guidance.

  • The second requires structured ownership, vendor oversight and performance monitoring.

  • The third requires full SS1/23 model risk alignment, inventory entry, a validation lifecycle and named SMF role holder accountability before deployment is defensible.

Applying a single governance standard across all three produces the same outcome in every case: the simple things are over-governed and the consequential things are under-governed. The FMSB review of AI in trading published in February 2026 identified exactly this pattern across multiple institutions.
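The tiering logic above can be expressed as a simple rule set. This is an illustrative sketch only: the tier names, function signature and classification rules are assumptions for the purpose of the example, not a prescribed taxonomy.

```python
from enum import Enum

class GovernanceTier(Enum):
    # Illustrative three-tier scheme mirroring the three examples in the text
    USAGE_CONTROLS = 1        # e.g. document assistant: usage boundaries, data controls
    STRUCTURED_OVERSIGHT = 2  # e.g. configured vendor tool: ownership, monitoring
    FULL_MODEL_GOVERNANCE = 3 # e.g. bespoke RL execution: SS1/23 lifecycle, SMF owner

def classify(bespoke_model: bool, influences_markets: bool, vendor_configured: bool) -> GovernanceTier:
    """Assign a governance tier from a use case's characteristics (hypothetical rules)."""
    if bespoke_model or influences_markets:
        return GovernanceTier.FULL_MODEL_GOVERNANCE
    if vendor_configured:
        return GovernanceTier.STRUCTURED_OVERSIGHT
    return GovernanceTier.USAGE_CONTROLS

# The three examples from the text land in three different tiers:
assert classify(False, False, False) is GovernanceTier.USAGE_CONTROLS       # document assistant
assert classify(False, False, True) is GovernanceTier.STRUCTURED_OVERSIGHT  # vendor surveillance tool
assert classify(True, True, False) is GovernanceTier.FULL_MODEL_GOVERNANCE  # RL execution strategy
```

The point of even a crude rule set like this is that the tiering decision becomes explicit and repeatable, rather than being re-negotiated use case by use case.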

The opportunity landscape is real but unevenly mature

Across trading, the use cases with the clearest near-term value are execution quality improvement, intraday risk acceleration and surveillance enhancement. In investment banking and advisory, client trigger identification and meeting intelligence tools are already demonstrating measurable value where they have been properly deployed. In risk and control functions, dynamic behavioural surveillance and real-time reconciliation are at various stages of pilot and early production.

The common thread in those that have progressed is not the sophistication of the technology. It is that someone owned the outcome, defined what good looked like before deployment, and built the governance proportionate to the risk of the use case.

Sequencing the portfolio: how the selection should be made

The purpose of a structured approach to AI portfolio construction is not to produce another use case list or to apply a single governance filter uniformly. It is to create a prioritised view of which initiatives should advance, in what order, and on what basis, so that the deployment trajectory is both commercially grounded and defensible.

Governance readiness is the necessary starting point, but it is not the only dimension. A well-constructed sequencing framework applies multiple criteria consistently across the portfolio:

  • strategic alignment with a stated business objective;

  • the economic case, including expected return, efficiency gain and quality improvement;

  • probability of successful delivery within a realistic timeframe;

  • data integrity, since a technically sophisticated model built on a structurally unsound data premise is not a governance problem but a conceptual one, and should be discontinued rather than controlled;

  • interpretability, and whether outputs can be explained to a risk committee or a regulator in terms that would satisfy a reasonable challenge;

  • regulatory and conduct exposure, including whether the initiative touches MAR, RTS 6 or SS1/23 scope;

  • and the quality and control benefits that deployment would generate, not just efficiency.

Running across all of those dimensions, and carrying elevated weight in a markets context, is proximity to live markets and the reversibility of the decisions the AI system influences. A model that assists with internal document summarisation and a model that influences a pricing decision made to a client in real time are not the same category of deployment risk. The closer an AI system is to a trade, a client interaction or a market, the harder the consequences of an error are to reverse, and the more personal the accountability exposure becomes for the SMF role holder responsible.

A well-sequenced portfolio advances use cases that are furthest from irreversible outcomes first, building governance capability and demonstrated track record before moving into the territory where the stakes are highest.

One further discipline matters: the number of initiatives in active development at any point should be governed by the oversight capacity available to manage them properly, not by commercial enthusiasm. Initiating more proofs of concept than the governance infrastructure can meaningfully supervise is itself a risk management failure, and one the FCA and PRA are increasingly alert to.

Use cases that meet the threshold on all dimensions form the forward portfolio with a sequenced delivery plan. Those that fall short on governance readiness receive a scoped remediation path before investment is committed. Those where proximity to live markets creates accountability exposure the current framework cannot yet support are explicitly deferred, with the conditions for advancement clearly articulated. That deferral decision, properly documented, is itself an act of reasonable steps.

Sequencing is not a constraint on ambition. It is the mechanism by which an SMF role holder demonstrates they understood the risks, made deliberate decisions about their portfolio, and took reasonable steps at every stage.

What the work looks like

A structured mandate of this kind runs in stages, each producing something useful in its own right.

  • The first stage is a clear and complete picture of what is actually deployed, a classification of each use case by governance risk tier, and an accountability map tracing every material AI deployment to a named individual with a defined scope of responsibility. This is the foundation without which no sequencing decision is credible.

  • The second stage builds the governance framework proportionate to the use case population identified. This means model governance standards under SS1/23 for the use cases that require it, with validation triggers, drift monitoring and re-validation criteria designed for systems that change behaviour over time. It means explainability standards defined at use case level, linked to named accountability rather than stated as a firm-wide aspiration.

Critically, it means that from the earliest stage of development each initiative maintains a living record of its purpose, data basis, risk classification, validation status and governance approvals. That record is not bureaucratic overhead; it is the primary artefact that would be produced in response to a regulatory inquiry or an internal audit request. The review protocol that sits around it functions continuously, so that when a vendor update changes the behaviour of a configured tool or a trading desk requests a parameter change, the governance response is already designed rather than improvised.
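The living record described above can be sketched as a simple per-initiative data structure. The field names and event-log design are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseRecord:
    """Hypothetical living record for one AI initiative, maintained from
    the earliest stage of development through deployment."""
    name: str
    purpose: str
    data_basis: str
    risk_tier: str        # governance tier assigned at inventory stage
    accountable_smf: str  # a named individual, not a function
    validation_status: str = "pending"
    approvals: list[str] = field(default_factory=list)
    history: list[tuple[date, str]] = field(default_factory=list)

    def log(self, event: str) -> None:
        """Append a dated event, e.g. a vendor update or a parameter change,
        so the governance response is recorded rather than reconstructed later."""
        self.history.append((date.today(), event))
```

A record like this is cheap to maintain continuously and expensive to reconstruct retrospectively, which is precisely the asymmetry a regulatory inquiry exposes.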

  • The third stage is the portfolio construction itself: applying the sequencing criteria to the full use case landscape and producing a prioritised delivery plan that the Head of Markets can own and the CRO and COO can stand behind, in which every prioritisation decision is traceable and every deferral is documented. For market-proximate applications, the EU AI Act's risk-tiering framework provides a useful forward-looking compliance template even ahead of full implementation, ensuring the division is not faced with a retrospective exercise when those obligations take full effect.

The quality and risk control benefits of this approach are as material as the efficiency gains. A surveillance model that has been properly governed, with defined performance standards and drift monitoring in place, produces fewer false positives, generates more actionable alerts and gives the Compliance function genuine oversight rather than the appearance of it. An execution algorithm with a documented mandate and clear parameter authority generates a more defensible audit trail and a sharper accountability line for the SMF role holder responsible for it. These are operational and commercial improvements, not compliance overhead.

What changes

The most visible change is not in the policy stack. It is in the quality and confidence of deployment decisions, and in what each SMF holder is able to say when asked about the AI operating in their area of responsibility.

Consider what a well-structured approach makes possible. Second-line approval for a new AI use case becomes a structured process with defined criteria rather than a negotiation without a framework. A Head of Markets advancing an execution optimisation tool into production can point to a sequencing rationale and a governance pathway, rather than making a case from first principles each time. When an AI-assisted pricing model produces an output that a client or regulator questions, the accountable SMF has a documented record of what they understood, what they monitored, what steps they took, and why this use case was advanced at this point in the portfolio sequence.

The practical test is straightforward. Twelve months after a structured approach of this kind has been implemented, each SMF holder with oversight of AI-adjacent activity should be able to sit before a regulatory supervisor and give a confident, documented account of how AI development in their area has been governed: what is deployed, on what basis it was approved, what controls are in place, and what the residual risks are. That is not an aspirational standard. Under SM&CR, it is the minimum one.

The firms that reach that position are not necessarily those that moved at the greatest speed. They are those that built a framework capable of sustaining speed, where each new use case enters a process that already exists rather than requiring a new governance conversation from scratch.

That is the condition that makes scaled deployment of the higher-value, live-markets use cases achievable rather than perpetually deferred because the control environment was never built to support them.

The goal is not governance for its own sake. It is the confidence to deploy ambitiously, in the right order, because the framework is sound, the sequencing is deliberate, and the accountability is clear at every step.

About FMCR

FMCR is a network of senior practitioners, former COOs, global business heads, traders and risk leaders from Tier 1 global banks, providing advisory services to Markets and Banking leadership teams across risk management and performance. To discuss how FMCR can help your firm, contact Jason Richardson (jasonrichardson@fmcr.com) or Ian Gaskell (iangaskell@fmcr.com), or visit fmcr.com.

Written by Jason Richardson, a Senior Consultant at FMCR with over two decades’ experience across trading, sales, and enterprise transformation.
