AI Agents Are the New Market Infrastructure, Whether Regulators Admit It or Not

Every period of structural change in finance has a similar tell. Something that begins as a convenience quietly becomes a dependency. At first, it is framed as a tool. Then adoption spreads. Workflows reorganise around it, and risk assumptions shift. Eventually, the market reaches a point where removing it would cause real damage. 

I have seen this pattern up close more than once. In the early years of SWIFT, it was often described as a messaging utility. Important, but operational. That changed once global finance realised that trust at scale depended on shared communications standards that no single institution could manage alone.

Post-trade platforms followed a similar path. What began as an effort to reduce friction gradually became something more central. Coordination turned out to be the real value. 

None of these systems announced their importance. The market assigned it to them. AI agents are now on that same trajectory, but at a far faster pace. They are still often described as experimental pilots or decision support tools that sit alongside human judgment, language that suggests optionality.

But across financial services, agentic systems are already embedded in the mechanics of decision making. They monitor transactions, assess risk, flag anomalies, prioritise alerts, and in many cases initiate actions automatically. They operate continuously, across silos, and at a scale no human team can match. 

What we’re seeing is not just the spread of AI agents, but the emergence of agentic AI as a dominant operating model in financial markets. They connect data to action. That connective role is what makes them infrastructure. 

Market infrastructure is defined by centrality, not sophistication. SWIFT mattered because everyone depended on it. Post-trade platforms mattered because they coordinated activity.

AI agents matter for the same reason. They increasingly sit between policy and execution; between signal and response; between detection and resolution. Once systems occupy that position, pretending they are still peripheral becomes dangerous.

The real risk is not the presence of AI agents in core workflows, but the governance lag that emerges when agentic AI is treated as an edge case. In an earlier era, change unfolded slowly enough for oversight to catch up. Today, cycles compress. What once took years now happens in months. 

In an environment shaped by the Great Compression, agentic AI becomes less a choice than a response to scale, speed, and complexity. Institutions adapt because they must, and markets move on whether frameworks are ready or not.

That dynamic creates tension for regulators, and understandably so. Agentic systems feel different. They learn and adapt. Traditional controls were not designed for systems that operate continuously and evolve.

The instinctive response is to slow things down and restrict use, drawing hard lines around where these systems can and cannot operate. History suggests that approach rarely works.

Financial markets have a long record of routing around prohibitions. When regulators tried to ban electronic communication, alternative channels emerged. When automation was constrained, risk shifted into less visible corners of the system. What ultimately improved stability was not restriction, but visibility.

Once regulators could see how systems operated, how decisions were made, and where responsibility sat, meaningful oversight became possible. Standards followed, controls matured, and markets adjusted.

If an AI agent influences a trading decision, blocks a payment, or elevates a customer risk score, the critical question is not whether the agent should exist. It is whether its behaviour is traceable, explainable, and accountable. 

Visibility matters more than prohibition. Without it, agentic systems remain opaque. As a result, institutions improvise, shadow processes emerge, and accountability fragments. We have seen this movie before.

In an agentic economy, the consequences are amplified because learning systems change over time. An agent trained on outcomes will adapt to incentives, constraints, and feedback loops. If those boundaries are unclear, drift is inevitable. That drift is not malicious but structural.

Left unmanaged, it creates a new category of systemic vulnerability. Not because systems fail dramatically, but because no one can clearly reconstruct how critical decisions were reached. Institutions that treat AI agents as isolated tools will struggle to keep pace. Those that recognise them as part of the operating fabric are already making different choices. They invest early in governance that runs alongside deployment.

They bring risk, technology, and compliance teams into the same conversation rather than forcing alignment after the fact. They design agentic systems with supervision, auditability, and escalation built in from the start. That mindset shift matters more than any single technical control. 

The same is true for regulators. The challenge is not to force markets back to a slower rhythm but to ensure that as markets accelerate, the guardrails evolve with them. Regulating agentic AI requires a shift in mindset, from approving tools to supervising actors embedded in market infrastructure.

Financial history has made clear that infrastructure that is ignored becomes infrastructure that fails in unpredictable ways. Infrastructure that is understood becomes a foundation for trust.

AI agents are already influencing how capital moves, how risk is managed, and how integrity is enforced across the financial system. Their role is no longer experimental. Whether regulators publicly acknowledge it or not, these systems are part of the market’s core.

Reality has already settled that question. What remains undecided is whether governance will rise to meet the moment, or whether the next era of change will be shaped by systems we rely on deeply but understand only partially.

That choice will define not just how AI is used in finance, but how resilient the financial system remains as intelligence becomes embedded everywhere.
