In the coming years, the landscape of banking, insurance, and investment management will be defined by intelligent hyper-personalisation, autonomous operations, and interconnected digital ecosystems. At the heart of this future lies agentic AI: autonomous systems that can reason, execute complex workflows, and make decisions with minimal human intervention.
From sophisticated trading platforms to intelligent fraud detection and automated customer service journeys, the promise is immense. Yet a recent report by Red Hat in collaboration with FStech, ‘Effective AI implementation in financial services: Balancing innovation, safety, and sovereignty’, shows a continued emphasis on AI for resilience and security. Based on a survey of financial services decision-makers across the UK and Europe, the report found that 34% of respondents expect AI in operational resilience and business continuity to have the greatest impact on their organisation over the next two to three years. This shift is reframing AI strategy, placing a resilient and governed platform at the centre of sustainable innovation.
For years, the drive for competitive advantage has been based on speed and cost. Developments in AI are helping to reframe that discussion. High-profile disruptions, increased systemic cyber risks, and stringent new regulations like the EU’s Digital Operational Resilience Act (DORA) have made continuity the paramount concern. Resilience is no longer a back-office IT matter; it is a strategic board-level imperative that directly impacts customer trust and financial stability.
DORA and similar frameworks globally mandate a ‘minimum viable bank’, the uninterrupted delivery of critical operations through any disruption. This has profound implications for AI, especially as we move beyond static models to dynamic, agentic systems. According to the FStech report, 20% of respondents say they are already deploying agentic AI in production and 27% say they are piloting it in selected use cases. Combined, this represents nearly half the sector actively implementing autonomous AI agents. However, 36% of respondents have no current plans for adoption. This demonstrates that many financial institutions are prioritising safety and regulatory compliance over the operational efficiencies that agentic AI promises. A crucial requirement for systems such as an agentic trading platform, which potentially makes thousands of autonomous decisions per second, or a network of anti-money laundering agents analysing transactional patterns, is that they must not become a single point of failure.
A simulation study, undertaken by economists from the Federal Reserve Bank of New York, “Cyber Risk & the US Financial System – A Pre-Mortem Analysis,” underscores the systemic risks that potentially exist within public cloud platforms. Their analysis estimates that a disruption at a major third-party cloud provider could impair multiple institutions simultaneously, cascading through the payment network and impacting the wider financial system. Resilience, therefore, must be architected into the very fabric of these intelligent systems.
The frontier of intelligent automation
Agentic AI represents the convergence of generative AI’s creative capabilities with workflow orchestration and robotic process automation. It moves beyond chatbots and co-pilots to systems where AI agents orchestrate complex tasks. Consider a prototype developed by Barclays and Simudyne, where an agentic framework discovers mathematical models for market risk. Here, specialised “risk analyst agents” propose, criticise, and calibrate stochastic models, while “trader agents” integrate risk metrics, trend analysis, and news context to make informed decisions, all autonomously.
In operations, this translates to intelligent automation at a new scale. Imagine an IT workflow where an agent not only identifies a system failure but dynamically evaluates multiple remediation paths, proposes the optimal solution, and has another agent validate the action before execution, streamlining a five-step manual process into three automated steps. Leading institutions are deploying agents for discrete tasks today. In a conversation with one financial services Red Hat customer, they reported that AI-augmented customer service agents realised productivity gains of between 5x and 50x.
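The propose-then-validate pattern described above can be sketched in a few lines. This is an illustrative toy, not a real incident-response API: the `Remediation` fields, the scoring rule, and the blast-radius policy are all assumptions chosen to show the shape of the two-agent handoff.

```python
from dataclasses import dataclass

@dataclass
class Remediation:
    name: str
    expected_recovery_s: int   # estimated time to restore service
    blast_radius: int          # number of dependent systems touched

def propose(paths: list[Remediation]) -> Remediation:
    """Remediation agent: prefer a small blast radius, then fast recovery."""
    return min(paths, key=lambda p: (p.blast_radius, p.expected_recovery_s))

def validate(action: Remediation, max_blast_radius: int = 2) -> bool:
    """Validator agent: block any action that touches too many systems."""
    return action.blast_radius <= max_blast_radius

paths = [
    Remediation("restart-service", expected_recovery_s=30, blast_radius=1),
    Remediation("failover-region", expected_recovery_s=120, blast_radius=5),
    Remediation("rollback-deploy", expected_recovery_s=60, blast_radius=1),
]

chosen = propose(paths)
if validate(chosen):
    print(f"executing: {chosen.name}")   # only runs if the validator agrees
else:
    print("escalating to a human operator")
```

The key design point is that the proposing agent never executes its own suggestion; a second, independently governed agent holds the veto.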
Why agentic AI demands a new rulebook
However, the autonomous, iterative and interconnected nature of agentic AI introduces new and amplified risks that lie beyond the reach of traditional governance frameworks. This new paradigm demands entirely new layers of oversight. The core challenge is threefold. First, agents can exhibit emergent, unpredictable behaviours, where a single flawed decision can cascade into large-scale failure at a speed that renders human intervention futile. Second, the “black box” problem is profoundly magnified, as tracing the multi-step, branching reasoning of an agentic chain is vastly more complex than explaining a static model’s output, creating a critical explainability deficit.
The third and arguably greatest danger, though, may be systemic, arising not from any single agent failing but from their interactions, where agents could interact in unforeseen ways or even collude to produce destabilising outcomes. This necessitates a dedicated focus on “emergent governance,” a discipline concerned not with individual agents but with the connections and flows between them.
Current governance, focused on audit trails and static reports, is insufficient. We need agentic governance embedded within the architecture, guardrails and validation mechanisms inside each agent, and a system of “police” or “auditor” agents monitoring interactions across the ecosystem. This is the critical work of emerging disciplines such as AgentOps, which extend MLOps and DevOps to manage the unique lifecycle of autonomous AI, providing the observability, live audit trails, and feedback loops required for control.
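An “auditor” agent of the kind described above watches the flows between agents rather than any one agent in isolation. The sketch below is a minimal assumption-laden illustration: it flags a burst of correlated actions (many distinct agents doing the same thing in one observation window) that no individual agent’s own guardrails would notice. The event shape and the herding threshold are invented for the example.

```python
def audit(events: list[dict], herd_threshold: int = 3) -> list[str]:
    """Flag any action taken by many distinct agents in the same window."""
    agents_per_action: dict[str, set[str]] = {}
    for e in events:
        agents_per_action.setdefault(e["action"], set()).add(e["agent"])
    return [
        f"possible herding on '{action}' by {len(agents)} agents"
        for action, agents in agents_per_action.items()
        if len(agents) >= herd_threshold
    ]

# A window of inter-agent activity: three trading agents converge on the
# same sell order, which individually looks benign but collectively does not.
window = [
    {"agent": "trader-1", "action": "sell:XYZ"},
    {"agent": "trader-2", "action": "sell:XYZ"},
    {"agent": "trader-3", "action": "sell:XYZ"},
    {"agent": "trader-4", "action": "buy:ABC"},
]
print(audit(window))  # flags the correlated sell-off, not the lone buy
```

A production system would correlate far richer signals, but the principle is the same: emergent governance is a property of the interaction graph, not of any node in it.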
AI’s new mandate: active data sovereignty is the strategic differentiator
When it comes to sovereignty, the focus is also shifting from simple data residency (where data is stored) to active data sovereignty and governance. With 73% of respondents in the FStech report viewing data sovereignty as critical (41%) or important (32%) to their AI strategy, it is evolving from a compliance checkbox into a strategic differentiator, protecting intellectual property, customer data, and strategic algorithms.
True sovereignty means controlling not just the location of data at rest, but also where and how it is processed. Can you ensure that a query on EU customer data is computed only within an EU jurisdiction? This requires a platform capable of enforcing granular policies across data, compute, and the AI models themselves. It also means avoiding dangerous concentration risk by ensuring portability across on-premises and multi-cloud environments, a key requirement under regulations like DORA.
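The jurisdiction check described above can be expressed as a simple admission-control rule evaluated before any query is dispatched. This is a minimal sketch under stated assumptions: the policy table, dataset names, and region labels are hypothetical, and a real platform would enforce this in the scheduler rather than in application code.

```python
# Illustrative policy table: which compute regions may process each dataset.
# None means the dataset carries no jurisdictional restriction.
POLICY = {
    "eu-customer-data": {"allowed_regions": {"eu-west-1", "eu-central-1"}},
    "public-market-data": {"allowed_regions": None},
}

def can_process(dataset: str, compute_region: str) -> bool:
    """Return True only if the region satisfies the dataset's policy."""
    allowed = POLICY[dataset]["allowed_regions"]
    return allowed is None or compute_region in allowed

# A query on EU customer data is admitted in an EU region and refused elsewhere.
print(can_process("eu-customer-data", "eu-west-1"))
print(can_process("eu-customer-data", "us-east-1"))
```

Because the check binds to the data rather than the workload, the same rule travels with the dataset across on-premises and multi-cloud environments, which is what makes portability and sovereignty compatible.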
The platform imperative
This brings us to the core thesis: the breakneck pace of AI innovation, particularly in the agentic realm, will only be sustainable if built upon a foundation of operational resilience and integrated governance. Financial institutions cannot afford a “Wild West” of disparate AI tools, models, and agents scattered across siloed environments. Such fragmentation leads to unmanageable complexity, invisible systemic risks, and an inability to demonstrate control to regulators.
The true strategic differentiator, therefore, will be an open hybrid cloud platform engineered to meet these requirements. Such a platform must deliver unified control through a single pane of glass, providing the visibility to manage, observe, and govern all AI workloads, whether agentic or traditional, across any infrastructure.
Safety cannot be an afterthought; it must be embedded directly into the platform’s fabric from the ground up, integrating governance, security, and compliance controls akin to those offered by specialised guardrail technologies. This foundation must exhibit inherent resilience, featuring self-healing infrastructure, automated recovery workflows, and a certified software supply chain to ensure operational continuity.
Finally, it must grant strategic optionality, freedom from vendor lock-in that empowers institutions to place and move data and AI workloads dynamically, based on evolving sovereignty requirements, cost, and performance needs. The journey to an intelligent, autonomous financial ecosystem will be built step by step. Institutions are rightly starting in governed, risk-aware areas like financial crime compliance, where AI agents are already proving their worth in detecting anomalous patterns and automating suspicious activity reporting. The path forward requires a dual focus: aggressively pursuing the transformative potential of agentic AI, while conscientiously engineering the resilient, sovereign, and governable platform upon which it must run.
A platform built on enterprise open source is the essential bedrock of digital sovereignty. It offers transparency and the ability to inspect, understand and audit the code. By standardising on open technology, organisations avoid being committed to a single vendor’s roadmap, pricing changes, or terms of service. By prioritising a platform built on open principles and proven resilience, financial leaders can ensure that their pursuit of intelligent automation does not compromise the stability and trust that underpin the entire sector. In the age of autonomous agents, the most intelligent decision an institution can make is to ensure its foundations are unshakeable.