AI is high on the agenda across financial services, but what’s happening inside organisations is more measured than the headlines suggest.
At our roundtable with technology and transformation leaders from the financial sector, the conversation quickly moved beyond hype. There is clear interest in AI, and real experimentation, but getting it into production is much harder.
The challenges aren’t about what the technology can do. They’re about risk, regulation, and how organisations actually operate.
A few consistent themes came up throughout the discussion. If you’re working with AI in a regulated environment, these will feel familiar.
1. AI isn’t being blocked by technology; it’s being slowed by institutions
There’s no real debate anymore about whether the technology works. Across financial services, the capability is already there; models can summarise, generate, analyse, and automate with a level of competence that would have felt unrealistic even a few years ago. And yet, production adoption remains slow.
That’s because financial services operate under a very different set of constraints compared to most other sectors. A failed feature in a consumer app might mean a bad user experience. A failed system in a bank, insurer, or asset manager can mean regulatory breaches, financial loss, or reputational damage that takes years to recover from. The tolerance for failure is fundamentally lower, and the consequences are materially higher.
This is creating a very specific dynamic: strong executive interest in AI, but slow, careful movement when it comes to putting it into production. Leadership can see the importance of AI, and there’s pressure to act from boards, competitors, and the wider market narrative. But that urgency runs into the operating reality of the financial services organisation, where systems are complex, data is imperfect, and governance frameworks were not designed for probabilistic technologies.
This is why AI adoption in financial services doesn’t follow the same curve as in less regulated industries. It’s not that firms don’t believe in the technology; it’s that they don’t yet have a reliable model for absorbing it safely into their operations.
2. Success is defined by control and explainability, not full automation
A lot of the external narrative around AI still leans heavily on automation: fewer people, faster decisions, systems running end-to-end with minimal human involvement.
That’s not what’s actually happening inside financial services. If anything, the opposite is true. “Human in the loop” is not a transitional phase but a practical operating principle for financial services firms.
The most credible implementations discussed weren’t removing humans from the process; they were redesigning the process around them. AI is being introduced carefully, inside controlled workflows, where its role is to assist, accelerate, and prepare, not to decide independently.
You see this clearly in tasks like regulatory reporting. These are high-volume, repetitive processes, often built on fragmented data sources and manual assembly. AI is already proving useful here, parsing documents, generating large sections of reports, and reducing the effort required to get to a first draft. But that draft is never the final output. It moves into a structured process:
- reviewed by domain specialists
- validated against source data
- approved through existing governance layers
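The draft-then-review flow described above can be sketched as a simple state machine. This is a minimal illustration, not any firm’s actual system; the state names, the `Report` class, and the actors are all hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative lifecycle for an AI-assisted report: the model only ever
# produces a draft; humans and governance move it toward "approved".
STATES = ["drafted", "reviewed", "validated", "approved"]

@dataclass
class Report:
    content: str
    state: str = "drafted"  # AI output always starts here, never "approved"
    audit_trail: list = field(default_factory=list)

    def advance(self, new_state: str, actor: str) -> None:
        """Move exactly one step forward, recording who did it (auditability)."""
        expected = STATES[STATES.index(self.state) + 1]
        if new_state != expected:
            raise ValueError(f"cannot jump from {self.state} to {new_state}")
        self.audit_trail.append((new_state, actor))
        self.state = new_state

# Hypothetical usage: the AI drafts, people sign off at each governance layer.
report = Report(content="Q3 regulatory summary (AI-generated draft)")
report.advance("reviewed", actor="domain specialist")
report.advance("validated", actor="data team")
report.advance("approved", actor="governance board")
print(report.state)  # approved
```

The point of the sketch is the shape, not the code: the AI-generated draft cannot skip a stage, and every transition is attributable to a named reviewer, which is what makes the output defensible later.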
The criterion for success in financial services is not how much you automate. It’s how safely you can integrate AI into a process that already carries risk. Three questions matter:
- Can you explain how an output was generated?
- Can you trace it back to source data?
- Can you intervene, override, or stop it when needed?
Confidently answering those questions matters far more than whether the model can complete the task end-to-end.
3. The most valuable AI use cases aren’t glamorous; they’re operational
The biggest AI wins in financial services aren’t happening in bold transformations. They’re happening in the background, inside operational workflows.
Reporting, data extraction, document processing: the repetitive, high-volume work that already consumes time and cost. These use cases are gaining traction for a simple reason: they’re easier to control.
They carry lower direct risk, can be wrapped in review and approval layers, and don’t require organisations to hand over decision-making. Instead, AI is used to accelerate preparation, generate outputs, and improve visibility, while humans remain responsible for validation and final sign-off.
In most cases, AI accounts for only a small part of the overall solution, sometimes 20 to 25 percent. The majority of the effort goes into designing the workflow around it, building auditability, ensuring traceability, creating fallback paths, and embedding the controls needed to make the system acceptable in a regulated environment.
AI is being used where it removes drudgery, speeds up analysis, and supports professional judgment, not where it replaces it. That’s not a limitation. It’s how trust is built. And in financial services, trust is what determines whether AI makes it into production at all.
4. Engineers aren’t leading AI adoption, and that’s creating tension
One of the more interesting dynamics is who’s actually driving AI adoption inside financial organisations, and it’s not always engineering.
In many cases, quants and domain specialists are moving faster. They’re more willing to experiment, use AI to build quickly, and test ideas in live contexts. There’s a bias to action.
Engineers, on the other hand, tend to be more cautious. Not because they don’t see the potential, but because they’re responsible for what happens in production. Reliability, scalability, and failure modes sit with them. They’re the ones who have to deal with the consequences when something breaks.
Both are rational, but the gap between them can slow progress, especially when AI moves from experimentation into production systems.
It also creates a longer-term risk. Engineers who stay too far removed from these tools may find themselves on the back foot as workflows evolve. At the same time, fast-moving experimentation without engineering discipline doesn’t scale.
The more practical model emerging is a combination of both.
- Quants or domain experts framing the problem.
- Engineers shaping the system and making it production-ready.
- AI tools accelerating the build.
It’s less about creating a new class of “AI engineers” and more about bringing these capabilities together in the same workflow.
5. AI is exposing deeper organisational gaps, not just technical ones
One issue that came up in the discussion is the lack of technical representation at the top. In many firms, CTOs and CIOs still aren’t as central to decision-making as risk or operations leaders. Engineering is treated as a delivery function, not a strategic one.
In that environment, it’s hard to build a grounded AI strategy. What you often get instead is something more procedural or symbolic: activity that signals progress without changing how the organisation actually works.
The budget dynamic reflects a similar pattern. Some firms now have an “AI budget”, but it’s often loosely defined. In some cases, it’s genuine investment. In others, it’s closer to exploratory spend – a way to fund pilots and signal intent rather than drive full-scale change.
That doesn’t make it useless. In fact, AI is helping unlock spend precisely because it gives leadership a narrative: we’re investing, we’re moving, we’re future-proofing. That can accelerate approvals, even when the underlying business case is still about familiar outcomes like cost reduction, risk control, or efficiency.
The risk is starting with the narrative instead of the objective. The more effective approach is the opposite: start with the business problem, then ask where AI genuinely helps. Otherwise, organisations end up chasing AI as a goal in itself, rather than using it as a tool to solve something real.
6. The biggest constraint on AI adoption is time to experiment
Many guests pointed out that one of the simplest blockers to AI adoption is also the least discussed: people don’t have time to learn how to use these tools properly.
Most enterprise environments are optimised for utilisation and short-term ROI. Every hour needs to be accounted for. Every initiative needs a clear return. That leaves very little room for experimentation, especially the kind that involves trial, error, and short-term inefficiency.
So even when teams have access to AI tools, they default back to familiar ways of working. Not because they don’t see the value, but because the system around them doesn’t reward the time it takes to figure it out.
One of the guests described a deliberate response to this problem: some organisations are carving out dedicated AI teams, not as isolated specialists, but as safe spaces to experiment. Teams are given permission to move slower at the start, to test tools properly, change how they work, and then feed that learning back into the wider organisation.
But that only works with top-down support. Without executive air cover, the default operating model takes over, and experimentation gets squeezed out by delivery pressure.
7. AI transformation is a 10-year journey, not a quick sprint
One of the more grounded realities from the discussion is how long change actually takes in financial services.
There’s a tendency to talk about AI transformation in short cycles: pilots, roadmaps, and quarterly wins. But at an institutional level, this is a much longer story. Not because firms can’t change, but because of how decisions get made.
Every major decision sits within layers of governance, regulation, and accountability, and at senior levels that risk is personal. If a decision goes wrong, it doesn’t just impact the business; it impacts careers.
It’s why large consultancies and established vendors continue to dominate. Not always because they’re better, but because they’re safer. If something goes wrong, there’s a known partner, a shared responsibility, a defensible decision.
For smaller players, this creates a barrier. Even when the technology is strong, adoption is slower because the perceived risk of backing something new is higher.
All of this stretches timelines. Transformation moves slowly. Progress happens in controlled steps, with constant validation, and often with a bias toward proven approaches over new ones.
8. What success looks like in the next 12 months
A common view around the table: success won’t be defined by big transformation programmes or ambitious AI narratives. It will be much simpler than that: more real work getting done by AI, in a way that teams trust and use every day.
The example discussed around private credit makes this clear. Today, teams spend a significant amount of time manually gathering and processing information from company reports and other sources. If that workflow can be automated and embedded into day-to-day operations, it removes a repetitive burden and creates immediate, practical value.
That kind of use case matters far more than pilots or demos. It shows that AI is not just being tested, but actually relied on. The signals of progress are straightforward:
- teams trust the output
- the process is repeatable
- it becomes part of normal operations
That’s the point where adoption becomes real.
At the same time, there was a clear sense that the pace of change is hard to pin down. The tooling is evolving quickly; new models, new vendors and new workflows keep emerging, and what feels current today can shift within months. But most organisations are still early in their journey, working through data issues, governance, and how to safely introduce AI into production.
So progress won’t be uniform. Some firms will start embedding AI into core workflows and move ahead. Others will stay in experimentation mode. The gap between them is likely to grow.
More tables are coming
This GoodCore × Standard Life roundtable was part of an ongoing series to strip away the noise and talk about what’s really happening in AI and software, not the PR version.
If you’d like to join a future session, drop us a message. We’re keeping the table small, but the conversations big.