The Realities of AI Adoption: 8 Takeaways From the GoodCore x Google Roundtable

AI is shaping every conversation in tech right now, but the reality on the ground is far more complex than the headlines suggest. At our GoodCore x Google AI Roundtable, we brought together 12 founders and CEOs, plus Google’s AI GTM lead, for a candid discussion about what it really means to build with AI in 2025. 

The excitement was there, but so were the frustrations, the tradeoffs, and the quiet worry that some teams are trying to run before they can walk. What followed was a refreshingly honest look at the pressures leaders feel, the guardrails slowing them down, and the gaps that keep even the smartest teams from moving faster. 

The discussion surfaced 8 big themes that kept coming up in different ways. If you are building anything AI-related, these insights will feel very familiar.

1. The 10-year truth nobody wants to tell the board

Everyone wants AI transformation now. But real, structural change, the kind that rewires how a business makes decisions, allocates resources, and serves customers, is a decade-long rebuild, not a quarterly initiative.

The uncomfortable truth: individual incentives are misaligned with organisational transformation. As one founder put it, people want wins they can take back to their teams and leaders; something that gets them noticed, rewarded, or promoted. It’s human. It’s career survival (especially in this market). That’s why teams chase quick wins: a pilot here, an “AI-powered” feature there, anything that makes leadership nod approvingly. These projects do add value, but they’re almost always surface-level.

What’s missing is vision, not in the inspirational sense, but in the very real architectural sense. Transformative AI isn’t about stitching models into today’s workflows; it’s about rethinking the workflows themselves. It requires:

  • Rebuilding data foundations
  • Redesigning operational processes
  • Shifting decision rights
  • Reskilling teams 

Those changes don’t produce slide-worthy wins in Q1; they take years of sustained cultural, technical, and organisational refactoring. The truth is, no board wants to hear that.

2. UK innovation is stuck in slow motion

Founders compared notes on fundraising and frustration. The consensus was blunt: the UK is structurally misaligned with the pace of modern innovation.

Capital doesn’t flow into productive assets; it flows into property. Decades of treating real estate as a national savings account have created a system where money chases house-price inflation instead of high-growth companies.

This is the same reason the UK lost DeepMind. Before Google bought it, DeepMind spent months trying to raise locally, but no one in the UK could write the cheque. What could have been Britain’s “OpenAI moment” became another American victory, and that same dilemma still stands.

Today, even Series A and B rounds drag on for six months. Diligence is slow, bureaucracy is heavy, and new tax policies push founders to look elsewhere. Many simply leave: to the US for speed, or to Dubai for friendlier tax regimes. British and European funds can’t currently match the urgency or the scale AI startups need, so if we want to avoid the brain drain, something needs to change.

3. The compliance trade-off: safety vs performance

Early research from DeepMind, Anthropic, and others points to an uncomfortable truth: strip away the guardrails, remove the filters, and tell the model it’s allowed to be biased, violent, or even racist, and performance skyrockets. With zero filter, models are sharper and more decisive.

So enterprises, especially those in regulated environments, are stuck choosing between “excellent but risky” or “compliant but mediocre” models. Most choose safety, not because they lack ambition, but because regulators can’t keep pace with what the technology can actually do. Interestingly, some firms are already experimenting with a middle path:

  • Dual constitutions (a general one and an industry-specific one)
  • Fewer guardrails but stricter domain constraints
  • Specialised models instead of general-purpose giants
  • Abandoning zero-shot generality in favour of deeply specialised, role-specific AI

And the results are often significantly better. A model designed to do one job, with a constitution tailored to a single industry’s risk profile, can outperform safer general models without wandering into dangerous territory.

4. Exec dreams vs engineering reality

The gulf between C-suite expectations and technical reality has never been wider. Executives return from conferences high on narratives like “AI will do half our company’s work.” Engineers return to decade-old codebases, broken data pipelines, and systems held together by inheritance and hope.

Engineering teams tend to be cautious by nature and training. They operate in binary truth: it works or it doesn’t, the data is clean or it isn’t, the system scales or it fails. And right now? Most AI initiatives live in the messy “in-between”, promising but brittle, powerful but dependent on fragile foundations. 

This gap can’t be closed by better communication alone. It requires education, empathy, and updated roadmaps, where executives understand the constraints, and engineers understand the strategic urgency. 

5. The big blocker: outdated service design

Several guests noted that companies are trying to layer AI on top of unclear data strategies, fragmented workflows, and decades-old system design. LLMs can make this mess look functional; they’re brilliant at smoothing over unstructured, inconsistent inputs, but they don’t repair the underlying business logic. 

The deeper issue is inertia. Leaders know what their tech strategy should be. They just also know that ripping out core systems will cost three, five, even ten years of disruption. So they don’t do it. AI becomes a workaround: plug models directly into data, bypass the rigid middle, and hope it holds.

As one guest explained, “Finance is simple: data → software → decision. If the software layer is dead, people try to connect decisions straight to the data.” It works for now, but it’s not transformation. It’s coping.

The result? Organisations keep hitting the same bottlenecks. AI speeds up one stretch of the motorway, but the red lights (the workflows, the approvals, the data inconsistencies, the human processes) remain in place.

6. Who’s really benefiting from AI’s productivity gains?

AI is automating repetitive work, but here’s the uncomfortable question: are we reinvesting that saved time into higher-value work, or quietly converting it into cost savings?

Right now, it feels like most organisations are doing the latter. Efficiency becomes justification for headcount reduction, not a catalyst for reinvention.

The narrative many organisations use is that AI will free people to do more strategic, fulfilling work, that jobs will evolve, not vanish. And that’s the right aspiration. But it collides with a deeper truth: no one can clearly articulate what those “new roles” look like. So workers are being told to embrace the future… while staring into a fog.

This creates a dangerous transition gap: efficiency gains arrive before new career paths are defined. Companies risk reducing headcount faster than they can redesign roles, upskill teams, or create new forms of contribution.

The organisations that will win aren’t the ones who simply save money with AI; they’re the ones who convert efficiency into empowerment, turning reclaimed time into innovation. Everyone else will unknowingly trade long-term resilience for short-term savings.

7. The market bubble that’s heading for a correction

Several guests predicted an AI “correction” within the next 18 months, not a crash, but a reality check. The current market resembles the late-90s internet boom: ballooning valuations, rushed product launches, and promises that only work in slide decks.

As one participant put it, “AI hasn’t had its browser moment yet.” We’re still waiting for the equivalent of the technology that makes the value obvious, mass-adoptable, and indispensable. Until that moment arrives, the gap between what’s promised and what’s actually achievable will keep widening.

8. The bigger picture: AI is a mirror, not a miracle

Across every discussion, one truth stood out: AI isn’t creating new problems; it’s exposing the ones companies have avoided for years. Data silos, slow governance, outdated workflows, unclear ownership, and incentives that reward activity over impact. These are the real bottlenecks, and AI simply makes them impossible to ignore.

When a model struggles, it’s usually not the model. It’s the process wrapped around it. AI becomes a mirror held up to the organisation: revealing where data is broken, where decisions are slow, where compliance blocks creativity, and where teams aren’t aligned on what “good” even looks like.

The companies that win the long game will be the ones willing to confront these structural issues head-on: rebuilding workflows, redesigning roles, and fixing the organisational plumbing that AI currently has to work around. Everyone else will keep layering new tools onto old foundations and wondering why transformation feels so slow.

Honest conversations matter

This roundtable was the first in a series designed to strip away the noise and talk about what’s really happening in AI, not the press release version. To everyone who joined us: thank you for your candour, humour, and willingness to speak truthfully.

If you’d like to join a future session, drop us a message. We’re keeping the table small, but the conversations big.

Faisal Altaf
At GoodCore Software, I serve as Vice President of Operations, overseeing seamless global operations and implementing strategies that drive productivity and growth. With nearly two decades of experience, I excel in operational excellence and strategic planning to ensure client success.
