Yasin: Hello, and welcome to Ctrl Alt Deliver. I'm your host, Yasin, and today we're diving into one of the toughest challenges in tech leadership: scaling software systems. We're not just going to be looking at the hockey-stick version of it, but the messy realities of scaling. To help us unpack this, I'm joined by Gemma Whitehouse.
Gemma is an experienced CTO who has spent almost two decades building and scaling technology strategies from the ground up. She has worked through firefighting the fallout of rushed MVPs and guiding scale-ups as they wrestle to meet growing customer demands. Today she's here to share her views on the topic. Gemma, welcome to the show.
Gemma: Thanks for inviting me.
Yasin: Yeah, it's great to be talking to you today. I'm sure a lot of our listeners are going to be flocking to your LinkedIn profile right after they see this episode, but before they do that, it would be great if you could spend 20 seconds telling us about your 20 glorious years in tech. And before I forget, I couldn't help but notice on your LinkedIn profile that you have a music degree. So from music to CTO, I'd love to hear that part as well.
Gemma: So, day to day, I am a consultant CTO. I work on a fractional, interim basis for PE- and VC-backed startups and scale-ups. I started consulting probably around five years ago, with pre-seed and seed-stage, early-stage businesses, and as I've gone on, those businesses have gotten bigger, and I've built up my knowledge and experience in that space over that period.
In terms of how I started, going back in time: I studied music. I started very young, actually; I got a scholarship when I was 14. By the time I was in my 20s, I felt like I'd already had a career. My father was an engineer, so I had some exposure to technology.
In my first job, there was an opportunity to learn on the job and do some web programming, which at that point was a totally separate discipline, right? This is before iPhones. I learned basic markup, CSS, SQL, and database queries. Over time, I moved into other roles, delivery-based roles, and went back and upskilled in engineering, system design, and infrastructure. I built my own SaaS applications as well. So I feel like I've done all aspects of it, maybe.
Yasin: Right, yeah, that's very impressive. My kids are growing up as well, and they keep telling me that they don't need to go to university because they can do anything they want. And now with AI and all the tools they have, they all feel invincible. I think it's a great story that even without AI, in our generation, yours and mine, you managed to go from music to CTO. That's a hell of an achievement.
So that takes us to our topic, Gemma. What we're going to dive into first is scaling software systems. It all sounds very exciting: more customers, more users, more money, investors are happy, PE companies are happy, and everybody says it's a good problem to have. But it's easier said than done, especially from the technology side. And it often feels like the first signs of scaling pains appear a bit late to the leaders at the company. In your opinion, what are the earliest red flags that scaling issues are about to surface?
Gemma: Typical ones are a basic lack of operating experience and knowledge, and trying to scale your sales at the same time as building out your product and defining what your product is beyond its initial product-market fit. That's a very common challenge to face. But there are also things like technical debt and cyclomatic complexity, and that's actually more common than anybody really wants to admit, I think, across the trade. And there are a number of factors behind that.
If you think about being an early-stage startup, especially now if you're using tooling like the Copilots or Cursors or any of these coding automation solutions, the speed of development and how you're having to respond to the market, your customers, and changes in the product proposition are all going to create a level of future technical debt fairly quickly, potentially. And often it's the case that a team will build out an initial MVP-type proposition and believe that, within its limited framework and limited customer base and demands, it's perfectly serviceable. It's simply that, at that early stage, people lack enough knowledge of enterprise solution architecture and what building enterprise SaaS, enterprise software services, actually demands.
And the issues start to pile up in terms of technical debt challenges and inappropriate solution choices. So I would say the earliest red flags, for me, are that the expectations don't match up with the capabilities or deliverables. And there will be some obvious warning signs for that. You might see it on the technical side, but you might also see it within the operating approach as well. So yeah, those are very common.
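To make the cyclomatic-complexity point above concrete for readers: it counts the linearly independent paths through a piece of code, roughly one plus the number of decision points, and it grows every time a rushed MVP handler gains another special case. A minimal sketch in Python (the pricing function is a hypothetical example, not something from the discussion):

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    # Each branch-creating construct adds one independent path.
    decisions = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

# A typical rushed-MVP handler: every new customer demand adds a branch.
snippet = """
def price(order):
    if order.customer.tier == "enterprise":
        if order.total > 10_000:
            return order.total * 0.8
        return order.total * 0.9
    elif order.promo_code:
        return order.total * 0.95
    return order.total
"""
print(cyclomatic_complexity(snippet))  # prints 4
```

Each new tier or promo rule pushes the number up, which is why the metric is a useful early warning even before customers feel the bugs.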
Yasin: Yeah, those are common, and a lot of times they're unavoidable as well, right? Because when you're starting out, you're not sure how successful you're going to be, so you're focused on doing something quick and dirty. And like you said, with all the AI tools out there, it's becoming easier to do that. Then all of a sudden you realise your product is getting traction in the market, you don't have enough time to do the technically perfect thing to scale, and a lot of times people rely on the scrappy MVP code they have and keep building on it. But how do you even decide that this is the time to actually scrap your scrappy MVP and start from scratch?
Gemma: I think this is a really tricky one, because it's rare, by the time I actually end up consulting on a project, that a rewrite from scratch is the appropriate approach. Now, that's not to say I haven't seen a couple of cases where that's possible.
But generally, what I find is that if they're starting to hit these scaling pains, if they've hit a threshold where the existing solution isn't going to serve the customer demands because it simply wasn't designed to support that, then they're already at a point where a rewrite from scratch would be pretty challenging.
And one of the challenges, especially with startups and scale-ups, is that you're in a market or a domain that is evolving. It's not like I can look at benchmarks out in the industry and say, well, here are ten of these businesses to benchmark against, whatever the metrics are, to give me a good guide for setting my expectations. That doesn't exist.
So you're in this innovative space where it's very hard to make an accurate forecast, actually. Your engineering team is probably going to be challenged with estimating and forecasting anyway, because they're often learning on the job, and they're learning as a team as well. Making an accurate prediction there is one of the most common software engineering pain points.
Then also, coming in as an executive, you can roughly t-shirt-size something like that, but it's very difficult to really convey the level of complexity you might be diving into, and you shouldn't be naive about that.
So I think there's a specific set of business conditions. And I can actually think of one case, a pre-seed business that had a dispute with its engineering team, who then walked away from the project. But the scale of the application and solution was such that it could be rebuilt within about six months.
Well, that seems like a reasonable cost, and it was a position that was difficult to navigate in any other way. I think sometimes there are special conditions where it makes sense to do that, and that totally depends on your context. But generally, where you end up is an incremental approach, because you don't want too much business risk when your team is having to essentially learn how to scale something and you're evolving in an innovative space anyway. So those are some of the pitfalls, I would say.
Yasin: Yeah, and a lot of times, I think it'd be very rare that you actually need to scrap everything and rewrite from scratch. But in the rare cases where you do need a rewrite, you might have one team maintaining the existing code and another team working on the brand-new code for that product. What are your thoughts on that, Gemma?
Like, do you think these two streams can work together, even people-wise? Because now you'll have a distinction within the company: oh, you know, this is the team working on the old piece of code, which is eventually going to get scrapped, and then there's a new team working on the shiny new product for the future.
Gemma: I think where the friction points can come in is that the internal team worry that their skills are less valuable to the business, they worry that their relationship is damaged, they worry about the security of their jobs. In terms of managing their well-being, incentives, and motivations, you should be quite careful about how that transition happens, making sure that their incentives are aligned with the broader business.
But you may also have challenges like: are you having to convert to a completely different technology stack? Is there a very good justification for that? And is that something your staff can realistically learn, or not? There can be some pretty tough choices in some scenarios. I mean, my position is always: is it serviceable? Will it serve your business needs?
There are certain programming languages that I personally prefer, but that wouldn't preclude me from making a recommendation based on the context of what will work for the business, regardless of how much I might dislike PHP or some other language or solution. You work with what you've got in order to achieve the business outcome in the most practical way, and take people on that journey with you.
And I think sometimes bringing in externals can, if you're a bit smart about how you set it up, be a real benefit, because they can add value, leverage their capability, and help accelerate things forward. And since that's a temporary engagement, they can then hand things back off to the existing team. I have seen augmentation scenarios that have gone quite well in that sense, so yes, I think it can work well if it's set up well.
Yasin: I totally agree with you. Scaling is not just a technical problem, it's about people as well. When you need to scale systems and your business is growing, you'll have a lot more new features and functionality, refactoring the code, making it faster and more scalable, all of that. You need people to do that. But you have to find a balance between hiring too quickly, because then you might lose the agility, the culture, the entire concept of how the company was brought to life, and being overly cautious and relying on your fantastic four, you know, the first four people who worked on the code. We've all been through this: after a year or two, even when the team is growing, there are a few people on the team who have become indispensable, and that gets in the way of the overall goal for the company to scale.
So what are your thoughts on that, Gemma? How do you manage these people who become literally indispensable over time? How do you manage that as a CTO?
Gemma: It's such a common problem, and I think the indispensable at some point need to become less indispensable. But you still have to balance that against ensuring they feel valued and incentivised. If you've got certain individuals who have become silos for specific solutioning knowledge or product knowledge, that is a business risk, actually, because as soon as they depart, you're really exposed as a business. So I think it's about making sure you've got certain strategic hires that soften those risks, where that's actually possible to achieve.
I think it's also about making sure that as you bring on new members of staff, you're very conscious about engineering onboarding, what those functions are, the amount of knowledge transfer, and that you have good documentation.

Yasin: Have you faced this before, Gemma, where the business side is disconnected from the engineering side?
The business side is like: I want these features, this is what's going to get me to the next level, my next release, and this is going to drive X amount of revenue. This is what the investors are pushing for, this is what the PE companies are pushing for. They want everything yesterday, they want everything perfect, and they don't want to spend that much time and effort on it either. How do we bridge that gap?
Gemma: That can be challenging, depending on the kind of business and the expectations of the business. There's this concept of software being completely reliable, like an old-fashioned calculator that will always return the same result; you can always depend on it. And that's just not the reality of building software, especially not brand-new software. People use solutions and services from the Microsofts and Googles, and that's what you end up being benchmarked against as an early-stage or scaling business. Of course, that's quite unfair, isn't it? Because you're often playing in an innovative space, with teams who are often learning on the job, building something they've maybe never built before.
And the nature of all software is that it will have bugs, errors, and problems, and you're doing your very best to manage that and reduce that overhead wherever possible. But I think it's also about getting your executives, your leadership, and your customers engaged, really managing their expectations and getting them involved in the process. I think that helps to soften things, if they're the ones actually doing some of the testing and the ones feeding back as part of the process. And it's an iterative process: not releasing everything with one big-bang fanfare, but having an incremental, soft launch at each release. That engagement builds familiarity, and it helps soften things and build confidence with your user base and those exec teams as well.

So, depending on your context, I think a good amount of it is about building that confidence.
Yasin: Yeah, Gemma, I think this would be a very good time to get your views on something. A lot of times, people think about getting help from outside. There's this concept of fractional CTOs on the rise; it's getting very popular, and I know it's something you're very interested in and have worked on quite a bit as well. How can that actually help organisations tackle questions like, is this the right time to manage technical debt? When does an outside view, a hardcore technical view from outside, help in this whole situation?
Gemma: Often it's that certain strategic goals will be a concern, and that could be preparing for an acquisition, or it could even be post-acquisition and building towards new goals that the acquirer has; I've seen both scenarios. It might also be that there are particular concerns within the exec team or among the investors. If they have concerns around not meeting customer targets, or there are errors and complaints from customers about the solution's performance, well, that would tend to suggest there is a technical debt issue that needs to be resolved.
There might also be other organisational indicators causing frustration, and it's very hard to see the wood for the trees at the macro level. In which case you need someone to come in, make an assessment, and make some recommendations, and then, off the back of those recommendations, potentially help implement those improvements or changes. That might be a more hands-on role, actually helping manage them through, or it might just be an advisory capacity, supporting the existing teams.
Yasin: OK, good. We'll change gears here a little bit, Gemma. How can we talk about scaling and not talk about AI? It has become an intrinsic part of all of our discussions, be it on the business side or the technical side. There's a lot of power in the AI tools that are available now. What are your thoughts on how these tools can actually help with resolving scaling problems at different companies? How much truth is there to it? How much of it is fad, or is it really creating more issues than it solves? What's your take?
Gemma: I think this is a really interesting and hot area at the moment. There are operating considerations, how to make your engineering capability, your team, or your business operations more efficient, and deploying some of those solutions can certainly help elevate some of those capabilities. But there's also the perception, I guess, that just by adding a generative AI solution to our product proposition, we will somehow 10x our sales or customer engagement, and I think the reality of the benefits some of those solutions add is harder to define.
And fundamentally, if I'm thinking about things from a purely operating perspective, like how do I enable my engineering team to be more efficient, reduce some of those overheads, or maybe be more creative with some of these tools, then offering or setting up solutions like Copilot and Cursor and coding tooling will help enable some of those functions. But inevitably, if you haven't actually benchmarked and baselined what makes you operationally efficient as a team anyway, what are you measuring against? You can't really say, we think we're 20% faster. How do you know that? Compared to what?
So that is a very common conversation across the engineering and tech trade. Often teams will cite that they feel a benefit. Well, that's brilliant, but there's also a cost to using these solutions. And if you think about the context in which they're used, there's no doubt that they increase cyclomatic complexity, and there's no doubt that you get a higher error rate and more security concerns if you're using a lot of these tools. It depends totally on your context and how they're being deployed.
In some scenarios, you're just moving the problem around, whatever the expression is, you know, transferring those problems rather than solving them.
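Gemma's baselining point can be made concrete: before claiming a tool makes the team "20% faster", you need a recorded metric to compare against. One simple candidate is median cycle time, the days from starting a work item to finishing it. A hypothetical sketch (the dates are made-up sample data, not real measurements):

```python
from datetime import datetime
from statistics import median

def median_cycle_time_days(items: list[tuple[str, str]]) -> float:
    """Median days from start to done over a list of (start, end) ISO dates."""
    durations = [
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days
        for start, end in items
    ]
    return float(median(durations))

# Work items completed before and after adopting a coding assistant.
before = [("2024-01-02", "2024-01-10"),
          ("2024-01-03", "2024-01-08"),
          ("2024-01-05", "2024-01-15")]
after = [("2024-03-01", "2024-03-06"),
         ("2024-03-02", "2024-03-09"),
         ("2024-03-04", "2024-03-08")]

print(median_cycle_time_days(before), median_cycle_time_days(after))
```

The metric itself is crude, and cycle time can shift for many reasons besides tooling, but having any baseline turns "we feel faster" into a comparison you can actually discuss with the exec team.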
Yasin: Sorry to interrupt, but this is where we see a big disconnect between the management team, the business side, and the technical team. All these business leaders, company leaders, CEOs, they're going to conferences and reading these larger-than-life articles from all the big guns in the market: Meta announcing that 40% of their code is now written by AI, Google announcing similar things, Microsoft announcing similar things. But they have their own agendas as well; they're all pushing their own solutions in the end. It's creating this perception among business leaders that coding is going to be a commodity, or already is. And then a CTO goes back and tells the management team, we need to invest this much, because we need to scale to a certain level, the market demand is there, we're getting a lot of hits. They almost by default assume it's a very easy problem to solve, because now there are AI tools, so you should be able to do it a lot quicker. There's this gap between perception and what's actually possible in reality, because they're two very different things.
Gemma: This comes down to your context, doesn't it? Because within an operating context, you can say: these are the benchmarks that are out there, this is what we're currently doing to be efficient, and this is where it may or may not apply and may or may not add benefit, right?
And there's a specific, measurable way to define and do that. Those contexts can help frame those discussions, I guess. But I think there's actually something really fundamental here, and this goes back to the market dynamic and the perception out in the market about these solutions. They've largely been taken up as a consumer-driven market trend, really. The biggest and most impactful change is actually how consumers are using these solutions for search, product recommendations, and finding information, which has now knocked out significant amounts of web traffic to third-party websites and products. In the bigger picture, that is a much bigger deal in many ways.
And there are businesses who are starting to respond to that, rearchitecting and building out totally new propositions for this new world. I think sometimes business leaders and executives take something from one context and don't apply it appropriately. And it's for us as engineering leaders to help support and guide some of those conversations, so they're targeted in the right way and we're getting meaningful opportunities for the businesses that we support.
Yasin: Yeah, you're right. It has taken the world by storm, and the general public's perception is that the magic it delivers for everyday users can be replicated in everything you do. We have experimented with these AI development tools quite a bit, and we continue to do so. We've started utilising them to our advantage, but what we actually gain in productivity is far from the perception the market has. We have played with so many different tools, we're utilising them, and at best the overall increase in productivity is less than 10%.
But even that is a big gain from where we were even a year ago. And I think it's going to keep increasing with time; things that were not possible even six weeks ago are possible today. From what it seems, it's only going to keep getting better and better. Let's see where we're at with AI if we speak again six months from now. It's something we all have to deal with, and we have to see how we can use it to our advantage.
Gemma: Yeah, I absolutely agree. It's shifting so fast. And I've found the numbers quite shocking when I've actually bothered to really consider the contexts and how those trends are starting to play out. It's really interesting.
Yasin: This has been a very good conversation, Gemma, and the last thing I'm going to ask you, to wrap things up: if you were to distil all the experience you've had over the last two decades into a couple of golden rules for scaling, rules I could write down and put up in my office over here, what would those be?
Gemma: Well, for an early-stage scale-up, I would say the two key things they always need are someone to help manage deliverables and someone to help scale and design the target architecture. Those are the two things where, if you're at least armed with those, if you've at least got experienced, capable people there, quite a lot of the other stuff you can learn to solve.
But those are some of the most consistent pain points, I guess. Because if you've at least got as far as having customers, sales, and that product-market fit, then the next thing is: how do you deliver against that? And what does your solution need to look like? Everything else sort of falls out of that, really.
So, I don't think there are any kind of hard and fast rules. I just feel like there's a kind of practical set of scenarios and responses to those scenarios, which helps different businesses at different stages kind of get from A to B.
Yasin: Yeah, and I totally agree with you, Gemma. The question was unfair; it all depends on what the product is, where they are with the market, what the market demands are, what the company structure is, what funding is at hand, how fast they need to move. There are so many things at play that there's no single right answer where you can just say, oh, these are the three things you need to look out for and you'll be fine. Everything needs to be considered, and then a decision needs to be made.
That brings us towards the end of our conversation, Gemma. I had a very good time discussing this with you, and I'm sure all the listeners will learn from it as well. And I'll make sure I tell my son; he actually wants to go to music school as well, but the dad, which is me, keeps telling him, you need to pay your bills as well. Now I'm going to go back and tell him about you and your experience. You started off with music, and now you're a very successful CTO, so it can all work: your passion and what pays your bills can go hand in hand.
Gemma: Yeah, music tech is a big growth area, I'm absolutely sure, so it's maybe a good combination to be interested in. Thank you very much for your time, Yasin.
Yasin: It was a pleasure. And we'll stay in touch.
Gemma: Yeah, brilliant. Thanks for inviting me.