Raheel Zubairi

What’s in this episode

Taking AI from a promising pilot into a live system is where most of the hard problems begin, especially in environments where failure has real consequences.

In this episode, Yasin speaks with Raheel Zubairi, founder of Pixelence AI, about what changes when AI stops being an experiment and becomes part of the system. Drawing on Raheel’s experience building AI inside healthcare, the conversation explores the practical realities that rarely show up in demos: regulatory friction, data quality, legacy infrastructure, governance, and the assumptions that break once systems go live.

They discuss why so many AI initiatives stall before production, how judgement and validation matter more than speed, and where deliberate restraint is often the most responsible technical decision.

While grounded in healthcare, the lessons apply to any mission-critical environment where systems must earn trust, operate under constraints, and perform reliably day after day. If you’re a CTO or technology leader wrestling with how to operationalise AI without compromising safety, accountability, or long-term value, this episode is for you.

Meet the guest


Raheel is an experienced CTO with diverse experience across fintech, compliance, and e-commerce, where he has worked closely with enterprises and start-ups to deliver high-quality, customer-centric products. He has 16+ years of experience in product and project management, establishing offshore software development teams, streamlining communication, and creating business value.


Transcript

Yasin: Welcome to Ctrl Alt Deliver. This is the podcast about what it really takes to build and run software when the stakes are really high. We focus on the decisions that matter when systems are live, users depend on them, and failure is not an option.

In this episode, we're talking about taking AI from pilot to production, specifically inside healthcare systems, but the questions we're exploring today go far beyond healthcare. They're the same questions CTOs and technology leaders face in any mission-critical environment. I'm joined today by Raheel Zubairi, founder of Pixelence AI.

Raheel and I go a long way back. We have worked together in the past, and I happened to have funded his first startup about 13 years ago. This conversation isn't about promotion or AI evangelism. It's about judgment, constraints, and what changes once AI stops being an experiment and becomes part of the system. Let's get into it, Raheel. So for our audience who don't know you, why don't you give us a quick overview of your very rich background?

Raheel: Sure. I graduated in computer science back in 2005. I started off as a NASDAQ engineer, writing smart routing systems at the hardware level for NASDAQ. After that, I transitioned to developing governance, risk, and compliance software for Sarbanes-Oxley, working for Deloitte and Sherwin-Williams, then moved into senior technical product manager roles for companies like Creative Chaos and Folio3. While working there, I was exposed to a number of startups, leading offshore teams for them, some of which got acquired by Sony and Oracle, and that's where my drive to develop something of my own started.

So it was the age when iOS, Android, and the web were getting hyped up, and I thought I should have something of my own. That's when I found you and Hassan, who were the first investors in my company, Gameloop, where we started a game studio in Pakistan at a time when "startup" was not even a keyword there. After that, we transitioned to creating a software company, where I took on roles like sales director and product director.

And then in 2024, I decided to find a problem worth solving in the age of AI, and through an Antler cohort, I found my co-founder. He's a brain oncologist in the NHS, and together we decided to come up with a contrast tracer for brain MRI that eliminates the use of a chemical called gadolinium, which is primarily used for brain MRIs and is toxic to the kidneys. I thought this was a big enough problem for us to solve, and it exposed me to a completely different domain than I was used to. It pushes me to my limits to learn and really crack the problem we are trying to solve using AI.

Yasin: Wait, Raheel, that really sounds like rocket science to me, but I'm sure you're doing the right thing. And you're like a blast from the past. I think it's been, I don't know, 10-12 years since we first invested in that company, Gameloop. It was a very good experience, and that started our relationship, which then continued for a couple of years.

But you know what I've seen is that you've consistently worked in domains where technology decisions have real consequences. What really drew you to healthcare?

Raheel: As the AI age was coming, I saw a lot of companies focused on LLMs, on interacting with computers using language, trying to come up with compliance solutions that let you talk to a computer in human-friendly language. But I saw that AI has the highest value impact in healthcare. Because in healthcare, your decision-making is either yes or no; it's not maybe. Your decision-making has to be critical, and that requires high precision. And that's where healthcare, where AI can actually help you make quick decisions at higher accuracy than even a human, brings high value for solutions like the one we are trying to develop.

And being pushed towards a high-quality solution where you have no bandwidth for errors, I would say, challenges you to a point where you really have to think about your product decisions: how you train your model, the data you select, the performance you target, and the security you need for your model to be deployed. All of these things came together, and that really brought me to the decision to pursue a solution like Pixelence rather than the solutions other people were developing at the time.

Yasin: While we do talk about regulated industries and verticals, healthcare, I think, is at the top end of the spectrum, because with products like yours, people's lives depend on them. It's a matter of life and death. So you have so many considerations that you would not have in other systems, even regulated ones, because of the nature of this vertical. This brings me to our next topic: where AI starts to struggle in healthcare systems, because nothing is perfect. AI is not perfect either.

And when we mesh AI and healthcare together, we really have to make sure that things are as close to perfect as they can be. So my first question to you is, Raheel: AI is now part of most healthcare technology conversations, be it pilots, prototypes, or internal initiatives. From your experience so far, where do things start to struggle once AI hits real workflows?

Raheel: There are three or four areas to consider when you're developing a solution for healthcare. One is regulatory, obviously. There are no quick prototypes. You can come up with a prototype, but there's no quick validation in healthcare. You have to go through a rigorous validation process, and you literally have to engage an external party, a principal investigator, to take your product inside the hospital. But before you do that, you also have to have certifications like ISO 27001, and especially for AI, you need those certifications before they even start talking about validating your product.

Then there's your data moat. The data you train on is very critical in healthcare. It's not just conversational data or something you pick up from the internet. The data source you train on has to have credibility from the institution you get access through. So you really need to work with a hospital that has a partnership with you, and there's a certain protocol from the ethics committee to provide that data, because responsibility for that data's quality sits with the institution that has the partnership with you.

Identify clear steps for how you access your data, how you train, how you approach the regulator, and then how you make sure it gets tested fast enough to get feedback from the market. That's where a lot of AI startups drop the ball. I'll come back to one of these points later in our conversation, but there are certain assumptions you obviously have to make when you're doing your early experimentation, your MVP, even when you're training your models.

Yasin: What falls apart when systems actually go live? What is the most common thing you've seen go wrong, where you made assumptions during experimentation, and then once you're live, you see that things are very different?

Raheel: I can give you an exact case that happened to us. When we were training our model, we thought, oh, we can get free credits from AWS and Google Cloud. So all our development was cloud-driven. We were cloud-native, we were very proud of it, and we were showing demos at AWS events. Then we realised that when you actually go to a hospital, none of the hospitals are using cloud infrastructure. They're using legacy infrastructure, and your code is not compatible with that infrastructure.

So you really have to shift your perspective: you won't have GPU credits or GPU power. You have to deploy on local machines. You have to provide your own security, so it adds up, layer upon layer of new things you have to do to protect your solution. When you go in, the distribution is different, the infrastructure is different, your security is different, so you have to take care of all these things, and the earlier you get in, the faster you find out the temperature of the water.
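As an illustrative aside, the environment shift Raheel describes, cloud-native assumptions breaking on hospital intranets, can be sketched as a deployment profile that is chosen at install time. This is a minimal Python sketch under assumed constraints; the function, profile fields, and values are hypothetical, not Pixelence's actual configuration.

```python
def select_runtime_profile(has_gpu: bool, network: str) -> dict:
    """Pick an inference profile for the target environment.

    `network` is "cloud" or "on_prem" (a hospital intranet).
    All field names and options here are illustrative.
    """
    return {
        # Legacy hospital hardware may have no GPU at all.
        "device": "cuda" if has_gpu else "cpu",
        # On an intranet there is no managed cloud key service,
        # so the application must bring its own key storage.
        "key_store": "managed" if network == "cloud" else "local_keystore",
        # No always-on internet: audit data is exported in batches
        # once the hospital authorises an outbound channel.
        "telemetry": "streaming" if network == "cloud" else "batched_export",
        # A CPU-only site may need a smaller, quantised model build.
        "model_variant": "full" if has_gpu else "quantised",
    }
```

For example, `select_runtime_profile(False, "on_prem")` yields a CPU-only, quantised, locally keyed profile, exactly the "layers of new things" an on-premises rollout adds compared with the cloud demo.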

Yasin: Very good point. Now we'll move on to a related topic, which is data and operational reality. We cannot have any discussion on AI without discussing the hazards of data, or the lack of it. People often talk about healthcare data in very abstract terms. How does it actually behave in practice when you try to build an AI solution on top of it?

Raheel: First, obviously, you need someone with medical experience to really validate whether the data is good quality or not. The challenge is how you get access to this data, so you need partnerships. For example, you can build a partnership with a hospital, but then you also need to know how to request specific data for your training, because the language of medical science is different from computer science. You just cannot go to the person receiving the request and say, I need brain MRI data. You have to say, I want GBM, but the lesion should be on the left, for example. There's a medical term for it. You need a co-founder who understands medicine, who can frame the right requests, and who, when a request is fulfilled against the protocol you have with the partner hospital, can validate whether the quality is good or not. So quality, I would say, is number one.

Second is validation: you need someone who understands that language and can validate the output your trained model is actually producing. It all boils down to having the right partnerships. The more data you have, the better fine-tuned your model will be. But how much data is enough for something like this, or is it never enough? Never enough, I would say, because there are certain cases where you don't get enough data. For example, there are certain tumours that sit on the edge of the brain. Our model's accuracy actually suffers when the tumour is on the edge of the brain, so we cannot determine whether the tumour is outside the brain pushing the brain inwards, or inside the brain pushing outwards.

Because the predictive layer did not have enough training samples to really come up with the right output, you need to identify those kinds of cases where your model is suffering, then request that kind of data from your partner, and you're lucky if you get it. If you don't, you have to find it somewhere else to take your model's accuracy to the next level. These are the challenges: you really have to identify the cases where your model struggles and get the right answers.
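The loop Raheel describes, find the case categories where the model underperforms, then request more of exactly that data, can be sketched as a stratified accuracy report. This is a hypothetical Python sketch; the stratum names and threshold are made up for illustration and are not Pixelence's evaluation pipeline.

```python
from collections import defaultdict

def accuracy_by_stratum(results):
    """Aggregate accuracy per case category.

    `results` is a list of (stratum, correct) pairs, e.g.
    ("tumour_at_brain_edge", False). Stratum labels are illustrative.
    """
    totals = defaultdict(lambda: [0, 0])  # stratum -> [n_correct, n_total]
    for stratum, correct in results:
        totals[stratum][1] += 1
        if correct:
            totals[stratum][0] += 1
    return {s: c / t for s, (c, t) in totals.items()}

def weakest_strata(results, threshold=0.8):
    """Strata below the threshold become targets for new data requests."""
    acc = accuracy_by_stratum(results)
    return sorted(s for s, a in acc.items() if a < threshold)
```

A report like this turns "the model struggles on edge-of-brain tumours" from anecdote into a concrete, per-category data request to put to the partner hospital.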

Yasin: Good. So Raheel, what kinds of controls or checks need to exist before AI can responsibly operate inside live systems? I know this is a very loaded question, but it's a very important one as well, and I would love to get your view on it.

Raheel: The first thing is access control. You have to make sure your system is accessible only to people with authorised access. Second, when you're saving patient data, it has to be encrypted. Patient data is sensitive, and it has to meet NHS compliance, or that of whichever body applies: it should not be accessible to anyone, even healthcare workers, for any third-party usage. And the third is that when the model makes a decision, you need to know how it came up with that decision. Because once the model goes live in the hospital, none of us can actually go in and watch what the model is doing; it's remote auditing and monitoring of the model's operations. That is very complex, and it requires authorisation from the hospital to expose data on the internet rather than the intranet. You really have to make sure the hospital understands that these remote operations are key for such critical decision models.
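The three controls Raheel lists, access control, protection of patient identifiers, and an auditable decision trail, can be illustrated in one small wrapper. This Python sketch is hypothetical: the role names, the wrapper class, and the pseudonymisation scheme are assumptions for illustration only, and real encryption at rest would use a vetted cryptographic library rather than the hashing shown here.

```python
import hashlib
import time

AUTHORISED_ROLES = {"radiologist", "system_admin"}  # illustrative roles

def pseudonymise(patient_id: str, salt: str) -> str:
    # One-way hash so audit entries never carry the raw identifier.
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:16]

class AuditedModel:
    """Wraps model inference with access control and an audit trail."""

    def __init__(self, predict_fn, salt: str):
        self.predict_fn = predict_fn
        self.salt = salt
        # In production this would be an append-only store, exported
        # in authorised batches for remote auditing.
        self.audit_log = []

    def predict(self, role: str, patient_id: str, scan):
        if role not in AUTHORISED_ROLES:
            self.audit_log.append(
                {"event": "denied", "role": role, "ts": time.time()})
            raise PermissionError(f"role {role!r} is not authorised")
        result = self.predict_fn(scan)
        self.audit_log.append({
            "event": "prediction",
            "patient": pseudonymise(patient_id, self.salt),
            "result": result,
            "ts": time.time(),
        })
        return result
```

The point of the design is that governance is in the call path, not bolted on: every prediction, and every refused access, leaves a record a remote auditor can review without ever seeing a raw patient identifier.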

Yasin: Yeah, so if I were to sum up what you're saying, it's basically governance as a design decision, not an afterthought. And it applies to the different phases of the lifecycle, from when a patient comes in to when they get treated.

OK, Raheel, now let's move on to our next topic, which is very important and which we keep hearing about: why AI pilots don't translate into production. There are so many AI initiatives out there that stall after people have made them technically successful as pilots, but they're not production-ready. Why is that?

Raheel: I think the key difference is sales mindset. A lot of founders are product engineers; they're technical engineers, but they don't know how to do sales. Really nailing down your pilot's success is about partnerships. And that's where a lot of founders just don't understand how the ecosystem works, how you literally have to sleep outside a ministry's office just to find the right people to talk to. You really need to understand how your ecosystem works, how sales works, before you develop a product.

You can have a very nicely engineered product. But if it does not have product-market fit, just because it's not compatible with the ecosystem or with how it's sold, and you have never thought about distribution or pricing, that's where everything goes wrong. A lot of things do work technically in labs when you're experimenting. But it's all about testing solutions in a real-world environment, which means access to more data and access to clinicians. There has to be input from the clinicians on the results your AI solution is giving; then you take that feedback, modify your model, optimise it, and that loop has to keep going on and on before you come close to perfection.

Yasin: Let's go into some of the decisions you made while building your product at Pixelence. What problems did you consciously decide not to hand over to AI? There are a lot of problems that AI can solve, but for your particular solution, which parts did you decide, let's not give this to AI?

Raheel: When you really want total control of what your product does, you might take some assistance from AI models writing and suggesting code, but writing everything by hand gives you absolute control over how your data is processed and how it performs in real environments. I would say that when we were writing our models, the abstraction layers of our processing, our security, our model weights, and the training, everything was handwritten by our engineers. But we did rely on AI for creating the web interfaces, for example, for model integrations, for documentation, and for writing reports.

So we delegated the parts that are closer to standard classification and segmentation problems, but when it's about generative AI and absolute accuracy, that's where we took complete control ourselves. Plus, when you're developing technology that can be patented and carries IP, you have to be careful not to share your technology with third-party models in the cloud, where you've given them all your context, so they learn about your solutions and anybody else can as well. You have to be really particular about which parts of the solution you keep control of versus which you can use AI for.

Yasin: Right. Pixelence is a funded startup, and there's always pressure from VCs and investors to speed things up. But where did you deliberately slow things down on your own product, even when there was pressure to move faster?

Raheel: In the current age of VCs, especially the ones who don't invest in healthcare frequently, the idea is to go for commercialisation right away. Show me LOIs, show me contracts where somebody's ready to pay even $10 per scan, forget about $80. So they will push you towards commercialisation. In the longer run, if your product succeeds and the accuracy is high, the pull for adoption will come from the B2G and B2B businesses automatically, because you're safer and cheaper, right? Insurance companies will push your solution, and so will the hospitals on their panels. So you have to pick and choose: work with VCs who understand that it's a waiting game, and that higher accuracy and higher adoption depend on the quality of the product you create.

Yasin: So basically, Raheel, what you're saying is that validation is something you would not compromise on. That is where you would even push back on your investors and slow things down, for the greater good of the product, because of the kind of space it's in. What other boundaries, Raheel, have you put in place that you wouldn't compromise on, even when compromising would have meant short-term gains or progress?

Raheel: When people start finding out what your capabilities are and what you can do, certain projects will come along. For example, we were approached by a university that has ambulances with mobile CT scanners on board. For them, the goal was: we don't care about your brain MRI; we have these 5G-enabled ambulances. Can you take the scan captured in the ambulance from a patient who's having a stroke, and write a report for us before the patient arrives at the hospital?

Maybe that's a paying project, but it diverts our focus from what we are supposed to do. Three or four months of unfocused activity for short-term goals and short-term revenue will actually set you back years. In those three months, maybe a Chinese company comes along and validates before you do. It's a race against time for the core technology you're trying to develop. So never focus on short-term goals; keep your focus on the bigger goals you want to achieve.

Yasin: Right. Now we'll go into something a bit more interesting, not that what we've been discussing wasn't interesting: lessons for CTOs and other technology leaders. Raheel, you've been at it for, I don't know, 20-25 years or so. If a CTO is considering introducing AI into their business-critical systems, what should they be most cautious about?

Raheel: I would encourage them to adopt AI in their workflows and their development process, but with governance around it. For example, even if you want your developers to use AI, you should have your own governed protocol for writing prompts, because without consistency, every person will interact with the system in a different way. That drives inconsistent output from the AI models getting merged into your code, and eventually you will have a huge backlog of stuff written by people who don't understand what's happening.
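One way to read the "governed protocol to write prompts" idea is a registry of reviewed prompt templates that developers must render rather than free-typing their own. This Python sketch is purely illustrative; the template IDs, wording, and rejection policy are assumptions, not a description of any specific team's tooling.

```python
# Reviewed, versioned prompt templates; ad-hoc prompts are rejected.
# Template IDs and wording below are illustrative only.
APPROVED_PROMPTS = {
    "code_review": (
        "Review the following {language} diff for correctness and style. "
        "Respond with a numbered list of findings only.\n\n{diff}"
    ),
    "unit_test": (
        "Write {framework} unit tests for this function. "
        "Cover the documented edge cases only.\n\n{source}"
    ),
}

def render_prompt(template_id: str, **fields) -> str:
    """Render an approved template with the caller's fields.

    Anything outside the registry fails loudly, which is the
    governance point: new prompt wording goes through review first.
    """
    if template_id not in APPROVED_PROMPTS:
        raise KeyError(
            f"no approved template {template_id!r}; "
            "ad-hoc prompts must go through review")
    return APPROVED_PROMPTS[template_id].format(**fields)
```

Because every developer renders the same reviewed wording, the model's output stays consistent enough to merge, which is exactly the inconsistency problem Raheel warns about.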

Plus testing: you need a framework so that whatever you produce at a fast development pace can be validated and tested very, very fast. A lot of people are adding code at a really fast pace, but who's testing it? That's where a lot of products struggle. The current trend is one person developing products solo, but that doesn't work in software development teams of 10 or 15 people. All of them have to sync up, synchronise, and produce a product in line with the KPIs. So how do you make sure your Jira tickets map exactly to the functions delivered? That's a hard task. A lot of people think it's very simple, but it's very complex.

Yasin: So now, Raheel, looking ahead, say even a couple of years, where do you genuinely see AI delivering sustained value in healthcare?

Raheel: I think it already is. If you think about the COVID era, if we did not have machine-trained simulation models, there was no way the pharmaceutical companies would have been able to create a vaccine for such a complex problem. The human body is so complex, especially gene formations and genomics, where vaccines are basically created against your DNA or RNA factors. These protein threads are so complex that without those simulation models, we wouldn't be able to come up with solutions for the human race.

So AI already has deep roots in healthcare, I think. And we're heading towards a world where you don't have to see a general physician anymore. I think there will be a biodome or something you walk into, because a lot of the decisions a general physician takes, for example, if you're having a stomachache, he will give you a stomach medicine, rely on how you describe what you feel, right? But it could be something else. You just step into something like a dome, and it will be able to diagnose all the issues in your body and give you a report of which ones you want to fix.

So we will be moving towards an era with no human interaction, maybe, in diagnostics; once you find a problem, you go to a doctor. The accuracy of these healthcare solutions is getting so high. Especially in the COVID era, lung diagnostics were so accurate that people did not rely only on radiologists; AI was actually the big player in COVID diagnostics.

Yasin: Right. So do you see, Raheel, that maybe over the next couple of years we're going to see a very big invention in medical science, say, for example, a cure for cancer? Do you think AI is headed in that direction?

Raheel: Yes, absolutely. I think humans will be able to resolve a lot of current issues, but obviously the human body is so complex that it will present something else even more complex, something we don't even know yet, which will be named some other disease, and we will have another discovery phase and resolution. But with the current age of AI, the simulation models, and how you can find solutions and validate them on animals and humans very, very fast, this is going to see rapid development, especially in regions like China and the US.

Yasin: Great, great. Thanks for such a thoughtful and grounded conversation. What I really appreciate about discussions like this is that they move the AI conversation away from possibility and towards responsibility. If there is one theme that comes through very clearly, it's that taking AI into production isn't a technical milestone; it's an organisational one. It forces questions about accountability, trust, and the systems we are willing to rely on day after day.

The takeaway isn't to move faster or slower with AI. It is to be more deliberate: to understand where AI genuinely adds value, and where restraint is actually the more responsible decision. Thanks again, Raheel, for sharing your experiences so openly. And thank you for listening to Ctrl Alt Deliver. Until next time.


What would you like to hear next?

Have ideas for new episode topics or guests? Tell us your suggestions or feedback.
