The following is a rough transcript which has not been revised by High Signal or the guest. Please check with us before using any quotations from this transcript. Thank you. === Suddu: [00:00:00] As data leaders have become the backbone of decision making, I think one of the key things is to actually build real muscle around making big, irreversible decisions and getting better at making them over a period of time. And this is one of the things which my mentor said: improve your decision accuracy rate, and not just for now, but the aging of your decision accuracy rate itself. And that's something which has stuck with me for a very long time, because it's actually impossible to measure. But you need to have some form of a retro framework to be able to look back and say, did I make the right decision at that time or not? And what I realized is the best technologists have this innate ability to build conviction and make calls when they have limited information. It's not necessary to have all the information all the time to make the right decision. But I think it's one of the things which data leaders need to work on, where they're not just making a [00:01:00] call based on the accuracy of the information or the accuracy of their prediction. It's more about the accuracy of their actions, and how do you track that over a period of time? Hugo: That was Suddu reflecting on one of the hardest challenges in data leadership: not just making good decisions, but building the muscle to assess the aging of your decision accuracy over time. In this episode of High Signal, Duncan Gilchrist and I speak with Suddu, VP of AI, Data Science and Foundations Engineering at Alto Pharmacy, about what it means to lead data and machine learning teams in high-stakes environments like healthcare. We talk about Suddu's journey from Groupon to Alto, what it takes to build trust in AI systems, and how to move from descriptive metrics to real-time decision support.
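[Editor's note] The retro framework Suddu alludes to, logging each big call and scoring how it held up at later review points, could be as simple as a decision log plus an accuracy-by-age rollup. A minimal sketch, with all names, horizons, and the True/False scoring invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One big, hard-to-reverse call, revisited at fixed review horizons."""
    name: str
    # months elapsed since the call -> did it still look right? (True/False)
    reviews: dict = field(default_factory=dict)

    def record_review(self, months_elapsed, held_up):
        self.reviews[months_elapsed] = held_up

def accuracy_by_age(decisions, age_months):
    """Share of decisions that still looked right `age_months` after being made."""
    scored = [d.reviews[age_months] for d in decisions if age_months in d.reviews]
    return sum(scored) / len(scored) if scored else None
```

Comparing `accuracy_by_age(decisions, 3)` against `accuracy_by_age(decisions, 12)` is one way to watch the "aging" of your decision accuracy rather than a single snapshot.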
We go deep on the evolution of data orgs from bottlenecks to backbones, and what it looks like to structure teams, metrics, and machine learning pipelines to reduce patient risk and pharmacist burnout. It is a [00:02:00] conversation about designing for impact, navigating uncertainty, and how great data leaders don't just support decisions, they shape how decisions are made. If you enjoy these conversations, leave us a review, subscribe to the newsletter, and share the episode with a friend. Links are in the show notes. Let's now check in with Duncan before we jump into the interview. Hey there, Duncan. Hey, Hugo. So before we jump into the conversation with Suddu, I'd just love for you to tell us a bit about what you're up to at Delphina and why we make High Signal. Duncan: At Delphina, we're building AI agents for data science. Through the nature of our work, we speak with the very best in the field, and so with the podcast, we're sharing that high signal. Hugo: Totally. And I found such a wonderful conversation with Suddu, which we're about to get into, but I was just wondering if you could let us know what resonated with you. Duncan: I've gotten to know Suddu as we've been building Delphina, and every time we talk, I leave highly energized about how our field is changing the world. Suddu shares a framework that even elite data leaders don't [00:03:00] think enough about: building what he calls decision muscle memory. How do your big, irreversible calls hold up, not just next month, but quarters or years down the line? And how do you hone your instincts to do better next time? Suddu pushes beyond the comfortable world of dashboards and models into the messy reality of high-stakes leadership. Let's get into it. Hugo: Hi there, Suddu, and welcome to the show. Suddu: Hi, Hugo. Thanks for having me here. Hugo: It's such a pleasure to have you here, and I'm so excited to jump in, not only to your journey, but everything you're up to at Alto.
But first: you've worked across a variety of different sectors, from e-commerce at Groupon to healthcare at Alto. I'm just wondering, what drew you to these roles, and how did your thinking about data and the impact of data evolve along the way? Suddu: Yeah, when I think about my own career and its journey, it was not really about the models or the metrics. I think it was more about just taking on messy, [00:04:00] unstructured problems, finding solutions, and trying to make things better along the way. My background is in operations research, statistics, and optimization, and when I got into Groupon, it felt like a crash course in experimentation, because we had 30 million users, the business was growing rapidly, and you had to adapt to make real-time decisions at scale. It was super interesting because the problems were messy, and you're trying to answer some critical questions. What kind of deals do you show to the customers? How do you personalize this experience? How do you balance supply and demand? I think that led to some really innovative solutions and some amazing learning along the way with some incredible scientists, business leaders, and product and engineering leaders as well. But what I didn't realize was how Groupon was reshaping the way I thought about problems, because it was not really about, like I said, models. It was about asking the right question along the way, and also thinking about whether this makes sense to drive the right customer [00:05:00] experience. What kind of problems are we solving that move the needle? Where should we be focusing this particular quarter so that we drive the right kind of impact? And what I realized is, over the period of seven years, I started off as a scientist and grew as a leader. It really did reshape the way I thought about problems, because you really zone in on outcomes and not the technology. And you also start to build
trust by solving simpler problems, and you gain trust to solve something more complex along the way. And then, in 2020, I took a left turn to join healthcare. My wife is a cancer researcher, and I've always admired the level of rigor, precision, and grit which goes into healthcare work, and especially in CAR T, where you're literally re-engineering human T cells to fight cancer. And that got me thinking: what if I were to apply the same into something more human, which actually makes a difference in terms of patient life [00:06:00] expectancy as well? And that's where I stumbled upon Alto through a friend of mine. When I met the co-founders, Matt and Jamie, they told me that they had bought this pharmacy in the Mission District in San Francisco, and not just to run it. They were ex-Facebook engineers, and they wanted to rebuild the entire pharmacy infrastructure and ecosystem to solve for the right objectives for the patient. They had already built a world-class engineering team, and they wanted a data leader to help solve problems and take it to the next level. And as I dove into the pharmacy world, I realized that it is deeply opaque, extremely manual, and also extremely fragmented. And the challenge is that all of this complexity is dealt with by the person who matters the most in this value chain, which is the patient. You can see that very clearly in medication non-adherence: more than half of the patients who are prescribed chronic medications don't end up taking medications on time, and it leads to about a hundred thousand preventable [00:07:00] deaths every single year, and it also costs more than a billion dollars every single year in terms of dealing with the opacity of the infrastructure. And all of these problems are actually something which we can fix. It's basically tied to transparency in pricing,
making sure that you can actually get the care to the patients when they need it on time, and also eliminating some of the unnecessary friction which exists in the value chain. So I really joined Alto to help answer the question that the founders had themselves: can we make this a little better for the patients, for the doctors, and for the pharmacists who are working in this field? And as the business has grown, I believe we've solved meaningful problems in this space, but I still feel like we are scratching the surface, and I still ask this question very often, actually. Duncan: So much of the opportunity, Suddu, in data feels like jumping into these new types of businesses and building data functions from the ground up. Doing so is [00:08:00] also pretty scary. You've now really done so at Alto. Can you talk about what you do when you walk into an organization that doesn't yet have a mature data culture? Where do you even start? Suddu: Yeah, it was like an aha moment when I walked into Alto. They did have metrics, but you didn't frequently have explainability around these metrics. And what I realized was, it comes down to three different components. One is deeply understanding the customer, and in this case the patient, journey. The second is understanding the unit economics and the financials and the P&L of the company. And the third part is how decisions are made, and more so, how decisions are made related to trade-offs. Because more often than not, when you do make a data-driven decision related to trade-offs, you're thinking about short-term versus long-term benefits. Are you talking about lifetime value, or are you optimizing for conversion, and how does that impact the patient?
And I think that is where, typically, I've seen data organizations go through four phases, [00:09:00] where you start off from a descriptive phase, where you're trying to understand what happened; you go to a diagnostic phase to understand why that happened and bring clarity to the problem; then you go into a predictive phase, where you're trying to answer what will happen, now that I know the causes behind something; and then you go into a prescriptive phase, which is basically, what should I do now? Which is basically, now that I understand all of these variables, how should I solve and recommend the right kind of solutions across the board? And it's been an interesting journey, because when I started off, I assumed that these phases would be mutually exclusive, but they actually aren't. And they're not linear either, because you have to make progress really quickly across all of these phases as well. To give you an example, we knew that there were changes in gross margin week over week, or there were changes in conversion, but we really didn't understand why those were happening on a week-to-week basis. So we had to go back to the fundamentals, really chart out the patient experience, and truly understand the causes of why something is happening. And [00:10:00] what I realized was, even when you have metrics which are initially designed, more often than not they're designed based on the information which was available at the time, versus the way they should have been designed. And that's driven by the quality of the metric, which is in turn driven by the quality of the data. I genuinely believe that you have to have all of these three things come together to make quality decisions. If you observe companies in their trajectory and how they've made decisions historically, companies have made great decisions when they have all of these three things come together at the perfect time. And even if one of them is off, it leads to suboptimal decisions.
So as a data leader, you have to adopt all of these three things, understand where the organization stands, and figure out what kind of moves you need to make to be able to make an improvement and start to influence the network. Now, the decision could be made by a person or it could be made by a system, and both of those have two different sets of challenges if the metrics aren't designed the best way, and what you're trying to do is influence each of them to make the right set of decisions over a period of [00:11:00] time. The first three months, I spent a lot of time focusing deeply on the customer experience and the patient experience, trying to make sure I really built a point of view and conviction on what kind of problems existed. Over a period of 12 months, I think I had to figure out how to influence this decision framework within the company and build a team at the same time to start to solve meaningful problems. And what I realized is, if you have a healthy feedback loop across both of these, you can really learn a lot within a period of 12 months, which stacks up to building meaningful products over many years. I think I've seen that happen at Alto, and I'm grateful that I've gotten to experience this at both Groupon and Alto. Hugo: I really love how the North Star for you is how we impact decision making and drive better decisions. And I love the slicing into descriptive, diagnostic, predictive, and prescriptive analytics, and your point that these aren't [00:12:00] siloed, that they interact. Say you've got your predictive analytics pipeline up and running, but you have a challenger model, and then you go back to doing descriptive and diagnostic analytics with respect to what's happening there. And these are all staples in our work.
Correct me if I'm wrong, but you started at Groupon in 2013, and now it's 2025. What a decade it's been for data and analytics and machine learning, and now AI. So I'm wondering how you've seen the space shift. What's stayed the same, what's changed, and how have the expectations of data leaders changed over the last decade? I suppose, in a word: what does it mean to lead a data team in 2025 compared to 2013? Suddu: It's an interesting question, and I'll take a stab at trying to figure out a snapshot between what 2012 looked like and what 2025 looks like. I think in 2012, data teams were seen much more as bottlenecks, because you had a ton of infrastructure which was homegrown. You had to build and [00:13:00] maintain all of this infrastructure, and data teams didn't really have the time or energy to move towards insights and solutions as much as they would have liked. You're inundated constantly with additional asks as the business is scaling, and you're trying to keep the lights on, which is a heavy tax you had to pay on a weekly, monthly basis. Experimentation was great, experiments were well designed, but it took time to be able to see significance and understand what it truly meant. I remember at Groupon we had a stellar experimentation team, who were this wealth of knowledge, by the way, and they were often tasked with keeping the experimentation platform running, as well as making improvements to the experimentation platform, as compared to driving product decisions, which in my mind is where that much wealth of knowledge could have gone. When it came to machine learning, models were built, but I think there was a drop-off rate in terms of what made it to production. Iteration speeds were slow, because you're really trying to understand how and where to improve these models. How do I log [00:14:00] this information? Where and how do I prove that this particular model is more significant than the other?
And at an individual level, it took time to ramp up new scientists in the company, because you had to really build knowledge across specific tools, across processes, and across the business domains. Now, fast forward to 2025: it's a completely different space. I genuinely believe data leaders are the backbone for decision making now. All of the tools are plug and play, so it unblocks the data science team to focus on what they can do best, which is basically make a difference to the patients here at Alto, or to the customers, and how you can create a differentiated experience. And that is a huge burden off your shoulders, because you can build models and you can actually test them. It's not about whether we can take a model to production or not. It's about how quickly we can test and learn from this and iterate, as compared to having all this energy being spent on watching the drop-off of the models you built. Now you're seeing more models make it to production, and you actually test, learn, and iterate much faster. [00:15:00] Experimentation has become a platform now, so it's actually much easier to experiment across the board. So I genuinely believe that as a data leader, it's shifted from being this pure-play technical expert to being part product thinker, part engineering and platform strategist, part storyteller. I think the profession itself is meshing into something different, which is exciting. I do think there was one thing which was interesting when I think back to 2012: you often understood exactly what was happening under the hood, because you had to build models and think through first principles all the time. And I think because of the scale and the much more plug-and-play approach, it's a different world now. But it's also an exciting space to be in as a data leader in 2025. Hugo: Absolutely. So much richness in there. And one takeaway is: from bottleneck to backbone.
So originally a bottleneck for a lot of processes and decision making, to now being a fundamental backbone, because we have all the infrastructure and tooling [00:16:00] set up to allow us to do this. I am very interested in how we build strong data orgs, and in particular establishing trust and alignment. In my experience, in many companies teams struggle to agree on basic metrics, or have too many metrics they're chasing as well. So I'm wondering how you go about creating shared definitions, building trust in what to work on, and what the North Stars should be in data across functions. Suddu: Yeah, I think it comes down to the core of where organizations spend time overall. If you're spending a ton of time trying to explain the uncertainty of the past, then it becomes extremely hard to make decisions for the future. Let's be honest here: metrics are incredibly powerful and valuable when they lead to clarity, consensus, and explainability. Otherwise, they aren't actually useful if they don't do all of those. I think the key question you asked is, how do we have shared definitions here? [00:17:00] I would say that, for a crisp relation to the problem you're trying to solve, the metrics you need have to be precise, unambiguous, and reproducible. And going beyond that, the key part is they need to be able to explain what's happening causally; we need to be able to explain why something happened. We often talk about metrics where we are connecting patient journeys or customer journeys and financials, and I think it's really critical when we have metrics which can bridge across the board: metrics which can bridge product, operations, and engineering, and start to create a shared common platform where you can build products as well as execute operationally. Let me give you an example from Alto.
We were trying to explain positive patient experience, or negative patient experience for that matter. You have [00:18:00] NPS, Net Promoter Score, which is basically trying to explain whether a patient was delighted or unhappy. But Net Promoter Score is a low-sample problem, right? What we ended up having to do was take a little more of a holistic approach to this. We had to stitch together all of the input data, tying in every single thread of the patient experience: their conversion, how their prescription was processed through the value stream, any time they called or messaged the pharmacy, what their pricing was, whether the medication was out of stock or not. And we tried to match it up to the NPS over a long period of time. What was interesting was that some patterns started to emerge, because we were trying to actually understand: was the patient unhappy, and if so, why? And was the patient delighted, and if so, why? Because those are separate questions which we are trying to answer. I think the negative patient experience was more intuitive. Let's say the patient calls and they have a long wait time; they tend to be unhappy. Or if the medication is on a supply [00:19:00] shortage and it's out of stock, they're unhappy because they're not able to get it on time. Or if we have promised a delivery window and we don't deliver it on time, they're unhappy as well. Those were fairly intuitive factors which came up. But when we thought about delight, I think it was interesting, because what we saw was, when a prescription appears at their doorstep, they weren't even anticipating it because we processed it so fast, and when the pricing was even better than what they expected, patients were really delighted. In hindsight, when you put that together, it looks intuitive, but it took us some time to be able to map out those relationships through the value stream and what the patients were submitting.
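[Editor's note] The stitching Suddu describes, joining per-prescription journey signals to NPS responses to separate drivers of unhappiness from drivers of delight, might look roughly like this in code. Every field name and threshold here is invented for illustration; this is not Alto's actual schema or logic:

```python
# Flag likely drivers of unhappiness vs. delight for each prescription,
# to compare against the patient's NPS response.

def unhappiness_signals(rx):
    """Intuitive negative-experience factors: long waits, stock-outs, late delivery."""
    return {
        "long_wait": rx["call_wait_minutes"] > 10,
        "out_of_stock": rx["out_of_stock"],
        "late_delivery": not rx["delivered_in_window"],
    }

def delight_signals(rx):
    """Less obvious positive factors: faster-than-expected processing, better price."""
    return {
        "surprisingly_fast": rx["hours_to_ready"] < rx["expected_hours"],
        "better_price": rx["price"] < rx["expected_price"],
    }

def label(rx):
    """Rough tri-state label to line up against NPS promoters and detractors."""
    if any(unhappiness_signals(rx).values()):
        return "likely_detractor"
    if all(delight_signals(rx).values()):
        return "likely_promoter"
    return "neutral"
```

Matching such labels against actual NPS responses over time is one plausible way the patterns Suddu mentions could be surfaced.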
This actually created a really strong network of shared definitions across business teams. It led to two separate metrics which helped us experiment, such as the perfect prescription score, where we could actually score every prescription going through the value stream, [00:20:00] and we were able to see whether it beat or met patient expectations. It also created another metric called the self-service rate, which basically measured how many times we had to touch a prescription before it was processed. This aligned the product, design, data engineering, and data science teams to focus on optimizing for these metrics within the product, where we moved from lagging indicators to leading indicators, and we were able to actually move the needle on the patient experience. It also created a shared definition for business teams, because now you could see how the prescription was flowing through the value stream, what the likelihood was that it would actually go off the happy path, and how you escalate and solve for this ahead of time as we built out this automation. And as we aligned here with the operations teams, the results were pretty amazing. When we started off on this path of automation about three years ago, on average, the median time from when the doctor sent the prescription to the time it was scheduled by the patient was about two and a half hours. Fast forward to now: the median time is two minutes, [00:21:00] and it's a massive change, because we went from two and a half hours to two minutes, and the NPS remained at 85, because we really spent time understanding what the causal factors were and how we move the needle on all of these causal factors while maintaining patient experience and building automation. Duncan: Wow.
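[Editor's note] The two metrics Suddu describes could be computed, in toy form, like this. The field names and the scoring rule are invented; the episode doesn't spell out Alto's actual definitions:

```python
def self_service_rate(prescriptions):
    """Share of prescriptions that flowed through with zero human touches."""
    untouched = sum(1 for p in prescriptions if p["human_touches"] == 0)
    return untouched / len(prescriptions)

def perfect_prescription_score(p):
    """Toy 'perfect prescription' check: count how many expectation criteria
    (no touches, on-time delivery, price at or below expectation) were met."""
    return sum([
        p["human_touches"] == 0,
        p["delivered_on_time"],
        p["price"] <= p["expected_price"],
    ])
```

Scored this way, both metrics become leading indicators: you can watch them per prescription as it moves through the value stream, rather than waiting for a lagging NPS survey.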
The depth of that analysis is really inspiring, Suddu, and one of the things that is really critical to doing analysis like that is having an amazing team that is capable of diving into the numbers at the level you're describing. Something you shared before is that you've built teams with extraordinarily long tenure and low churn, both at Groupon and at Alto. How do you go about retaining talent, and what's your strategy to drive that kind of loyalty and stability? Suddu: That's a great question, and it's something which is very near and dear to my heart, actually. I've come to believe that one of the main [00:22:00] reasons for both retention and thriving while you retain people within a data or technology team comes down to three main factors. One is the company dimension, I would say. The other one is the team dimension, and the third one is the individual dimension. Let's go into each of these. Let's talk about the company dimension first. People really want to work in companies that solve problems that matter. For example, at Groupon, the main objective was to support small businesses as you saw a shifting digital landscape. It was through attracting new customers, launching deals, and supporting local businesses. And at Alto, the mission is even more personal. You're literally trying to help patients improve their quality of life by delivering medications to their doorsteps, and you're doing that by eliminating a ton of friction through the value stream and getting their medications to them on time at a low cost. [00:23:00] I do believe that when a company's purpose feels real and it's tied to impact, people do stick around. The second one is the team dimension, and one of the most critical parts of the team dimension is the team leader, your manager. But the team dimension itself is underrated here. There's some fascinating research by Dr. Richard Hackman and Dr.
Anita Woolley, who talk about collective intelligence in teams, and I think the idea is that it's not just the smartest people who build the best team all the time. It's an "and" condition, because you have the smartest people who also deeply care about each other, who can actually teach each other how to solve problems, listen to each other, and create an environment which they can thrive in. There's an amazing Hidden Brain podcast episode by Shankar Vedantam which talks about the secret of great teams, which I always highly recommend to new managers. [00:24:00] And the third dimension, I would say, is the individual dimension, and this is personal motivation. It's really understanding what growth means to an individual. It could be in terms of knowledge, it could be in terms of career trajectory and feeling like you're tackling meaningful problems as well, along with being fairly compensated and feeling like you're supported in this. Everyone has their reasons to stay, but I feel like when these three dimensions hit a critical threshold, you don't just retain people in a company; they actually thrive in the company. So how does this all come together? As a leader, I think it's essential for you to continuously work on all three dimensions on behalf of the team. You're moving the needle on the company dimension by taking on challenging problems which work towards the mission and the objectives of the customers. You're continuously improving the collective intelligence of the team by enabling them to take on challenging problems, as well as hiring the right way, in a way which fits [00:25:00] with the team IQ. And you create space for people to build knowledge and grow as they evolve as people. And I think when all three of these are in sync, the benefit always flows outwards, because you build great products.
You also build a great team where people stay, and I think they thrive. Hugo: I love it, Suddu. And I think not only retaining people and preventing churn, but, as you said, having an environment where people thrive and find meaning as well, and can make meaning. I'm wondering about the type of people who really work well on teams that you lead. Last time we spoke, I got the sense that you are interested in working with people who can take a problem from concept to execution. So I'm just wondering how you think about role design. Do you prefer full-stack practitioners or specialists, or is there room for both? Suddu: Yeah, it's a great question. I do think there's room for both, but I think there's a tilt towards full-stack [00:26:00] practitioners, to be honest. If you think about 10 years ago, there were more specialists, because of how much infrastructure was built internally within the company. Now, as data teams have evolved over the last 10 years, you do need people who can actually take a concept, from identifying a metric and a problem, to its completion in a short duration of time. That involves bringing a combination of skill sets together, and as the data team evolves, the complexity of problems obviously increases significantly, which does create space for specialists. But I think that boundary is starting to shift more and more, where you can have full-stack practitioners who grow in each space and eventually become specialists, while remaining full-stack practitioners to some level. When you have people within a company, when you have data scientists, they bring a unique set of skill sets. This field is extremely vast and it's impossible for everyone to know everything, so each person brings a unique combination of skill sets at any [00:27:00] given time. It could be depth in a specific area, or it could be breadth, along with business domains.
I think that creates an interesting T-shaped framework for every individual in terms of how their skill sets are aligned. And if you map out the T-shaped framework for every individual within your team, you can actually then understand where skill sets are distributed within the team: where someone has depth in a specific topic, and where they have breadth. It also helps you map out what kind of problems you're tackling right now, and where you have the right level of skill sets to be able to solve the problem. It also creates a network where people can coach each other and increase depth and breadth across specific topics, and individuals can explore new problems which use adjacent skill sets as well. And I think by creating a network of that sort, you can actually continue to push on this boundary by creating full-stack practitioners, because you can always tap into an individual's strength, have them open up to problems, and they can pick up [00:28:00] tools and skill sets along the way. And when they're actually able to take a problem from start to completion, they become entrepreneurs in their own way. They can really understand the problem, and it opens up space for newer skill sets. And as you do that across the team, it opens up skill sets across the entire network within the team. And I strongly believe that when you do create this network, you can continue to push this boundary of needing a specialist all the time by continuing to develop full-stack practitioners within the team, and then revisit when your problems become exponentially more complex, where you need someone who's done deep research and your company has evolved to be a lot more scaled than it is right now. I hope that answers the question. This network can exist, but I do think there's a space in which you can actually have full-stack practitioners for a very long time through the data journey now, as compared to what it was a few years ago.
Duncan: I'd love to take that one step further and talk a bit more about the [00:29:00] actual applications and how you're using AI and LLMs for decision support at Alto. Maybe you can talk a little bit more about why invest in ML and data at a pharmacy, and how that pulls, of course, from your team strategy. Suddu: Yeah, it's a very valid question, by the way. And to answer that, I think we need to dive into what the typical pharmacy workflow is. Currently, the doctor sends the prescription to a pharmacy; let's take Alto in this example. They usually send the prescription in either a semi-structured way or an unstructured way. The pharmacy is responsible for structuring that information and billing it to the insurance. Let's take the happy case here, which is basically: the prescription is structured, we bill it to the insurance, the insurance covers the medication, and the patient has a certain out-of-pocket amount to pay. Based on that, the patient schedules the medication, and in [00:30:00] parallel, the pharmacist checks it for safety and ensures that it's compliant. The medication is then filled into the bottle, where a pharmacist verifies again to make sure that the prescription is valid and what's in the bottle is confirmed, and it's then dispatched and delivered to the patient. Now, this is the happy path for the prescription, but there are a lot of off-roads along this path. What if the medication is not covered for the patient? What if we need additional authorization and we need to submit paperwork to solve this? What if there's a national shortage of this particular medication and you need to find alternatives? What if there is some risk which a pharmacist identifies, and you have to go back to the doctor and address upstream issues? And given that these are the norms, it always becomes extremely contextual depending on who is addressing these problems, and exploration becomes extremely critical in a regular pharmacy.
There isn't much exploration that happens, because [00:31:00] exploration takes time and it costs you money as well. And given that pharmacy margins are extremely thin, typically exploration isn't done as much. Let's take a real-world example for that: metformin, which is a common diabetes medication. There are more than 600 forms of metformin available from various manufacturers. Two different people with the same formulation could see completely different costs. Hugo could see $0 out of pocket, I could see $500 out of pocket, and the next month that could be completely flipped. And it's all dependent on the insurance and the plan which we have right now. So if you're a pharmacy and you're trying to do the right thing by the patient, which is minimize cost, maximize speed and safety, you have to explore. And that's where machine learning comes in. It not just helps us scale exploration, but it helps us do it faster, more consistently, with less human overhead. Within Alto, we apply [00:32:00] machine learning in four different ways. The first phase is clarity, where we try to take unstructured information and structure it so that we can actually understand it better. The second portion, like I mentioned, is optimization. We want to be able to predict things such as insurance coverage likelihood, whether a prior authorization or additional paperwork is needed, whether we can find a lower cost therapeutic alternative for the patient which might be covered, and how we get the best path to fulfillment for each patient. The third part is safety. We built an in-house AI pharmacist assistant, which helps with real-time decision making and support, eliminates clinical decision-making burden from the pharmacist, and makes it easier for them to validate safety, efficacy, and also compliance for the patient. And the last portion is intelligent task assignment.
This is one of the things which we might overlook: not everything can or should be automated within a pharmacy. There are still [00:33:00] scenarios where you need a real expert to step in. So when you think of machine learning use cases, we should also be able to assign tasks intelligently to our care teams. Take each incoming issue or task, let's say exploring billing: the way you'd explore billing for a fertility patient is extremely different from the way you would explore billing and what medication is covered for a diabetes patient versus a cardiology patient. And not all of these should necessarily be automated all the time. But it also creates a strong feedback loop for systems to understand where there are patterns and what can be automated as well. So you still create a high quality human in the loop process, and you can still scale your system to train for future purposes as well. So the main question is: in a system that is so contextual, where the best answer often depends on who's asking, what their plan is, what the medication is, and when they need it by, machine learning isn't just [00:34:00] helpful. I think it's necessary. It helps us manage complexity, scale judgment, and deliver care, not just in a fast way, but in the right way. And I think that's critically why we invest in AI within Alto. I don't think the objective is about replacing people. It's about empowering them to make sure that they can make decisions with clarity, precision, and support, and keeping the patient at the center of all of this. Hugo: That's such a fantastic elucidation of why ML and AI matter today at Alto and in pharmacy more generally. I'd love to drill down into a particular product that I know you worked on and helped launch: a system that supports pharmacists' decisions, the AI pharmacist assistant. I'm wondering what that looks like in practice and what type of problems it's solving.
Suddu: Yeah, it's again a problem which is near and dear to my heart. As I spent time at Alto, one of the things I learned was that pharmacists are dealing with an incredible amount of stress and burnout, and one [00:35:00] of the biggest reasons is the sheer number of decisions which they need to make. They need to actually verify the prescription. They need to actually ensure that the pricing is right. They need to make sure that we have the right medication availability. And at the end of it all, the main objective is to make sure that we can dispense medications safely. And the scope of a pharmacist's role has increased tremendously since 2020. Now, within other retail pharmacies, you have pharmacists administering vaccines as well. But if you think about the core part of the decision making for the pharmacist, the role which pharmacists actually play and excel in is counseling and guiding patients, which actually accounts for only 15 to 20% of their time right now. The remaining 80 to 85% of the time is spent on coordinating with doctors, ensuring that they have the right paperwork, and actually making sure that the medications are safe internally. We always have this view within Alto, which [00:36:00] is basically to ensure that we can help pharmacists operate at the top of their license, and the more time you unblock for pharmacists to provide counseling and guidance to the patients, the more you can take on the burden from a technology standpoint to make it easier from a decision making standpoint. Now, this decision support is basically intended to minimize the burden and provide all of the information at their fingertips so that they can make a safe decision for the patient. Let's walk through the decision tree for a pharmacist. When a pharmacist reviews a prescription, they do a lot more than just take a quick glance at the prescription. It's actually a high stakes, structured decision tree which they go through every single time, for every prescription.
The first part is the safety check, where the pharmacist evaluates: is this safe for this individual patient? Do they have any harmful drug interactions? Do they have any allergies on file, where I can look into the patient's history and understand if this [00:37:00] particular medication would interact with any allergies which the patient has? They make sure that the medication dosage is within safe limits, which could vary by patient. And they're also checking for duplicate therapies, where if there are multiple prescriptions which are intended to do the same thing, they can actually validate that you don't have any duplicate therapies on file. The second part of what they assess is: is the prescription written clearly and completely? Do the instructions make sense? Are the dosage, quantity, and refills consistent with the way this particular medication is prescribed? And at the end of it, is the data entered into the system accurately? And then there's a third, more complex layer, which is basically compliance and regulation. Pharmacy regulations change so widely across each and every state. Alto operates in multiple states, and the way you prescribe a medication and assess a prescription in one state could differ widely in another state. So it could be compliant in one state, but it needs an extra verification in another [00:38:00] state. It's extremely hard for the pharmacist to make that decision in real time. And then there's a source of truth behind all of this, which is the clinical guidance. And all of this information is with the NIH and FDA as detailed documentation. These aren't quick summaries; they're actually dense, multi-page documents for each medication, outlining how it should be used, when it's safe, when it's not, and how it interacts with different patient conditions. Integrating all of this in real time for every unique patient scenario is incredibly demanding.
Pharmacists are incredibly well trained, but as the business scales, it's extremely hard to make these decisions in real time. At the end of all of this, the pharmacist makes a judgment call. Is this safe to dispense? Do I need to adjust the prescription so that it's within the approved guidelines? Should I give some patient-specific instructions so that they understand what medications they're taking, so that it's safe? Or do I need to go [00:39:00] back to the doctor because I'm seeing some conflicts and I need to actually confirm with them whether the treatment is the right one here? And what we are doing with the AI pharmacist assistant is trying to build a set of modular tools with AI which makes it easier for pharmacists to go through this set of questions and eliminates burden so that they can actually make these decisions in real time. Hugo: Fantastic. We've talked a lot about data and ML and AI, and of course, we live in the world of generative AI and large language models these days, which in some ways are synonymous. But to say that can miss several very key points, such as, for example, the use of LLMs for things such as in-context learning, which is incredibly powerful. So something we've talked about before is how large language models are often associated with generative use cases, but you've applied them differently. So I'm wondering how LLMs are actually being used in your systems, and why that approach? Suddu: Yeah, it's a very valid question, because taking a generative [00:40:00] approach to healthcare is risky, and you have experts in this particular field for a reason. The main way we use LLMs is to extract information from trusted clinical sources, classify information in free-form text, and also understand nuanced context from the prescription language. And those outputs don't go straight to the patients. They actually get fed to a set of decision engines.
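The extraction role described here — an LLM as a structurer whose outputs feed decision engines, never patients directly — can be sketched roughly as below. Alto's actual models, prompts, and schema aren't public, so the "LLM" is a stubbed rule-based stand-in and the fields are invented; the point is the shape: free-form text in, validated structured fields out, with incomplete extractions routed to a human.

```python
# Sketch: LLM purely as an extractor, with schema validation before any
# downstream decision engine sees the output. The llm_extract function is
# a rule-based stand-in so the example runs; a real system would call a
# model here and validate its JSON the same way.
import json
import re

REQUIRED = {"drug", "dose_mg", "frequency_per_day"}

def llm_extract(text: str) -> str:
    """Stand-in for an LLM call that returns JSON for the requested fields."""
    drug = re.search(r"^[A-Za-z]+", text).group(0).lower()
    dose = int(re.search(r"(\d+)\s*mg", text).group(1))
    freq = {"once": 1, "daily": 1, "twice": 2, "bid": 2}[
        re.search(r"(once|twice|daily|bid)", text.lower()).group(1)]
    return json.dumps({"drug": drug, "dose_mg": dose, "frequency_per_day": freq})

def structure_prescription(text: str) -> dict:
    fields = json.loads(llm_extract(text))
    missing = REQUIRED - fields.keys()
    if missing:
        # Extraction incomplete: route to a human, never guess.
        raise ValueError(f"extraction incomplete: {missing}")
    return fields

rx = structure_prescription("Metformin 500 mg, twice daily with meals")
print(rx)  # {'drug': 'metformin', 'dose_mg': 500, 'frequency_per_day': 2}
```

The validation gate is the design choice that matters: the generative component only ever produces candidate structure, and anything that fails the schema falls back to a person.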
So we use these structured insights from large language models to build multi-class classification problems, where we take these through multiple sets of decision trees and assess risk, medication appropriateness, and completeness. And what we end up doing is we have trained on historical data, and we blend it with the LLMs to eventually make a decision as an output from these models. So if you imagine this pipeline architecture, LLMs act as a smart pre-processor which is highly tuned to [00:41:00] understand clinical language; they pass on these features to the classifiers, which are then optimized for precision and recall. The ultimate goal is that when you do have a prediction which comes out of this, which is to say, hey, this prescription is risky and you need to actually clarify, pharmacists should be able to trust this prediction. And as you introduce these human in the loop signals, which are high quality in nature, they help us reform and refine our LLM prompt strategies in terms of how we use the information from these trusted sources. We also improve feature engineering, which can help continuously boost the performance of our classifiers too. And we are very much in the initial stages of this particular problem. What we are also aware of is that as the business scales and the volume of these prescriptions increases, the data set is going to become more imbalanced. And we are starting to look at more advanced techniques, especially Bayesian neural [00:42:00] networks, BNNs, and reinforcement learning in tandem with Bayesian neural networks. Because I think BNNs play a critical role in not just giving you a point estimate; they give you a distribution, which is absolutely essential in healthcare. Because we are not interested in just predicting when something is risky. We also want to know how confident the model is, what the range is, and how we trust that error.
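One minimal way to illustrate the distribution-over-point-estimate idea: instead of a single risk score, collect several and route on both the mean and the spread. Here a toy ensemble of scores stands in for a Bayesian neural network's predictive distribution, and the thresholds and sample values are invented.

```python
# Sketch of uncertainty-aware routing: a prescription's risk is a
# distribution of scores (toy ensemble standing in for a BNN), and the
# routing decision uses both the mean risk and the model's disagreement.
from statistics import mean, stdev

def route(risk_samples, risk_cut=0.5, uncertainty_cut=0.15):
    mu, sigma = mean(risk_samples), stdev(risk_samples)
    if sigma > uncertainty_cut:
        return "manual_review"   # model is unsure -> send to a pharmacist
    if mu > risk_cut:
        return "flag_risky"      # confident and risky -> clarify before filling
    return "auto_proceed"        # confident and safe

# Three hypothetical prescriptions, each scored by a 5-member ensemble:
print(route([0.05, 0.08, 0.06, 0.04, 0.07]))  # auto_proceed
print(route([0.85, 0.90, 0.88, 0.92, 0.87]))  # flag_risky
print(route([0.20, 0.70, 0.45, 0.90, 0.10]))  # manual_review
```

The third case is the one a point estimate would get wrong: its mean risk is moderate, but the disagreement is large, so it goes to a human — the "you act differently when you're unsure" behavior described above.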
I think this kind of probabilistic reasoning is a natural fit in healthcare, because every decision involves probabilistic reasoning, and you act differently when you're unsure. And BNNs alone aren't enough, obviously, which is where you have reinforcement learning, which would eventually help optimize those decisions over a period of time based on how the environment responds; in our case, how pharmacists interact to correct the system. So as we route these prescriptions, should it go to a manual review? Does it need a specialist, or can the [00:43:00] model learn the optimal route based on uncertainty? And with BNNs providing uncertainty estimates, I feel like reinforcement learning would provide very valuable feedback so that we can end up making the right decision for the system. I think the main objective at the end of the day is to have a risk-aware, uncertainty-aware, self-improving system, and that's what we are trying to build, utilizing LLMs with expert feedback provided by the pharmacists. I think it's mainly to help them make smarter, safer decisions and continuously improve the system, so it makes it easier to understand risk and uncertainty through the healthcare system. But eventually, if you do need to validate more, we would like for the system to be able to flag it safely so that you can actually get the additional information which is needed too. And that's how we are using LLMs along with machine learning overall. Duncan: One of the things that you touched a little bit on there is that LLMs are [00:44:00] non-deterministic. And also, of course, the stakes are so high in healthcare. So maybe you can speak a little bit more about how you balance automation with the need for safety, explainability, and of course, regulatory compliance in your setting. Suddu: Yeah. With Alto, we have taken a very deliberate and phased approach towards automation and AI. It always started off with keeping patient safety at the center of all this.
And also pharmacist trust. At the center of all of this, we started off by moving away from manual workflows by taking on extremely low risk and low stakes problems which would add value and enable speed within the pharmacy prescription processing framework. The earliest part, like I mentioned, was prescription intake: being able to structure something which was relatively unstructured. We started using models like support vector machines, [00:45:00] SVMs, and also named entity recognition, NER, models to help categorize key fields within the prescription. What we also did on top of that was we learned from the pharmacists in terms of shorthand notations and clinical abbreviations, and built a pre-processor on top of all of this. This really helped unblock pharmacists in terms of how prescriptions were read and structured in the first place. What was critical here was that every single piece of this was designed to be explainable. Every entity was logged, and every correction made by the pharmacist was tied back to the original prediction. This also helped us fine-tune models and thresholds to make sure that we started to minimize the intervention, or the edits, which were needed. So as these models learned, we realized that over a period of time we had cut down by 50% the amount of information pharmacists needed to edit, and these models became more precise. [00:46:00] But we didn't just deploy and walk away. Each and every feature which was developed was tested rigorously. We had a set of pharmacists who were part of the early access group, so we'd build a model and share it with them. They would review it and provide feedback. We would run models in shadow mode for a very long time and understand how pharmacists were making edits, and whether the models which we were running in shadow mode were actually solving for those problems.
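Shadow-mode evaluation of this kind can be sketched as logging the model's silent predictions next to what the pharmacist actually did, then computing agreement before anything ships. The log records and gating thresholds below are invented for illustration, not Alto's actual bar.

```python
# Sketch of shadow-mode evaluation: the model runs silently alongside
# pharmacists, and nothing is promoted until its logged predictions
# clear a strict agreement bar against real pharmacist behavior.

shadow_log = [
    # (model_predicted_edit_needed, pharmacist_actually_edited)
    (True, True), (False, False), (True, True), (False, False),
    (True, False), (False, False), (True, True), (False, False),
]

def shadow_metrics(log):
    agree = sum(1 for m, p in log if m == p) / len(log)
    edited = [p for _, p in log if p]
    # Of the cases pharmacists actually edited, how many did the model catch?
    recall = sum(1 for m, p in log if m and p) / len(edited) if edited else 1.0
    return agree, recall

agreement, recall = shadow_metrics(shadow_log)
print(f"agreement={agreement:.3f} recall_on_edits={recall:.3f}")

# Gate: promote only if the model clearly beats the bar (thresholds invented).
ship = agreement >= 0.85 and recall >= 0.95
print("promote to production" if ship else "keep in shadow mode")
```

The asymmetry in the gate mirrors the point about risk: missing an edit a pharmacist would have made (low recall) is treated as disqualifying, while an occasional over-cautious flag only dents agreement.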
And only if we actually beat all of the metrics, not by a small amount but by a significant amount, did we actually start to roll it out into the production and experimentation phase. As we moved into higher risk domains, the stakes became higher. For example, with the AI pharmacist assistant, the bar is even higher. We have anchored on the same core principles, which are: co-design with pharmacists, do rigorous prototype reviews, testing, and benchmarking, and also apply strict gating on safety and performance. Shadow [00:47:00] mode deployment isn't a checkbox; it's a necessity now, because what you need to do is run this over a long-term phase and validate for consistency over a period of time across all possible edge cases. Explainability is super critical, so we can actually log every single edit and have them reviewed by the pharmacist to make sure that the models are actually performing as expected. And I think the most counterintuitive part of all of this is that we did not optimize for speed within healthcare. Like you mentioned, given that the margin of error is so razor thin, we actually optimize for consistency and rigor, and by building models which are step functions above what was done before, you build trust. Because it's much harder to build trust, and it's very easy to lose it. And I think that's the responsibility which, as a data science team, we carry all the time: to make sure that these are foundational pillars and every single model is significantly better than the previous one, and they're [00:48:00] thoroughly tested and validated before we launch them in production. Duncan: It's really inspiring to hear about that journey and the hand in hand collaboration you've really had with the pharmacists and the rest of the team.
Maybe you could speak a little bit more as well about how you actually measure these systems and the kinds of specific metrics you use to judge whether they're working. Is it around time saved, reduced cost, errors, all of these things? Suddu: Yeah. If you were to think through this, it's actually all of the above. We do have to solve for safety first, and then you optimize for quality and then efficiency. Let's start off with safety. Within healthcare, the average prescription error rate is somewhere around 0.8 to 1.3% across the industry, but we actually hold ourselves to much higher than that. Our medication dispensing error rate is actually a fraction of that, and in fact, [00:49:00] as we have launched these systems, we have gotten even better. So we have not just maintained the advantage of having safe dispensing; we've actually improved on that further as well. And this is on top of the complexity of the prescriptions and the volume growing over the period of time. And this could have only been done because we designed these systems hand in hand with the pharmacists. The next portion is quality. Like I mentioned before, we designed the perfect prescription score, which is trying to track how well the prescriptions are going through the value stream. We even built sub-metrics like a billing health score, which actually tries to track whether we did enough exploration and understood if we could reduce the patient's price even further. We also have a delivery health score, which tries to track, from the time when the prescription left the pharmacy to the point in time when it actually got delivered, whether it was done without any defects. Overall, by creating sub quality metrics and an overall quality metric, what we are trying to track is: how do [00:50:00] we hold ourselves accountable to optimizing for patient experience? And each of these metrics is actionable.
So you can actually go back and understand defects, understand where we missed the mark and what we need to do to improve, and also have very strict guidelines and SLAs across each of them. And the third part is efficiency. I did speak about the ready to schedule time dropping from two and a half hours to two minutes, but what is critical here is that once all of these factors are met, the ready to schedule time is actually a delight, because if you don't do any of them well, it doesn't matter whether you process the prescription within two minutes; the patient is unhappy because you didn't actually get the basics right. And you do have to think about cost itself. Pharmacies, like I mentioned, have a razor thin margin. Typically the margin is around 4%, and there is no room for inefficiency. Over a period of time, we've been able to reduce the cost of processing a prescription by 60%. [00:51:00] And automation has been a significant part of all of that, because what we've done is make sure that we solve for safety first and quality, and over a period of time, as you build automation, that can stack up to build momentum, but not at the expense of safety and trust. And that's where we have been able to enable free delivery to the patients and still maintain high touch care where it's required, and ensure that patients are getting the white glove treatment they deserve. So at the end of it, yes, we have saved time, we have made sure that safety is the significant priority here, and we've also had fewer errors, and all of those things matter. And I think, in my view, when you look at it through the lens of the patient, it's only possible when your systems are explainable and you're constantly learning from actual real-world use, experimenting, and using that as a feedback system to improve your models as well.
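The roll-up of sub-metrics into a single quality score, with safety as a hard gate rather than just another weighted term, might look loosely like this. The sub-metrics, weights, and gate are invented for illustration; Alto's actual perfect prescription score formula isn't public.

```python
# Sketch of a composite "perfect prescription" style score: weighted
# sub-scores for quality, gated by safety. An unsafe fill scores zero no
# matter how fast or cheap it was. Weights and sub-metrics are invented.

WEIGHTS = {"billing_health": 0.3, "delivery_health": 0.3, "intake_accuracy": 0.4}

def prescription_score(sub_scores: dict, safety_ok: bool) -> float:
    if not safety_ok:
        return 0.0  # safety is a gate, not a trade-off
    return sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS)

fast_but_unsafe = prescription_score(
    {"billing_health": 1.0, "delivery_health": 1.0, "intake_accuracy": 1.0},
    safety_ok=False)
solid = prescription_score(
    {"billing_health": 0.9, "delivery_health": 0.8, "intake_accuracy": 1.0},
    safety_ok=True)
print(fast_but_unsafe)  # 0.0
print(round(solid, 2))  # 0.91
```

Structuring the metric this way makes it actionable in the sense described: a low score decomposes into exactly which sub-metric missed its SLA, and no amount of speed can mask a safety defect.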
Hugo: So we've talked a lot about the past and present of data, ML, and AI, and I'd like to look forward now, particularly because [00:52:00] the way you've framed data as the backbone for decision making resonates so strongly with me. And last time we spoke, you hinted at an idea that the next generation of data leaders will go past that and take on broader executive roles. So I'm wondering, firstly, if you can expound on what you mean by that, and then tell us a bit about what type of skills or mindset shifts you think will be key to making that leap. Suddu: Yeah, it's a great question. And as data leaders have become the backbone of decision making, like you mentioned, I think one of the key portions is to actually build real muscle around making big irreversible decisions and getting better at making them over a period of time. And this is one of the things which my mentor said, which is: improve your decision accuracy rate, not for now, but the aging of your decision accuracy rate. And that's something which has stuck with me for a very long time, because it's actually impossible to measure, [00:53:00] but you need to have some form of a retro framework to be able to look back and say, did I make the right decision at that time or not? And what I realized is the best technologists have this innate ability to build conviction and make calls when they have limited information, and it's not necessary for us to have all the information all the time to make the right decision. But I think it's one of the things which data leaders need to work on, where they're not just making a call based on the accuracy of the information or the accuracy of their prediction; it's more about the accuracy of their actions and how you track that over a period of time. I think the second portion is storytelling. I think the most effective CEOs and product leaders are great at this.
They have this craft to build a narrative, and they know all the metrics, but they still have this incredible way of narrating the story from a user perspective. They can talk about the user [00:54:00] journey, they can talk about the pain points and how they've helped solve problems. I think data leaders need to do the same: pair that analytical rigor with storytelling, and I think that unblocks this next phase to grow as executive leaders as well. And I think the third part of it is you do have to be a leader whom great people want to work for and stay with for a long time. And I've actually seen a lot of growth here as I interact with the next generation of data leaders. They're a lot more aware, a lot more intentional, and they have all the tools in the toolkit now to be able to build strong, enduring teams. And at the end of the day, I think it's not just about solving hard problems; it's about building the company and building the team, which actually makes a difference in the company trajectory. And I think, given all of these things coming together, I do think data leaders are well set up to take on larger executive roles. Duncan: I love that. I especially love your description of the decision [00:55:00] accuracy rate, and really preparing yourself to make irreversible, one-way trapdoor decisions. That's a really hard thing for most people to do, and it doesn't come naturally from the academic background that many data leaders come from. Curious to talk a little more about retrospective advice: if you were looking back at your own role, or for somebody else stepping into their first time being a head of data, what's one thing you wish you had known earlier and that you would've invested in differently? Suddu: Yeah, it's a great question. I think when I look back, one of the biggest things I've realized is you're not managing experiments, you're not managing dashboards, you're not managing intelligence.
You're in fact actually managing the entire decision framework of the company. You're actually influencing how decisions are being made, who makes them, what decisions are being made, and how that evolves over a period of time. [00:56:00] And the hard truth is that building this decision framework is extremely hard for a company, and especially as the first data leader, you are responsible for this. And it's not just about building it overnight or building a metrics layer which solves for it. It takes constant iteration. And even when you think that you've gotten it right, it requires constant reinforcement and improvement, because your objective is to architect a framework that is precise, unbiased, and comprehensive, whether the decision is being made by a system or a person. And I think that distinction matters, because when a system is making a decision, you're optimizing for bias and precision: clean inputs, clear rules, clean outputs, and you're continuously trying to solve for it. But when a human being is making a decision, you're also optimizing for noise, and it's your job to make sure that you design tools, processes, and also a culture that [00:57:00] reduces that noise and sharpens that signal. What I didn't appreciate before was how central this was to the business, because I think there is this tendency to pour your energy into the tech stack, into the pipelines, into the models. And those are important, and they do drive wins, and they're tangible. But unless and until you focus on improving this decision framework, you will not be able to sustain the tangible wins. So I think my advice would be to make sure that you tie the team's work to customer and business impact from day one, and ask yourselves every quarter: is our decision framework getting clearer, faster, and more trusted, or is it just more complicated? Because I believe, at the end of the day, the decisions which you make are the strategy. Everything else is execution.
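The "retro framework" for tracking how decision accuracy ages, which has come up a couple of times in this conversation, could be sketched very loosely as a decision journal with fixed review horizons. All entries, horizons, and verdicts below are invented for illustration.

```python
# Toy sketch of a decision journal: log big decisions when you make
# them, then re-score each one at fixed review horizons so you can see
# how your accuracy ages, not just how it looks at the next retro.

decision_log = [
    {"decision": "build in-house feature store", "made": "2023-01-10",
     "reviews": {90: "right", 365: "right"}},
    {"decision": "skip data contracts for v1",   "made": "2023-03-05",
     "reviews": {90: "right", 365: "wrong"}},   # looked fine, aged badly
    {"decision": "delay hiring a specialist",    "made": "2023-06-01",
     "reviews": {90: "wrong", 365: "wrong"}},
]

def accuracy_at(log, horizon_days):
    """Fraction of decisions judged 'right' at the given review horizon."""
    scored = [d["reviews"][horizon_days] for d in log if horizon_days in d["reviews"]]
    return sum(1 for r in scored if r == "right") / len(scored)

print(accuracy_at(decision_log, 90))   # accuracy at 90 days
print(accuracy_at(decision_log, 365))  # accuracy at a year
```

The gap between the two numbers is the "aging" signal: a call that still looks right at 90 days but wrong at a year is exactly the kind of decision the retro is meant to surface.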
Hugo: Thank you so much, Suddu. That's a [00:58:00] wonderful note to end on: the decisions you make are the strategy. I'd like to thank you for your time and wisdom and generosity in sharing not only everything you do, but also all the work you've done at Alto. It's absolutely fascinating, and I can't wait to hear what happens next. Suddu: I really appreciate you guys having me on this podcast. Looking forward to learning more. Thank you so much. Hugo: Thank you, Suddu. Thanks so much for listening to High Signal, brought to you by Delphina. If you enjoyed this episode, don't forget to sign up for our newsletter, follow us on YouTube, and share the podcast with your friends and colleagues. Like and subscribe on YouTube, and give us five stars and a review on iTunes and Spotify. This will help us bring you more of the conversations you love. All the links are in the show notes. We'll catch you next time.