Ty Aderhold (00:00): This is not a one and done scenario. "Oh, we evaluated this piece of technology, we plug it in. We don't have to worry about it anymore." You have to be able to monitor these solutions continuously for many of these risks.

Rae Woods (00:16): From Advisory Board, we are bringing you Radio Advisory, your weekly download on how to untangle healthcare's most pressing challenges. My name is Rachel Woods. You can call me Rae. Look, in 2025 there is still a ton of hype around AI. It may have rightfully come down a little bit in the last few years since the dawn of generative AI, but I still get so many leaders talking about the revolutionary impact of AI on the healthcare business, how it's going to solve some of our biggest challenges, that all we need to do is find the right product. Unfortunately, it's not that simple.

(00:56): Yes, there is a lot of promise around what AI-enabled innovations can do. They can help enhance revenue, reduce burnout, improve clinical outcomes. But merely investing in AI does not guarantee success. There are risks. Frankly, the benefits might be small or slow. And even if it worked for one organization, that doesn't mean that product will work for yours. And that's exactly the reality that we have to understand, and frankly embrace, when it comes to health AI. To give us a clear-eyed view of the risks and benefits of AI investment, I've invited Advisory Board's AI expert, Ty Aderhold. Ty, welcome back to Radio Advisory.

Ty Aderhold (01:39): Thanks, Rae, glad to be here.

Rae Woods (01:42): This is your lucky number 13th appearance on Radio Advisory. Did you know that?

Ty Aderhold (01:47): Wow. I knew I was in the double digits. I did not know I was up to 13.

Rae Woods (01:53): But I actually think it says something that you've been on Radio Advisory 13 times. Because every single time you've been on this podcast, you've been talking about artificial intelligence. And that is a huge signal for just how interested the healthcare industry is in AI. You're actually talking with health leaders every day about AI and its use cases. So give me the pulse check: what is new or different today when you speak to health leaders about artificial intelligence?

Ty Aderhold (02:23): It's a question I ask myself because I've been deep in the AI research game for years now. I would say there has been a shift towards organizations considering implementation. Maybe not going full steam ahead with implementation, but a shift away from, "This is a future thing for us to consider," towards, "We need to start thinking about implementing something this year potentially. What are the benefits if we were to do that? What are the options available to us out there? What are the risks we need to be able to manage if we were to go invest in something?"

Rae Woods (03:01): So they're focused on implementation. I want to believe, Ty, that you're doing your job at Advisory Board and saying, "Here are the case studies. Here are the best practices. Learn from these other people who've implemented AI." Is that what we're doing for the market?

Ty Aderhold (03:16): We do have case studies. I don't think they are case studies in the sense of, "Here is X organization that bought Y product, and you can go do the same thing if you follow these three steps." We're not trying to build our case studies in that way.
Rae Woods (03:35): Which I have to admit is probably frustrating for our listeners, who don't want to have to reinvent the wheel and do want to say, "Just tell me what the heck to buy."

Ty Aderhold (03:43): Fair. I would argue, for listeners out there and for anyone looking to invest in an AI product, that you want to follow a standardized approach to arriving at the investment you choose, managing the risks and benefits as you make that decision. But when it comes to the actual product itself, you don't want to be out copying another organization's investments. There are so many products out there. And your specific needs, your organizational challenges, your goals, your strategic vision for your organization are going to be different from another organization's. So you should be investing based on your own priorities, your own challenges, not going out and copying another organization's investments.

Rae Woods (04:29): So it's just not as simple as being able to say, "Here's what you should buy." And instead we want to push leaders to think about how they go about deciding what kind of investment is right for their organization.

Ty Aderhold (04:44): Exactly.

Rae Woods (04:45): And that, it sounds like, is where we can help, and where our case studies are. Is that right?

Ty Aderhold (04:49): Exactly. You got that right, Rae.

Rae Woods (04:52): And the question on every health leader's mind is, "How do I get the biggest return on investment in this thing?" And they're looking for the greatest return possible. So I want to take a minute and just unpack the specifics of ROI. And I want to start with the R. What kind of return can leaders get? Is it as big and bold as the hype that I'm reading?

Ty Aderhold (05:18): Right now, I would say no.

Rae Woods (05:20): Okay.

Ty Aderhold (05:21): From what I have seen from talking to leaders, there are returns on investment out there, but I think they are smaller and less directly tied to bottom-line finances than leaders might want. Certainly less than, say, a CFO might want at an organization.

Rae Woods (05:42): So the return is just not matching the kind of immense hype that we are still seeing around AI. And here I particularly mean generative AI. What kinds of returns are we then seeing?

Ty Aderhold (05:56): I think there are two things happening. We have very narrow use cases where we're seeing real returns. An example would be in the rev cycle space. But those are small bands of use cases that have a relatively small impact once they're implemented. So the return is just not going to be huge there in a financial sense. On the other side, you have a bunch of these administrative efficiency applications, whether they're built to help front office staff, clinicians, or back office staff. While these tools are theoretically very powerful, and seem very cool and very useful, we're not necessarily seeing the time-savings returns or a direct financial return from their adoption. That's not to say there's no return. I think there is return in the sense of being able to measure a reduced cognitive burden for clinicians, for example, from the adoption of some of these generative solutions.

Rae Woods (06:59): Which is a win, but it's perhaps not the win that the CFO is looking for.

Ty Aderhold (07:02): Right. That's the exact way I would phrase it. That is a win, but it becomes harder to start to put a dollar amount on that win.
And that's not to say you shouldn't put a dollar amount on reducing cognitive burden for clinicians, but that has to become a conversation your organization then has about how to measure it.

Rae Woods (07:26): And a part of the decision-making process.

Ty Aderhold (07:29): Exactly.

Rae Woods (07:31): So in my mind, there's another version of the R in ROI that we should be talking about. We've been talking about return, but there's another one that comes to mind for me, and that's risk. And frankly, having slow or marginal ROI is a risk. But I have to believe there are other risks that leaders need to keep in mind as they're doing this decision-making and considering when, and where, and how much to invest in AI. What are the risks?

Ty Aderhold (08:01): Rae, you're right. I think the first risk is a risk of failed investment, or an investment that doesn't give you the return you were expecting. But there are much greater risks, I think, in the AI space in particular, and we're talking organizational-level risks. I would say risks around bias, around hallucinations, around, "What if we implement a solution and we thought we did our due diligence, but we didn't do enough and something goes wrong?"

Rae Woods (08:29): How risky are those risks? What I thought you were going to tell me is that the concern is cybersecurity. And maybe that's because I am biased, because we just had a big conversation about a huge ransomware attack on Radio Advisory. I assume that is part of any technology investment and isn't specific to AI. So, how risky is bias, how risky is hallucination here?

Ty Aderhold (08:54): It is going to vary a lot based on the type of AI model you are investing in, and what you are asking the AI model to do within your workflow. And then also based on how directly involved humans are with that AI model. So there are varying levels of risk here. But I would say, at baseline as an organization, you have to be ready to both check for and mitigate any bias that you are finding within an AI model or an AI solution. You have to locally tune your models to your local patient population and its data; you need to be able to do that.

(09:37): You need to have processes in place to mitigate hallucinations and educate the staff, the clinicians, who are going to be using these tools. And that brings me, I think, to the most important piece here, which is that human in the loop. And making sure that the people who are using these solutions understand when those solutions are supposed to be used, how they're supposed to be used, and the level of oversight they need to be bringing, to ensure that if something does go wrong with the model, they are still sort of the backstop that prevents further organizational issues.

Rae Woods (10:13): Allow me to be a pessimist for a moment, because these risks are serious, and they're serious in healthcare. We're not talking about the Chicago Sun-Times using AI to write a list of books that should be on your summer reading list, and oh, by the way, it turns out the authors were real but most of the books were not. That's a hallucination that got picked up in the media and, perhaps, was associated with a lot of laughs. It's very, very different when you're talking about a summer reading list that is published in a newspaper versus hallucinating about how a patient should manage their medications. The question I have is, are we then completely wrong about the benefit of AI?
If you're telling me that, best case scenario, we've got marginal improvement and, worst case scenario, we're dealing with a clinical hallucination, should we all just become AI skeptics?

Ty Aderhold (11:01): As someone who spends a lot of time in the AI space, and is also a little bit skeptical by default, you're maybe asking the wrong person here.

Rae Woods (11:09): Same, same, same.

Ty Aderhold (11:11): But what I would say is, it's about understanding the risk evaluation for using AI in a particular manner. If you're a leader out there who's dreaming of replacing a part of a process with AI, I think that's pretty risky. That would be something where I would say, "You are opening your organization up to risk." If we're talking about duplicating a process that is going to continue to happen, but we're going to have AI in the background as a double-check, I think that's very low risk. And so those are two different ways of using AI and interfacing it with human processes, and they bring very different levels of risk along with them.

Rae Woods (11:59): So it's not about just being skeptical, even if that's both of our defaults. It's about the risk evaluation, and trying to size the expected return against the risks that we're taking on here. What I'm hearing you say is that the level of risk really depends on how you're using the AI. So I think we should put this into context. How might a healthcare organization evaluate these risks specific to their organization, or maybe more specific to their use case?

Ty Aderhold (12:30): I think it's specific to use case, and it's part of that framing that I just brought in. So for example, that duplication that I mentioned. An example of this would be an AI model running in the background, reviewing an imaging scan and checking for nodules. Maybe it's lung nodules on a chest CT, a classic example. If that is duplicating a human read, I would say that's pretty low risk in terms of the risk associated with the AI. A second bucket is AI models that are adding to a human process. They're not necessarily duplicating something, but they're also not replacing something that was already happening.

(13:14): I think an example of this would be an AI model that automatically pulls in guidelines based on specific information that a clinician says during a visit, with ambient listening being used as well. So those guidelines are pulled in automatically. That is adding a new function that wouldn't have previously existed as part of the process. Again, relatively low risk here. I think the higher risk examples of AI implementation are when AI is being used to replace what otherwise would've been a human intervention. Predictive overbooking as part of a scheduling process would be a good example of this. But you can think of countless others across the clinical and non-clinical space where, "Hey, there's a real inefficiency here. Maybe we could use AI to do this," but it's much higher risk.

Rae Woods (14:08): So it sounds like the risk evaluation is more based on how much people are still involved. So, duplication or the double check, like the lung nodule: low risk. Additive to a human process, like ambient listening: also low risk. But when you're talking about taking the human out, replacing what they would do, as in the case of predictive overbooking, that becomes more risky. I have to say, I'm surprised that you didn't break this down into clinical tasks and non-clinical tasks.
Why is that breakdown not actually helpful for us to be able to evaluate risk? Because that's how I'm hearing folks talk about it in the market.

Ty Aderhold (14:47): It's an easy shorthand, but I don't think it's necessarily the correct way to think about it. That lung nodule example I gave, that's a clinical AI application. But again, I think it's relatively low risk, because there is still that human doing the exact same job they would've been doing previously. Nothing has changed about the human workflow in that case. Whereas with predictive overbooking, sure, there is a human in the loop still involved with scheduling on down the line. But they're not necessarily able to double-check whether a predictive overbooking algorithm is biased against a certain race or a certain type of person, and that can lead to bigger consequences.

(15:28): The second reason I would mention here is that it actually gets pretty hard to separate what exactly is a clinical versus an administrative use case. If you take something like ambient listening, by default it sounds like, "Oh, that's administrative." It just sort of handles the note-taking, the documentation. Straightforward. But there are so many clinical things tied into that piece of documentation that can change based on whether you're using ambient listening technology or not. Does that make it clinical? It really starts to blur the lines between administrative and clinical. So I think that also makes it harder to use that shorthand to think about risk.

Rae Woods (16:09): Yeah. It's really, really hard to actually break down what kind of risk we're looking at here. And what I'm hearing you say is that our leaders, our listeners, can't default to easy, black-and-white assumptions about clinical versus administrative. You also can't assume that something that is, for example, administrative is a safer bet or a better short-term investment, because it may actually have more risk. Bottom line, it sounds like the less human oversight, the more risk. We've just given a really helpful framework for how to understand risk.

(16:44): It's not clinical versus administrative. It is not about assuming that administrative is going to be easy or lower risk; that's really important. But I want to come back to where we started, which is that leaders are eager to have examples of exactly what someone else invested in that gave them an impact they can take back to their home organization. Why can't we say, "But here are the low risk opportunities. Everybody should just invest in those."

Ty Aderhold (17:16): I think the reason is, it is very hard to identify a truly low risk AI solution in healthcare right now that is being operated at scale. The closest thing to it, the best example I could give, is ambient listening. I've already referenced it a couple of times. This helps with the documentation, the note-taking for the clinician, so they're not typing the notes on their laptop during the patient visit. This is, I would say, generally one of the more widely adopted, scaled AI solutions we've seen in healthcare. And with that, we've certainly seen success stories; we've seen that cognitive burden decrease. We've seen organizations say there is some ROI they're seeing from that reduced time in documentation, allowing clinicians to use that time for other purposes.

(18:10): But that's not across the board. There are also organizations and pilots where we've seen an NPS of zero, which is not great for a pilot.
It's equal numbers of promoters and detractors. And when you dig into why this is the case, you'll see that you have promoters, clinicians who are saying, "This is saving me time. I'm making minimal edits. It's great." And then you have clinicians who are saying, "I'm actually spending more time on edits than I otherwise would have spent on the note in the first place, because it's getting all these things wrong. It's not using medically correct language all the time."

Rae Woods (18:45): It's back to the return. The risk is low, but the return is still so variable, and it might even be variable within the same organization. Which makes it really tricky to evaluate what to invest in.

Ty Aderhold (18:57): And, Rae, I don't even know if I could say the risk is low in that scenario. Because if you have this disagreement among clinicians, some saying, "Hey, this is perfect, it's doing exactly what I need. I make minimal edits," and another group saying, "I need to make a lot of edits," I think you need to sit down as a leader and think about, "Well, what does it mean that we have this disagreement?" And I may not be able to arrive at what the true answer is. It may be that neither of those two groups is truly correct about minimal edits versus a lot of edits. But it sets up a possible scenario where you have some clinicians who are taking the efficiency gains but whose note quality is getting worse.

(19:41): Because maybe they're not necessarily catching all the edits they need to be making. That all of a sudden becomes a much larger organizational risk than you might want as a leader. And so again, it's not to say, "Don't go out and invest in ambient listening," but you need to have that nuanced understanding of what the potential risks are here. And in this case, it's not bias, it's not even necessarily hallucinations. It's back to, "How is every physician that I roll out ambient listening to going to interface with and use this tool?"

Rae Woods (20:14): And it does sound like the challenge isn't that one doctor is a super user and another needs more training, like I would think about with a different kind of tech implementation. Something is actually happening in the tech here that is allowing for more of this variation.

Ty Aderhold (20:31): I think there's some variation in the tool itself. There's variation, perhaps, in which type of specialty it's best tuned for right now. There's variation in physician comfort with the tool. I also think there is variation in what we expect the documentation and the note to be once we are using ambient listening, which maybe leads to a larger conversation about physician documentation more broadly. And are we holding non-AI documented notes to the same standards? Lots of broader questions there. But I think you have to get into those conversations as part of the decision around whether or not to invest in ambient listening. Which brings us back to this point: it's not as easy as just copying and pasting what another organization did, because you need to have these conversations at your organization.

Rae Woods (22:39): So far we've focused this conversation on the R part of ROI, the return, and we've added in the layer of the risks. If I continue to use this analogy, we have to talk about the investment, the I part. And I have to believe that the investment here is actually more than just the dollar amount. What do we need to see organizations invest in to appropriately manage the risks, and actually get the returns that we want?
Ty Aderhold (23:09): Governance is where it has to start. I know that's potentially a boring answer. And specifically, I would say governance designed to evaluate and manage AI models themselves. You can't just use the standard technology governance that you've had in place for years.

Rae Woods (23:29): Why not? Why is it different?

Ty Aderhold (23:30): I think it goes back to the risks and the challenges associated with AI models. So, bias, hallucinations, making sure that human in the loop is an educated user of the solution.

Rae Woods (23:44): New problems require a new kind of structure.

Ty Aderhold (23:47): Yes. Your existing governance is not set up to handle those nuances, and you may not have the expertise you need in that governance process. I'm talking about data scientists. I'm talking about ethicists who need to grapple with some of these decisions. And the balance of risks versus benefits, and how we are going to use the AI model, are questions that need to be answered in that sort of upfront investment governance process.

Rae Woods (24:14): So then what does good AI governance look like? You perhaps said one thing just now, which is that you might need to hire different people. For example, you might not have an ethicist on staff. But what is good AI governance?

Ty Aderhold (24:28): This is where I think organizations can learn from others. This is where Advisory Board has case studies that speak to this. If I were going to give a few nuggets right now, I would say, one, take a problem-first approach to investment. Identify the problem you need to solve, and let that guide your investment decisions. And two, prepare to manage AI models over time. This is not a one and done scenario. "Oh, we evaluated this piece of technology. We plug it in, we don't have to worry about it anymore." You have to be able to monitor these solutions continuously over time for many of these risks that we've talked about. That is a new problem for organizations to solve, so you have to be prepared for that ongoing monitoring as well.

Rae Woods (25:19): It sounds like, because these are new problems, you need a problem-first approach to looking at investment. Which means you need the right expertise at the table to even come to those decisions. Am I hearing that right?

Ty Aderhold (25:34): Yes.

Rae Woods (25:36): There are a couple of takeaways emerging for me in this conversation. The first is just some basic expectation setting when it comes to AI, when it comes to what you're going to get out of these investments, and when it comes to the risks that you need to manage. The other big part of the value story of AI is making sure that you're focused on the right problems. If that's what leaders need to do, and they need the right governance in order to do that well, how do you want our listeners to focus their attention today?

Ty Aderhold (26:05): Think of AI as a way to solve small, discrete problems, to enhance efficiencies, maybe to bring in a little more of a quality check that functions in the background and supports the existing human processes you have. It's easy to dream of the AI that is able to do every part of a single healthcare journey, but we are so far away from that. And thinking in that way, I think, inhibits organizations from identifying the small but impactful uses that might be out there. On the flip side, I know people love to think about that transformational change that is out there.

Rae Woods (26:51): And I have to believe that can still happen. We're not that skeptical.
We're not that pessimistic, are we?

Ty Aderhold (26:58): I am not that pessimistic. I just don't think it is as clear that it is going to come from the generative AI chat interface in particular. I think it is much more likely to come from predictive analytics, and predictive AI that helps us identify, say, new drug applications or drug discovery, personalized medicine. Ways that are not directly diagnosis, or using generative AI solutions to chat with, but much longer-term medical developments. And I think that's where, especially in the clinical space, we will see the transformational change.

Rae Woods (27:39): Well, Ty, thank you for bringing us down to earth, and thanks for coming back on Radio Advisory.

Ty Aderhold (27:45): Glad to be here. Hopefully I wasn't too skeptical for everyone.

Rae Woods (27:52): At Radio Advisory, my goal is often to get you to act differently. But before we even get there, I want you to think differently. I want you to think differently about what AI can do for you today. I want you to think differently about the return and the risks. And the action step that I want you to take is less about finding the right product to invest in, and more about investing in what you need inside your organization in order to find the solutions to the specific problems that you have. That mindset shift is a lot easier said than done. Which is why, as always, we are here to help.

(28:55): New episodes drop every Tuesday. If you like Radio Advisory, please share it with your networks, subscribe wherever you get your podcasts, and leave a rating and a review. Radio Advisory is a production of Advisory Board. This episode was produced by me, Rae Woods, as well as Abby Burns, Chloe Bakst, and Atticus Raasch. The episode was edited by Katy Anderson, with technical support provided by Dan Tayag, Chris Phelps, and Joe Shrum. Additional support was provided by Leanne Elston and Erin Collins. We'll see you next week.