Abby Burns (00:14): From Advisory Board, we are bringing you Radio Advisory, your weekly download on how to untangle healthcare's most pressing challenges. I'm Abby Burns. (00:22): Today we're going to talk about AI. But we're actually not going to talk about generative AI for a change. So put ChatGPT to the side. Instead, we're going to talk about another AI model that our experts say is flying under the radar a little bit, and that's computer vision. (00:39): This episode is the second in our lead up to Advisory Board's upcoming Strategy Summit. And in light of that, we want to make sure we're helping listeners think as comprehensively as possible about the ways that AI can serve your organizations. So to help us check our blind spots when it comes to AI, I've invited Advisory Board digital health experts Ty Aderhold and Elysia Culver to help us understand why computer vision should be on our minds, arguably just as much as generative AI. (01:07): Hey, Ty. Hey, Elysia. Ty Aderhold (01:08): Hey, Abby. Abby Burns (01:11): Guys, I have to admit, I'm a little nervous for our conversation. I feel like, I mean, we're definitely no strangers to AI on Radio Advisory. But typically we talk about AI at a little bit more of a strategic level. You all were on the podcast last fall and you were talking about how health systems should approach deploying AI at their organizations. How they should pace it. (01:33): The conversation we're having today at least to me feels a lot more specific than that. And I have to say it's about something I candidly know very little about. Ty Aderhold (01:45): Yeah, Abby, I think it is a little more specific. I don't want to scare listeners off though. It's still, I think, a very strategic conversation. Basically, I think we're here today to tell you that you shouldn't quite be so focused on generative AI exclusively. Abby Burns (02:02): Okay. Why is that? Ty Aderhold (02:05): So I think there's a couple of reasons. 
The first is that when you think about investing in AI, you can think about it as just improving an existing process, making it more efficient, or you can think about transforming how you're doing a set process. (02:21): I think a lot of times with generative AI, it's really about the efficiencies. Let's automate something that humans are doing. And as we learn more about ChatGPT and other generative AI tools, I think we're finding that some of the costs for energy, the costs for training are actually a lot higher than maybe we first realized. So I think that's one reason. (02:41): But I think the bigger reason is there's advancements in other AI technologies that are exciting that are happening right now that aren't being talked about because everyone's still so focused on generative AI. Abby Burns (02:54): Other technologies like computer vision. Ty Aderhold (02:58): Yes. Abby Burns (02:59): Ty, when we were talking before, you said computer vision hasn't really been on the radar for a lot of the organizations that you've been talking to. What do you mean by that? Ty Aderhold (03:09): Everyone's really been focused on generative AI, really since the sort of rollout of ChatGPT. And that's certainly what we found as we reached out to healthcare leaders. Elysia, correct me if I'm wrong, but it seemed like when we were trying to get on the phone with people to talk about this, not a lot of people were sort of ready to have the conversation about computer vision. Elysia Culver (03:35): Yeah, I would agree with that. It was hard to find organizations who I guess were looking at more of this transformative version of computer vision, and we can get into that later, as I do want to mention that there are versions of computer vision that we've seen around for a long time. But for some of these new versions of computer vision, I would say it was definitely hard to get people on the phone. And I think that it's maybe just because they're not there yet. 
Abby Burns (04:00): So part of our goal today is to kind of put the little bug in folks' ear to take a closer look at computer vision. I think to do that, can you all do some level setting to start us off? What is computer vision? And maybe how does it fit into the picture of AI more broadly? Elysia Culver (04:19): Computer vision is a pretty broad term. It's an entire field of artificial intelligence. And a simplified way to look at it is I think that it's basically teaching computers to see and understand pictures and videos like humans would. So imagine when you look at a photo and you can recognize somebody's face, computer vision aims to give computers that same ability. (04:42): And like I just mentioned earlier, it's technically been around for a long time, especially in other industries. If you think about how factories can use cameras to check if products are being made right or if security systems can spot someone who's not really supposed to be there. But it's super important in healthcare too, and it's being used to look at a lot of medical images for example, and then basically find patterns and make predictions. Ty Aderhold (05:07): Yeah. Elysia, I think you nailed it in terms of what computer vision is. I think the way we're seeing it used in practice, because just being able to identify something from a video or from a camera is just a step one for actual usefulness of an AI model. Computer vision becomes much more powerful when you pair it with machine learning and some predictive AI. So you can imagine, instead of just identifying, oh, there's a patient in this room, you can imagine some of these more advanced computer vision models being able to then make a prediction about that patient's movement and if they might want to get out of bed or not. And so that's where the power in computer vision lies is pairing it with some of that machine learning. Abby Burns (06:01): So you're using the word predictive to describe the AI. 
And that sort of feels like it might be a foil to generative AI. Is that a fair assessment? Ty Aderhold (06:07): Yes. I mean, in some ways, they're very similar. They're both built upon sort of machine learning and advanced statistical models as a background. But I think they're doing two very different things. So if you think about generative AI, something like ChatGPT, it's reliant on its large language model. And it's really producing new text for the user based on a prompt. And you can see generative AI in a wide variety of use cases beyond ChatGPT. There's video. In healthcare, there's certainly some use of generative AI in drug development to sort of predict which molecules might be sort of relevant to a given drug. (06:51): But then you have the predictive side, which has actually been around much longer. This is sort of the... As AI, the field of AI developed from advanced statistics, predictive AI was the first place it went. And this is when you feed data into a model, and it is able to start to, based on that training data, be able to take an input and make predictions going forward in the future. So this is the same sort of AI model that Netflix uses to predict what shows you would want to watch in the future based on the past shows you've watched, for example. Abby Burns (07:29): Yes. Suggested shows for you. Ty Aderhold (07:30): Yes. Exactly. Abby Burns (07:31): So to oversimplify, generative AI models are focused on creating something that previously did not exist in response to a specific question that they're asked, whereas predictive AIs are trained to intake specific inputs and spit out or predict an outcome or an answer. 
(07:50): I'm kind of reminded of almost generative AI models would be like having a really good chief of staff, that you can kind of ask them anything and they can probably get you an answer that's pretty close to what you're looking for, versus maybe a weatherman that is looking at weather inputs and making a prediction about what we might expect to see in terms of the forecast. Ty Aderhold (08:13): Yeah. And I think a key note here is, we're still in a world where AI is not able to do a wide variety of tasks. It's still sort of designed and trained to be able to do one task. This is why, even when you see generative AI and you think, "Oh, this could do anything," sometimes generative AI gets math problems wrong. Because it's not actually doing the math, it's just generating the text it thinks you want it to generate, which is a different task than doing a math problem. And so that's why, even though it seems like, oh, I could ask ChatGPT anything, it may not necessarily be able to do anything and get it correct. (08:52): Where that applies with predictive AI, and I think this gets back to why we think healthcare leaders need to be more focused on predictive AI, is predictive AI is still in the very narrow use case. We're training a model to be able to predict this one thing, and we're able to understand how good it is at doing that in a way that it's much harder with generative AI. And so when you think about transformative clinical use cases, I think there's still a lot of room for improvement and advancement across the next five years in the predictive AI space, computer vision being one of those places. Abby Burns (09:26): Yeah. And you all have already started to hint at a couple use cases that we see for computer vision in healthcare. But I'd love to just dive full in there. What are some of the main uses for computer vision specifically? Elysia Culver (09:40): I can jump in there now. 
So I feel like some of the use cases that we're seeing and have seen for a while have to do with medical image analysis. So think X-rays, CT scans, MRIs, and kind of looking at these things to see if there are tumors and lesions and things like that that might be imperceptible to the human eye. (10:00): Another thing that we're seeing is in the monitoring space. So right now, I think a lot of this is surrounding how we're monitoring patient activities in healthcare settings. If we're looking I guess more at that next level version of that, okay, how are we monitoring not just the patients, but also the facility and the equipment? How are we making sure that the facility is clean? And how are we monitoring clinicians and making sure that they're adhering to hygiene standards? Things like that. Ty Aderhold (10:28): And Abby, I want to call out a couple of things about what Elysia just said. I think both those examples are cases where 10 years ago, this wouldn't have been possible to have that sort of standard of checking or monitoring. We could not afford and don't have the human staff to 24/7 monitor every patient. That's just not possible. It's never been considered by a hospital. And now you have technology that can do that. (11:00): The radiologist example. These are things that, even if two, three radiologists were reading this scan, might not be perceptible to those radiologists that we're now able to find. So this is not just taking the existing work and shifting it to an automated AI model. It's instead augmenting the existing human work with things that were not previously possible. Elysia Culver (11:26): Yeah. I really agree with what you said, Ty. I just want to, I guess, nail down the points that all of these things really impact quality of care and will really help, I think, make a positive impact on workforce challenges, especially in regards to burnout. 
Abby Burns (11:40): It strikes me that these examples that you're sharing very much make the point that you all have made on the podcast before, and that honestly Advisory Board makes every time we talk about AI, which is, don't have an AI strategy, deploy AI to address your overarching strategy. What's the expression you guys use? Ty Aderhold (12:00): Yeah. We say don't have an AI strategy, take your existing strategy and see where AI can sort of benefit that existing strategy. I would say that on every single AI podcast I will be on for the next five years. That is not going away. That is a forever truth in terms of how to think about these investments. (12:21): And I think when you start taking that approach, again, you might not always choose a generative AI tool to solve your challenge. So again, a reason that we shouldn't sort of limit ourselves to the hyped generative AI products right now. Let's take a step back, think about the biggest challenges facing our organizations, and then identify an AI tool, or even just an advanced statistical model, that'll help us with that challenge. Abby Burns (14:33): To bring us back to computer vision, when I think about the examples that you all just shared of how this is currently already being used, the potential for impact seems very compelling. I think everyone would want to have a de facto team member that enables them to constantly monitor all patients. But at the beginning of the conversation, we talked about how you actually had trouble finding organizations that are deploying this in potentially newer or more innovative ways. Why is that? Elysia Culver (15:05): So I think for some of these next level applications that we've been talking about, it can cost a lot. And I think more importantly, it needs loads of data storage and data management. On top of that, like any AI solutions, there's going to obviously be concerns around bias and whether or not technology like this could work across races. 
(15:26): And then I think maybe something that might be potentially more unique to this technology is something like alert fatigue. So for example, if this technology is monitoring patients and facilities and it's constantly pinging you that something is happening, you end up actually contributing to burnout and not reducing burnout. Abby Burns (15:45): Throwing something at your computer. Elysia Culver (15:46): Mm-hmm. Which was the original intention. So I think those are what comes to mind for me in terms of some of the challenges. Ty, I don't know if you would want to add anything there. Ty Aderhold (15:55): Yeah, I'll add a couple of things. I think on the cost front, a big one is the sort of infrastructure side of things. Just are you able to and set up to have cameras in these places, and then to Elysia's point, to store this data? (16:08): The second one, and this is not something that healthcare organizations are going to be able to solve, but they should certainly be aware of, is are we ready as a society to have this level of monitoring just as humans? And so I think that's a real question as well around are patients going to accept this level of monitoring? A camera in their room, for example, or cameras in the halls that are always sort of tracking their movements. Abby Burns (16:38): And this I feel like is particularly relevant. I know you all have started some research into cybersecurity. And it feels like, Elysia, from the moment you said data storage, data management, my mind immediately went to data protections, data privacy, data security. So I imagine that's a challenge as well. Ty Aderhold (16:55): Yeah. 
I think it's sort of the same challenge that exists for any new sort of AI solution or any new piece of data that you are collecting, which is basically are the potential benefits of using this data for your organization, for patient care, for quality, going to outweigh the potential risks that come along with extra data, potentially extra third-party organizations that you're working with to build and use these models? Abby Burns (17:27): Is there a reason why for computer vision, that question, that calculation would be any different, any easier or harder, than other types of AI models? Ty Aderhold (17:38): The first thing that comes to mind is, again, the monitoring piece, and how directly tied this is to patients. So this is not just something that is running on the backend, maybe with a few data points from that patient pulled in. We're talking about video of patients that is being documented. And I think that can fundamentally change the public perception of it. Abby Burns (18:02): Yeah. I think that makes a lot of sense. And still, understanding that ROI can be tricky to pinpoint, it sounds like technology is continuing to evolve. We are continuing to see newer, more innovative use cases. What does the maybe medium-term to longer-term future of computer vision in healthcare look like? Ty Aderhold (18:29): Abby, I think it's an open question. I would say we're certainly in a world right now where organizations are needing that ROI and need a shorter term ROI to be able to make investments. That's just the financial state a lot of organizations are in right now. And when you think about computer vision or other uses of predictive AI that are focused much more potentially on improving clinical care quality, the ROI can be there, but it can be a lot harder to measure, which can sort of slow down the process. 
Especially compared to something like generative AI, where when you're sort of replacing an existing process one-to-one with just a generative solution, you can see the hours and staff time saved as a much more immediate sort of ROI benefit. (19:23): That's not to say that there aren't computer vision use cases where you could also measure that. I think it's just harder when you start getting into the quality and outcomes improvement as part of the ROI to measure that as easily as some of the cost savings with generative solutions. Abby Burns (19:38): I think that makes sense. It does make me wonder, given the ROI can be tough to capture, and maybe that means that the case for investment can be harder to make outside of, for example, point solutions, what timeframe should we be thinking of for health systems to realistically be deploying some of these solutions at scale? Elysia Culver (20:02): Yeah. So like I mentioned earlier, the computer vision technology for images has been around for a while, so we're definitely seeing this for medical image interpretation, as well as some of the monitoring of patients. And then I would say right now, we're also seeing things used in surgery and helping surgeons maybe perform more precise movements. (20:26): But I feel like if you're looking further out, maybe a few years down the road, that's when you'll start to see stuff as it relates to maybe disease progression tracking or drug discovery and looking at molecular images to identify patterns and structures. And then what I think is really interesting about this, and we mentioned this next generation capability of looking at video content, but how are you I guess using these videos to capture emotions and behaviors and analyze expressions, or assessing pain levels and things like that? That's kind of, I think, further down the road, and something that we won't see for a while. Abby Burns (21:08): Why is the distinction between image and video important? 
Elysia Culver (21:13): Yeah. I think the reason why this technology is concentrated on medical images in particular is mostly because they're easier to analyze from that computing perspective. Abby Burns (21:27): Like the technical capability. Elysia Culver (21:28): Mm-hmm. Yeah. And then I would also say there's a direct correlation to how it's been able to help make diagnoses. So it's kind of got this clinical importance. But like I said, there's all this excitement about using it for videos now because those computing aspects have been able to improve, and we're able to process that better than we used to be able to. Abby Burns (21:55): Computer vision is something that we are saying listeners should be keeping their eye on, that they should have on their radar, right next to generative AI. I want to kind of turn this idea back to the two of you. What are you watching for in the field of computer vision? What use cases or evolutions are you most optimistic about? Ty Aderhold (22:17): I think we're at a place where we are ready to use computer vision for some key quality metrics that all hospitals have to be worried about. Things like falls, things like pressure ulcers. I think we're there in a technology space. I don't think we're quite there in terms of readiness for adoption and the ability to use this in practice. And that is what I am going to be watching for. Do the early adopters of this technology see a reduction in falls? See the reduction in number of patients getting or percentage of patients getting those pressure ulcers, for example? Because I think that is a key benefit that is potentially around the corner for us. Elysia Culver (23:03): Yeah. I would actually honestly just add on to that and say that that's probably what I would be looking at as well. You're kind of seeing this term emerge called smart hospitals. 
And I think that there's this idea that you can use this technology to make your hospital smarter, as the name says. And I think that it'll go back to exactly what we've already mentioned in terms of improving quality and helping workforce burnout. Ty Aderhold (23:29): Another fun one that I always love to throw out there is this will be great for any hospitals who have lost ultrasound machines in the past. Abby Burns (23:39): Lost like misplaced or lost like they went out of use and you weren't able to replace them? Ty Aderhold (23:46): Lost like misplaced. And I'm sure some listeners who have been in the industry for a while know this happens. But sometimes, an ultrasound machine ends up somewhere and you don't know where it is. This is another great use case of just being able to find those ultrasound machines, what floor they're on, what room they're in. Abby Burns (24:04): It sounds like you have a couple stories from your research, so I look forward to hearing those offline. (24:10): Ty, Elysia, thank you for coming back on Radio Advisory. Ty Aderhold (24:13): Thanks for having us. Elysia Culver (24:14): Thank you. Abby Burns (24:23): What I heard from Ty and Elysia is that there's a lot of opportunity when it comes to computer vision. Beyond individual use cases, it can help health system leaders in particular address and maybe even solve for some of their main strategic goals, like improving the quality and safety of patient care. (24:40): This technology is already here today, but we're going to continue to see it evolve and improve in the next few years. Along the way, our team will be tracking the use cases, and critically, the ROI. Because remember, as always, we're here to help. (25:06): New episodes drop every Tuesday. If you like Radio Advisory, please share it with your networks, subscribe wherever you get your podcasts, and leave a rating and a review. Radio Advisory is a production of Advisory Board. 
This episode was produced by me, Abby Burns, as well as Chloe Bakst and Atticus Raasch. The episode was edited by Katy Anderson, with technical support provided by Dan Tayag, Chris Phelps, and Joe Shrum. Additional support was provided by Carson Sisk, Leanne Elston, and Erin Collins.