Rae Woods (00:02): From Advisory Board, we are bringing you a Radio Advisory, your weekly download on how to untangle healthcare's most pressing challenges. My name is Rachel Woods. You can call me Rae. I'm going to start off by telling you we're going to do something different in this episode of Radio Advisory. In just a moment, I'm actually going to pass the mic to my colleague, Eric Larsen, and he is going to have a really interesting conversation with someone at the forefront of technology and big tech in healthcare. (00:32): If I'm going to talk about big tech, we've got to talk about Google. Google is a company that is financially the size of the country of Brazil and whose technology has become indispensable to everyday life. I'm not sure where any of us would be without Google Search. Now, Google's activity in healthcare has been a bit turbulent since it entered the space almost two decades ago, but it's impossible to deny the significance of the contribution they've made, especially when I think about things like health tech, therapeutic discovery, or even consumer activation. With all the change we've been tracking, especially with generative AI, Google is of course refining its healthcare strategy to drive maximum impact, and at the forefront of that is Dr. Karen DeSalvo, chief health officer of Google. Dr. DeSalvo is a physician and an epidemiologist by training and helps to connect the dots across Google's healthcare research, products, and services. Now, I'm going to hand the mic over to my colleague Eric Larsen for his conversation with Dr. DeSalvo. Eric Larsen (01:34): Karen, nice to see you. Dr. Karen DeSalvo (01:37): Oh, it's so nice to see you. I'm really looking forward to our conversation. Eric Larsen (01:39): Karen, what I'm really delighted about, you and I have had conversations about big macros and Google's place in the economy and in society, and we've separately talked about how consequential gen AI is. Let's start at the very top level, which is everybody's talking about gen AI as this combinational suite of different technologies, and there are those, me included, Karen, who think that this particular set of innovational complementarities is among the most consequential that humans have created in terms of technology. I want to just start and ask for your perspective. As a technologist, as an epidemiologist, just at a societal level, do you share that characterization that this is hugely consequential, maybe even civilization altering, or are you a little bit more measured in your view of gen AI generally? Dr. Karen DeSalvo (02:35): Well, actually, I'd start by putting on my hat as a doctor and saying that I am jealous of the physicians that are just starting out right now because the world in which they're going to be able to help people is extraordinarily different from the world in which I started. When I was finishing medical school in 1992, everything was paper, and in fact, where I trained, we were actually doing some of our own labs, and even just to go get the lab test results, we literally pulled them out of a wooden box. So when we talk about pulling down someone's labs, that's where the verb comes from. But now, fast-forward to a world in which it's likely that everyone will either have a doctor in their pocket or every doctor's going to have a part of their team that knows all the best evidence. I never imagined I'd see that as a physician in my lifetime.
(03:23): I will say as a human, it definitely raises an awful lot of really interesting questions, and I've heard this expression, it's a cognitive industrial revolution, and I think if you just begin to think of it that way, it helps everybody understand that the potential is almost endless, from creativity to filling gaps, all the way to the opportunities for us to be more efficient and effective and deal with resources on the planet better. I think the other place where my head goes a lot is what does it actually mean to be human? Because this is a big part of the conversation. The tools can do things. There should be a human in the loop. So there, I think we're just thinking about where the human fits into all this and where the technology does, but then how will we know when it has reached the place where it's good enough for use on an everyday basis? (04:16): At any rate, it's been an extraordinary time to be part of the cognitive industrial revolution, and I think I'm just learning as a person, like many other people, how to use it in my everyday life, how to use it in work, and then how to think about what it's going to mean for the future of health on the planet. Eric Larsen (04:32): What's fascinating about this particular revolution is that it's automating higher order cognition and taking analytic tasks that were previously the domain and province of highly compensated, highly respected experts like doctors, like accountants, like financial planners, and it's a real question, and it's a very deeply humanistic question: how much of this is going to be amplification? How much of this is going to be obsolescence or replacement? This isn't a new 2024 breathless dialogue. I mean, one of my favorite pieces is from the 1930s, when John Maynard Keynes wrote about technological unemployment and the potential of this mechanization to actually dehumanize and eliminate certain tasks and maybe even occupations. (05:22): You talked about being a doctor first, and I want to push on that just for a second because I'm so excited for what this gen AI moment could mean for democratizing medical intelligence, for making it accessible, for making it affordable, for making it ubiquitous. I mean, Vinod Khosla talks about expertise becoming free, yet you think about physicians, who are justly the most respected profession in the United States, followed by nurses and military veterans, and they're also the highest compensated occupation in the United States. 9 of the top 10 highest compensated occupations in the US are medical. How do you see this playing out for doctors? How much of it is going to be amplification? How much of it could eventually be some of the territorial boundaries among specialties getting super blurred? What are your thoughts on that? Dr. Karen DeSalvo (06:19): Well, many, and I'll start with physician training. There's a really important set of questions that medical education, undergraduate, graduate, and beyond, is going to have to resolve, and I think relatively quickly, including how do we want to encourage or allow gen AI to be a part of the assessment process? The examples that you might think about would be related to a paper that we published on a model called AMIE, which we put through the standardized patient exam.
It's a common thing that we do for physicians, so it's not just about how well you do on a written test, but how you do when there's a standardized patient with whom you have to come to a diagnostic determination, and whether you do that in a way that feels empathetic and good to the standardized patient, who's trained to assess those things. (07:11): Our model in that controlled environment performed better than the physicians, and for me as a former medical educator, that opens up this question of, in addition to having physicians teach physicians how to come to a diagnosis and how to talk to patients in a way that's empathetic, how can the bots do some modeling and help and coach? It starts to help you think about scaling because one of the things right now is that each medical school has its own faculty who are good in different ways at different things, but to your ubiquitous point, what if every doctor in training on the planet had access to the very best professors irrespective of where they trained, and we could do it much more affordably and at scale, and you could do it over and over again because the attending wouldn't get tired? It's sort of helping to coach and support the clinician. I'm not saying it replaces the attendings or the physicians, the professors, but it definitely potentially has some role in coaching and education. (08:07): You could think about competency-based advancement, which is something that's been talked about in medical education for a long time. Right now, it's four years of medical school in the US. That's just the way it is, but what if you could use gen AI to help people accelerate and go faster, especially where you have pipeline challenges like primary care, and help get more physicians out into the field, but know they're ready because you're able to do standardized and automated testing at scale in a way that we can't right now? There are a host of other issues there about governance and teaching physicians how to use gen AI responsibly, but I know that medical educators are starting to pay attention and think about how it changes the curricula and how we take advantage of and use these tools. Eric Larsen (08:48): I understand how it may evolve the curricula, and it's going to have to. Do you think it changes how we select for students? Because so much of medical education is still based on the 1910 Flexner Report. I think we've selected students on quantitative skills and memorization skills, all of which become sort of obsolete in a post-LLM world or a post-medical-superintelligence world. What are the characteristics of the students that we may look for in a different way? Dr. Karen DeSalvo (09:20): I will tell you first of all, Eric, that there's a debate about this in medical education. I've had the fortune to talk with some folks, but I'll tell you my opinion, which has always been that the foundation of being a great doctor is strong critical thinking skills. I think part of that is knowledge and memorization because you have to be able to assemble and manage a lot of facts about an individual patient, about a disease grouping, et cetera. (09:45): I think it's important for us to really understand deeply the basics of biology and chemistry and physiology because we have to understand how everything is interacting in someone's body and be able to know that science.
So I would not want to see medical education get away from that and rely on the bot to give the answer, because I do think the human needs to be in the loop, and I would worry very much about automation bias, because if the bot says that's the thing, that's what they're going to do, and we have to be able to be critical and know, oh, that literature is not the best literature that it's pulling from. So I think there'll be changes in medical education. I think we could improve it, make it more efficient, but I've always been a strong believer that we have to have strong basic science. (10:28): Now, that said, maybe unrelated to gen AI, what we don't do well in medical education is contextualize people's health, recognizing that health is more than pure healthcare and medicine, that it's also about where you live, learn, work, and play. These social determinants of health, we need to address them in medical education. Maybe gen AI helps that, because maybe what gen AI brings to the table is the doctor knows the medicine, but it becomes the part of the team that says, "Don't forget this person's occupational exposures are these things," or, "They don't have access to healthy food." (11:00): I think in my lifetime, we're going to see a world in which every physician has another physician that is able to support and guide the care that you deliver to the patients in front of you, and that matters because evidence evolves. There's so much literature that changes about what's the next best action or the best antibiotic. For that matter, antibiotic resistance changes in communities. It's also true that quality is variable all across the country, so the reality is that not everybody in the US gets the best quality care. So if you think about not only the best evidence-based practice, but also raising the floor on quality of care and closing that gap, there's a world in which we can begin to even that out, and frankly, that's going to lower price, because we know that the better the quality, the lower the price in healthcare. So just that alone I think would be extraordinary. (11:52): There are a lot of questions about if the bot says go left and the doctor says go right, which one's correct? Again, this gets to automation bias. But when there are clear things, like give an aspirin for an MI, that's where the bot can go, "Don't forget, doctor, it's 3:00 in the morning and you've been up for 24 hours. This is the thing you need to add to your orders." Quite frankly, it's not ready yet, I think, for that kind of use case, so I hope that we're real thoughtful about quality, safety, and equity, but we're going to get to that place. Eric Larsen (12:18): Yeah. I agree. I think we are going to get to that place, and then it's going to really require some deep introspection around how do we absorb this and assimilate this in a way that's not completely disruptive? I want to juxtapose a few things that have happened. First, we had Llama 3 introduced, which was pre-trained on 15 trillion tokens, while GPT-3 was rumored to have been trained on 300 billion tokens, so an exponential increase in the pre-training on the models, right?
And then you also had the California Nurses Association picketing outside of a San Francisco hospital protesting the application of AI to healthcare, and you had three separate research studies, one from UCSD, one from Mass General Brigham, one from Mount Sinai, really questioning even these nascent applications in documentation and summarization that may actually be adding time to physicians' work. So you're getting this cacophony of noise, good news, bad news, technical updates, protests. (13:30): Let's just start basically on the competency of the models. You talked about AMIE. You've got Med-PaLM 2. Google's been a real pioneer in innovating some of the medically trained chatbots, and they're showing great proficiency. They're besting human performance on parts of the USMLE, as an example, but a lot of those are trained on publicly available data. We haven't yet broken into the proprietary data really. So I guess I want to zoom out and ask you, where are we with the believability, competency, and maturity of the models in your estimation? Dr. Karen DeSalvo (14:09): Definitely top of mind for us. I will start, though, by saying that health moves at the speed of trust, and what you're seeing from the nurses, raising their voice about not trusting, is real and important and needs to be listened to, because no matter how great a technology is, if people don't trust it, it's not going to be helpful in the environment. (14:31): I think the difficulty about the gen AI stuff is that we have to build trust in a way that's a little different than how we built it with straight AI, like training the thing to read the mammogram, because there, you can show the code and there's a clear area under the curve. There's an ROC curve. You know exactly what it's going to do. It's going to be regulated. The gen AI models are not quite that predictable, and that's the beauty of them: they're multipurpose. They can be trained on massive amounts of data and images now, so the multimodal capabilities really step up the opportunity, the wow of what they can do, but it also means that they're not always producing the same output. (15:14): I think that kind of gets to the phase that we're definitely in here at Google, which is moving from wow to how. Wow was a lot of last year, eyes wide open, like, "Wow, did you see it could do that?" But now it's, "Okay, how do we make sure that we're going to do this in a way that not only makes sense in the laboratory?" So if you just think about this like drug development: we create a model like Med-PaLM, and we can build it, we can test it and see how it performs in terms of its capabilities, but then we also want to understand how it goes awry for some of the things that were more common in the past, like hallucinations. Can we make it more factual? Can we see that the answers are more complete? Essentially, what is the quality, safety, and equity of the models? (15:59): But then what we do is we'll either use it in our own environment, say, when we're thinking about how it's helpful in our hardware to create a personalized large language model, or we point it as an API to customers. And what we learned is that when you start to put it into production, you move along the development cycle to figure out, well, what is the safety when you actually put it in the environment? What is the quality and effectiveness of that over the course of its development?
Especially as you get into that more phase-four space of effectiveness, when you put it with somebody's data in their environment and see how well it summarizes a record at nurse handoff, then you have nurses that are helping you learn, "Oh, this is missing," or, "This is hallucination," and you learn you have to actually tune the models, so these learnings go back upstream. (16:48): I'd say it is arduous in a good way. We're learning a lot about what the right parameters are, and again, I'll just say quality, safety, and equity are the big buckets, but there are other subdomains in there: how complete is it? Is it hallucinating? Can it point to the actual right X-ray? And then what are the biases that are built into some of the answers across all those axes? And safety, of course, in medicine: first, do no harm. So we've learned to do that in our own environment. We're learning to do that with partners. (17:19): I think the next generation of this, by the way, again, all of this is about trust, is more organizations being transparent about how they do it. We published some stuff about how we're doing equity. Everyone's starting to put a little more out in the ecosystem. This Coalition for Health AI group that's formed in the US has been in existence but evolved to really focus on, for me, this question of how do we know we can trust what's out there? The we is very comprehensive. It's the scientists. It's the doctors. It's the nurses. It's the consumers, the patients. And when we get a better assessment model that everyone can agree to, how we know what good looks like and what the worries are, that helps us move along a pathway, I think, where it's not just one company saying their thing is good, but it's actually more of a shared perspective. Eric Larsen (18:05): I love that, and I love that this thing's going to move at the speed of trust. I think that's super important, not just thematically, but super tactically. Why? Because with these models that we're talking about, already the limitations are becoming clear. The hallucinations, is it a feature? Is it a bug? These things are probabilistic, not deterministic, so we've got to figure out, is this the right architecture to get to the right answer? Dr. Karen DeSalvo (18:30): But what's interesting about it is that we were saying it's a feature and a bug about the hallucinations, and what's particularly difficult about the models is that they do feel trustworthy. Eric Larsen (18:42): Yes, I love... Yeah. Yeah. Dr. Karen DeSalvo (18:45): This goes back to critical thinking and skepticism. Okay, that seems right, but I'm going to triple check that. Trust but verify. Eric Larsen (18:51): This automation bias that you've described a couple of times is so important, and this propensity to defer to the machine is a very dangerous human propensity, especially when the bots have such an authoritative voice. I mean, they're often wrong, never in doubt, and it's a little bit of a problem. Part of the hypothesis I've been nursing, and tell me if I'm thinking about this the right way, Karen, is that obviously the more proprietary healthcare data that we can assimilate into the models, either through pre-training or post-training, the more accurate we're going to get. And right now, we know that one-third of the world's data is in healthcare. 80% of this data is unstructured, right?
So these hospitals and payers and doctors are sitting on these huge repositories of structured, semi-structured, and unstructured data, and most of them by their own admission don't know what to do with it. The average hospital's kicking off 50 petabytes of data every year. They use 3% of it. (19:54): Here's Google. As one of the most consequential businesses in the economy, you guys have the R&D budget of a nation state. You know what to do with the data, and getting access to the proprietary data of hospitals and doctors and payers requires that trust, as you've correctly said. How do we get the healthcare incumbents that have all of these rich, unstructured clinical notes and labs that are going to enrich our models, that are going to enable greater accuracy in diagnostics and in finding contraindications in drugs, how do we get those partners to have trust in Google or another entity? How do you think about that? Dr. Karen DeSalvo (20:38): We think about it a lot because we don't run a healthcare system and we're not going to, and we don't have a health plan, so we don't have PHI, personal health information, which means there's only so much we can do. When we build the first iterations of the models, we're using anonymized, publicly available data sources. We mostly take the strategy of exposing the API to a customer and saying, "These are the building blocks. It's been to medical school. It's read the textbooks. It's read the literature. Now, pretend like that's a brand new baby doctor that you're going to put on the wards in your hospital or in your clinic, and you want it to know the standards of care and the way you practice in that environment." They have their own instance where they're working on the model, and how the model evolves in that environment doesn't come back to feed into our base model. (21:29): It's definitely how we want to do it, but there are also difficulties in that, because the base model isn't really necessarily learning. It's an interesting thing, though, that we're seeing a little bit of: these really supercharged base models like Gemini 1.5 and others are pretty good at just about anything. So medical tuning helps as you get into a certain environment, but it's kind of interesting to see also that maybe you don't need the massive amounts of PHI data and clinical data in order for them to really understand the... I'll just use medicine. The practice of medicine or drug discovery. But it definitely affects policy, procedure, formulary, all those things that HCA would have or whoever would have. (22:07): What we see is that some organizations, some in the US like Mayo and HCA, and in other parts of the world, organizations like Apollo Health, have pretty sophisticated digital teams, so they're able to take the tooling and then take it to the next level. I think over time, we're going to see more and more of those kinds of organizations wanting to create models that not only can think about how to help in their environment, but how to be more generally helpful in the health ecosystem. I'll be a little ginger with all that, but I think it's kind of known in the startup world that there are lots of organizations trying to create doctor bots, and the organizations that have a lot of clinical experience to train the models on may have some advantage, especially if they're a really high quality organization.
So that's just the reality for us: we won't make a doctor bot, we can't, given the corporate practice of medicine, but we're trying to expose the fundamentals so that others can build on them and expand. (23:04): I think there's a related issue here, Eric, about building the models and thinking about their capabilities in terms of the expense of using them. Maybe it's not a capability with respect to expense, but how expense interferes with that. And the reality is these models do take time. I think you said this at the beginning of our conversation. It's not as though they're necessarily improving efficiency right now, because they're new and folks are learning. So for organizations, especially the more forward-leaning, bigger organizations, I think they're finding that they have to think not only about the clinically facing, innovative use cases where there's a lot of potential challenge around safety, quality, et cetera, but also about the back office stuff where they can address bottom line issues and have some savings that can help them invest in the innovation side. So I think there's a little flywheel there that I'm starting to see a pattern of across organizations. What that gains you, as an organization, is a chance to learn and work with the tooling in a way that's less risky. (24:06): At the risk of sounding like an advertisement for cloud, one of the things about healthcare that became really clear to me a year and a half ago is that we can work with organizations more readily if they're a cloud customer, because then they've got an instance and we can expose the tools, and our researchers can get in there with their technical folks and help them. They don't get to the data, but they can help them with, "Here's the next thing you should think about." So it's just easier, and the ones that I mentioned like Mayo and Apollo and HCA are examples of where we can move faster with them. (24:37): But we do this in our own environment too, and I mentioned that we're working on creating a personal large language model on Fitbit, and this is an example of where, with people's consent, well, we're going to let people know when it's time, but we've announced that we're going to let people opt in to start to create not a medical care model, but kind of a coaching and wellness model. And I think we're going to learn a lot as an organization about the vagaries of that. When people are ready to share their data to create a personalized coach, what are the things that we learn about the capabilities of the model, the challenges of the model when it's working not on synthetic data, but on real-world data, and beginning to provide advice? It's analogous to what the healthcare systems are working on doing, but since we don't run healthcare, we just run wellness stuff, it'll, I think, give us more insights into the fundamentals of the model and whether there's anything that we want to do upstream to make sure that they perform in a way that's more useful. Eric Larsen (25:28): One of the things that I'm thinking a lot about is that the generalized models are getting so proficient in their diagnostic capabilities and their medical reasoning. There are those that are postulating that you actually don't really need huge quantities of clinical data and claims data to enrich the model. I'm an avowed skeptic of that, and it's a little bit hard to prove a counterfactual because we haven't actually seen such a model. We've seen models, even like Med-PaLM 2, that are trained on PubMed and UpToDate, et cetera. Dr.
Karen DeSalvo (26:01): But it's not trained on clinical data. Eric Larsen (26:01): That's exactly right. So the supposition here is that the data that hospitals are sitting on, especially the unstructured data, I think is going to be a major unlock for the proficiency of the models. Dr. Karen DeSalvo (26:14): This is exactly how we train doctors, and this is the way that I'm starting to think about it... Apologies to all the other health professionals. I trained physicians. I'm a physician, so that's what I know best, but there's an analogy. What you do first is book knowledge. Read the books. Take the test. Learn the things. Get used to reading literature and know how to interpret it. Then we put you in a clinical environment. There's a little overlap, but basically, then you go start taking care of patients. You do that in a very controlled environment. You have supervision. You're allowed mostly to do an assessment to understand what the patient's complaints are and do the exam. And then as you mature into your third and fourth years, you can start to dabble in diagnosis, and then dabble in therapeutics when you get into residency, as you get more clinical experience. You've had more exposure to clinical information in real patients, which is what you're describing. (27:07): So I think the utility of these models, as they basically advance across their educational training, is that they will get better just like doctors do when they've seen more patients. If you've done more appendectomies, you're better at them. If you've seen more heart failure, you're better at it. It is, by the way, I think a framework that is interesting for regulation, because we have a way that we accredit physicians, and this is almost like a little mini part of your team, and we might need to think about that as a pathway for how you accredit it, and what its capabilities ought to be at different stages along the pathway. Eric Larsen (27:40): By the way, I love that because I'm a huge advocate that you regulate the outputs, not the inputs. Especially when the technology is moving exponentially and the regulators are plodding along at a linear pace, you get this major exponential gap. We do have regulatory bodies that are institutionalized that can help judge this, and, Karen, I hadn't thought about it like that, but that's a very interesting way to think about regulation of the models. Dr. Karen DeSalvo (28:07): Yeah. Does it pass USMLE step 1, 2, and 3? Then [inaudible 00:28:11]- Eric Larsen (28:11): That's right. That's exactly right. Dr. Karen DeSalvo (28:11): ... the exam? Exactly. We already have structures, so maybe that's a way to think about it. Eric Larsen (28:18): Well, some have analogized these models to super smart but disobedient teenagers, right? They're evolving so fast, but as you pointed out earlier, they speak authoritatively and are often wrong. So I like that framework. (29:25): There's so much curiosity about Google and how it's structured, and you have this sort of armada of efforts going on that are kind of amazing. How should the world, especially the healthcare world, because our listeners are healthcare leaders of all stripes, how should we understand how Google is showing up to market? Dr. Karen DeSalvo (29:45): I'm going to start really big picture. Google is actually Alphabet, which is a holding company, and the biggest portion of that company is Google.
But then we have these bets, and the bets in health are Calico, which is going to help us all live a longer, healthier life, and Isomorphic Labs, which is a commercial spinoff of DeepMind, predicated on AlphaFold. We have Verily, which has focused historically on helping life sciences be successful, whether that's clinical trials work or developing devices. And we have Google Ventures, which makes investments, independent of Google and Alphabet priorities, in key partners in the ecosystem. And then inside of Google, most people don't know that YouTube's owned by Google and Fitbit's owned by Google. So we have a lot of external brands, but the way we think about health in the company is that Google Health is our umbrella brand that represents all the health use cases in the product areas across the company. (30:50): I steward that Google Health brand for the company, so we try to show up as a single armada as much as we can. That includes the R&D stuff we do, so what we build on the research side of the house, which is a mix of DeepMind and related research teams. They build AMIE and MedLM. I have doctors that are embedded in those teams so that from the get-go, we're having clinical points of view. These are AI scientists, physicians that practice medicine, so that's the R&D team. Then they point the research either to Cloud, where we would make an API available for external partners, or, increasingly, we're pointing all of that at internal product areas where we also have health use cases. So think of our consumer-facing front door, Search, and its related friend, Gemini, then YouTube, and then our hardware, Fitbit and Pixel, a family of tools. So pre-gen AI, we had use cases in all those areas. Historically, there have been several hundred million questions a day on Search about health around the world, so it's a really important area for information quality. (31:57): On YouTube, for example, during the pandemic, we had in the neighborhood of 110 billion views of COVID videos. So it's a very big scale business for us, so information quality matters, and now, how do we use gen AI, including the models that are being built over there in the research shop, to make it better? And then, as I mentioned, in hardware, we're thinking about moving beyond some of the more traditional ways that we've supported health and wellness, on Fitbit like AFib measurement, or in the Pixel phone, temperature sensing. Now, how does gen AI start to create a more personalized experience? That's a personal LLM that's going to know you and your data and be available for you as a small language model, to use John's language, sort of more basically on your phone or on device. So Google Health, then, is use cases in all of our product areas, and just think about Med-PaLM and MedLM: that's the research team and what they build, and then we sort out how we're going to make that available to third parties, to external customers, but then also how we're doing that in our first-party product areas. Eric Larsen (32:57): Of course, GCP is in there too. How does Google Cloud fit into the architecture? Dr. Karen DeSalvo (33:01): Google Cloud is the big platform we have to work with external customers. We make Med-PaLM available as an API in our Vertex Model Garden on Cloud, so the Cloud team then is mostly enterprise-facing. We do a little enterprise-facing work with our hardware.
Fitbit has historical relationships with employers and governments, but most of our enterprise-facing stuff, if we want to work with healthcare systems or payers, would come through Cloud, but it's the same science, the same research that we would be building in the background and potentially putting out on other surfaces. Eric Larsen (33:33): Let's talk about Isomorphic for a second, because I just think about this synthetic biology, computational biology, what AlphaFold 2 did to map... What is it? 200 million proteins, right? Basically, resolving a 50-year-old unsolved problem. Dr. Karen DeSalvo (33:54): Isomorphic is just such a good example of how you see Google behaving in the ecosystem quite often. It's an example of how we do good and do well. AlphaFold was created by the research teams, and we make it available publicly, so it can be used by researchers to advance science and cure disease all over the planet and address food insecurity and climate change, all the ways that the things that Demis and that team built are available out there in the world. We make it available on Cloud for customers and we can help customers use it, just an example of how the research team points something to a customer. And then we also created a bet, a company, that can say, "Give us your problem. We'll solve it internally using our technology because we know the technology best." A good example, and you see the thread of that a little bit sometimes with Med-PaLM, the ways that we're thinking about how we use it for ourselves, how we help customers, and then how we help researchers around the world. Eric Larsen (34:45): How do you work? How do you, Karen DeSalvo, as chief health officer, work with Isomorphic and with Verily and with Calico and all these divergent but convergent sorts of activities? Dr. Karen DeSalvo (34:54): My job is to lead health for the company in a few ways. One is to drive health strategy that is going to be as inclusive as possible across all of our PAs, our product areas. What are our principles? What are the ways that we think about working with the 1P and 3P (first- and third-party) ecosystem? What will we do as a company? For example, what we will not do: we will not deliver care directly. We'll have a partnered way of improving health. (35:16): I have a team that is clinical and regulatory. I have a strategic solutions team. I have an equity team, and then I also have medical leadership for employee health and benefits, so Dr. Google for Googlers, Dr. Google for the world. My team has a mixed model; for example, we're a part of the Med-PaLM team, so I have physician AI scientists that are embedded with engineers in research and in Cloud. In all the product areas across the company, we're down deep in the bowels, depending on the skill sets and how busy those areas are with health. And hardware, Fitbit, makes sense, right? And then we also have cross-cutting supports that we provide for the company: regulatory strategy, clinical trials support, health equity by design efforts. (36:03): I love that model, by the way, even though it can be kind of complicated and we're interdigitating into the product areas. I don't run the engineers. I don't run the UX or PM teams, but we're one of the critical components to building product, and we love being able to do that from early-stage ideation all the way through to landing it out in the ecosystem.
We think that it makes our work better at Google because we have real doctors and real nurses and real clinical psychologists who have really been taking care of people or, say, running a hospital in rural India, who go, "No, that won't make sense. This is the kind of thing that we need." Because whether it's direct to consumer or things that we're thinking about for the enterprise, we definitely want to make sure that we're building things that are going to be useful and make a difference. Eric Larsen (36:48): I think that's probably about the coolest job in sports. That's my editorial comment on that, Karen. Dr. Karen DeSalvo (36:54): I see so many cool things every day. I think that's one of the things I love about my job: it's a lot of frame shifting, and it can be exhausting. For example, avian flu is hot, right? Eric Larsen (37:05): Yeah. Dr. Karen DeSalvo (37:06): The reason that we want to pay attention is that more people are searching on it, and the trends show that people are searching on it, so we want to make sure there's good information. So I think one part of my brain is making sure that we're doing the right things on avian flu. The other part of my brain is what's the next stage in our personal LLM for Fitbit? And then another part is how's the nurse handoff thing doing with HCA, and how do we make sure that that's working for them in the field? It's global, so I'm always thinking about different parts of the planet, and it's really nice to know that you have a great platform where you can help improve people's health. Eric Larsen (37:39): Well, that's where my next question, and maybe one of our last questions, goes. It's a little bit inseparable when we talk about how this is going to play out across healthcare, and a little inseparable from Google's potential role in that. Here's how I think about gen AI in healthcare, and this is not controversial. It's administrative simplification. It's care augmentation. It's molecular generation, so drug development and discovery. And it's consumer empowerment. Those are the four broad categories. And obviously, the surface area exposure of Google touches all of those, right? We talked about Isomorphic in drug development and discovery. I think we're sprinting forward on the research side there. And then you have 8.5 billion queries a day on Google Search. So few organizations are better positioned to think about consumer empowerment or activation. How do you see gen AI playing out in healthcare across those domains? Dr. Karen DeSalvo (38:42): Well, I think there's a role to play in all of it. Administratively, every dollar we spend on healthcare is a dollar we don't spend on schools or bridges or the debt. So the more we do as a country to pull cost out of the healthcare system, the better off we are in many ways. Starting at a lower price point in parts of the world that don't have access to care is pretty important too. So rather than adding administrative trappings, we should be finding ways to simplify, even if it's just automating existing processes. Hopefully, we'll do more than that, and some of the opportunities for how the technology can be helpful will start to manifest themselves. (39:17): I think the second, care augmentation, will probably take a little bit longer to get to, for all the things that we've been discussing, but it stands to save a lot of lives in terms of healthcare-amenable morbidity and mortality.
Not only because you can get more people into the system if care is extended or more accessible, but I think it's possible that better quality care is in essence better prevention, which means better outcomes. There are lots of ways I think it will help with care, safety, and outcomes, but over time, in the next, I don't know, 10 to 20 years, it may actually also improve innovation in care. Right now, I think it's more about evening it out. (39:58): The third area, drug discovery, is one where clearly there's significant potential, though I think many folks in pharma would tell you that they've got no shortage of small molecules that they're waiting to study and evaluate. So that's less of the issue. I think there's probably more of an issue about getting people matched to clinical trials in a way that's equitable, accelerating that work, and being able to do earlier assessment of safety in ways that rely on digital technology. This is not my area of expertise, but I think it's a really interesting one to watch. Not just gen AI, but also just the digitization of that earlier stage work accelerates understanding of safety and efficacy and addresses some of the challenges that come with more traditional approaches like animal models. (40:44): The consumer piece, though, is why I'm here. I spent my life caring for patients directly, and I have so many memories of patients telling me, "Yeah, I got it. I've got to walk more. But do you know that my neighborhood has the highest murder rate anywhere in the country? I can't go outside. Do you know that the playground shut down because there were people using substances in it, so the cops closed it down? Do you know there are no sidewalks in my neighborhood? Do you know there's only a bodega?" (41:15): They just helped me understand that even if I prescribe the right thing at the right time, there's a whole other 99% happening in their life that is so much more important. All of this stuff, they really need it in their life flow, not in the healthcare space. We say, well, let's fix healthcare over here, but the rest of it is: do I have time to take a few extra steps? I can take the stairs. Here they are. Here are the healthy options on the menu that I can choose from. Even when there are limited options, what are the ways that we can help people know what their options are and how to stack rank them for themselves? (41:52): Some of that is about information. Some of it's about insights, which is one of the reasons we've also done some health journeys work on Search, but it's why I love the messenger platform that is YouTube. People don't just trust their doctor. Sometimes they don't trust their doctor at all. So the question is, who do they trust? If it's Dolly Parton, then let's make sure she knows the right, evidence-based things to say about health, so the trusted messenger is something we can help with. And then I think the insights piece is definitely on our hardware. I mean, I think there's a world coming for the billions of people on the planet who have a smartphone, most of whom happen to be on Android. Eric Larsen (42:27): It's true. Dr. Karen DeSalvo (42:28): It's helping them get good answers, to track their own health, to organize their health data, and eventually, to be able to connect with a doctor in their pocket that maybe is going to be made by the NHS. Maybe it's going to be made by Apollo Hospitals.
I don't know who will start to build those, but just think about that paradigm: they don't have to come to a brick-and-mortar place that's run by a healthcare system. There's a world in which they're much more in control, and we're helping them with that health decision-making. Eric Larsen (42:58): As always, I extract a lot of learning when we chat, so I'm very, very grateful. That's a good spot to kind of adjourn on. But thank you for today, and as usual, there are 20 other topics I want to discuss with you, so we'll just have to do it again. Dr. Karen DeSalvo (43:11): We'll do it again sometime. Thank you so much. Eric Larsen (43:18): I used to think that big tech was largely an irrelevancy or a paper tiger in healthcare. Healthcare was too incumbent-dominated, too oligopolistic, too hyperregulated. And there have been so many false dawns for big tech, but suddenly, with the advent of gen AI and the need for these huge repositories of data and the R&D budgets of a nation state to build or acquire the compute, they're here to stay. And I think they have an enormous capacity and power to do good as we think about administrative simplification and care augmentation and molecular generation and consumer empowerment. And Karen's just such a wonderfully articulate and thoughtful voice in this discussion, so I'm delighted to share this with our listenership, and as always, Advisory Board is here to help. Rae Woods (44:36): If you like Radio Advisory, please share it with your networks, subscribe wherever you get your podcasts, and leave a rating and a review. Radio Advisory is a production of Advisory Board. This episode was produced by me, Rae Woods, as well as Abby Burns, Chloe Bakst, Kristin Myers, and Atticus Raasch. The episode was edited by Katy Anderson, with technical support provided by Dan Tayag, Chris Phelps, and Joe Shrum. Additional support was provided by Carson Sisk, Leanne Elston, and Erin Collins. See you next week.