The following is a rough transcript which has not been revised by High Signal or the guest. Please check with us before using any quotations from this transcript. Thank you. === Lis: [00:00:00] Also integrating these methods together, using the machine learning and the data science to help you understand a problem, understand a context, then bringing in the behavioral science to say, okay, what is likely to be the most effective intervention to shift somebody's behavior? And then bringing the machine learning and the data science in again to really think about what's the best way to test that. So a lot of what we do is really thinking about how do we assemble this suite of methods in the best possible way to answer the whole problem. hugo: That was Lis Costa, Chief of Innovation and Partnerships at the Behavioral Insights Team, on combining behavioral science and machine learning to actually solve problems. Lis: We designed a chatbot, which was really designed to have a dynamic, interactive conversation with somebody in the province: to really help them know where and when they could get vaccinated, to have personalized information about [00:01:00] eligibility, to give them very granular, practical information about, like, the address and the directions to a vaccination center. And what we saw was that the chatbot quadrupled vaccination rates compared to the control, but it also doubled them compared to the SMS message. So you're seeing actually both of those behavioral interventions are really effective, but the chatbot is significantly more effective than a kind of classic static communication. hugo: Lis is talking here about a real deployment, one that dramatically outperformed traditional outreach methods by using behavioral design and adaptive delivery. Lis: And we think there's a lot of opportunity here across lots of different policy domains, from business advice through to really sensitive areas about different types of health conditions or intimate partner violence. There's real opportunities to think about how do you [00:02:00] design thoughtful, behaviorally informed chatbots to try and shift people's behavior. hugo: In this episode, we talk about how behavioral science and machine learning can work together to shift real-world outcomes. We get into how defaults shape behavior, why machine learning models often break in dynamic systems, and what it means to design AI tools that reflect how people actually make decisions. If you enjoy these conversations, please leave us a review, give us five stars, subscribe to the newsletter, and share it with your friends. Links are in the show notes. Let's now check in with Duncan from Delphina before we jump into the interview. So I'm here with Duncan from Delphina. Hey Duncan. duncan: Hey Hugo. How are you? hugo: So before we jump into the conversation with Lis, I'd just love for you to tell us a bit about what you're up to at Delphina and why we make High Signal. duncan: At Delphina, we're building AI agents for data science. Through the nature of our work, we speak with the very best in the field, and so with the podcast, we're sharing that high signal. hugo: Totally, and I [00:03:00] love this conversation with Lis so much, but I was just wondering if you could let us know what resonated in it with you the most. duncan: Most of us in data science obsess over prediction accuracy. What will users do? When will they churn? Sometimes we obsess over causality. Did X cause Y? But Lis flips the script: not just predicting behavior or understanding it, rather architecting it.
Her vaccination chatbot didn't just nudge vaccination rates, it quadrupled them. Think about that. That is changing lives. This conversation is a wake-up call. What if the model is the easy part, and changing minds is the real frontier? Let's get into it. hugo: Hey there Lis, and welcome to the show. Such a pleasure to have you here, and I'm so excited to dive into all the wonderful things you're up to at the Behavioral Insights Team, and the rich history and future as well. So you are the Chief of Innovation and Partnerships at BIT, or the Behavioral Insights Team. So I'd love just a brief two- or three-minute [00:04:00] history of the Behavioral Insights Team and its origins in nudge theory, if you could help me with that. Lis: So the Behavioral Insights Team was set up in 2010, and it was set up inside the UK Prime Minister's office in 10 Downing Street by David Cameron. And it was really set up with the explicit mission to bring a more realistic understanding of human behavior into public policy, but also to help people make better decisions for themselves. And the evolution of the team really ran in parallel to the evolution of behavioral science in US government as well. So around the same time, Professor Cass Sunstein went into the White House as the administrator of the Office of Information and Regulatory Affairs, and within OIRA, Cass was really taking a behavioral framework and approach to that work. And what we did at the Behavioral Insights Team from 2010 [00:05:00] onwards was to really take that framework and apply it to specific public policy problems, and to do that in a really empirical way: testing, in randomized controlled trials, what is the efficacy of a particular change or behavioral intervention on people's real behavior. And one of the really interesting details is that the Behavioral Insights Team was set up with a sunset clause. So the deal was that within the first two years, we needed to prove a certain return on investment and to really prove the worth and efficacy of this approach. So what we did is we looked for areas where we thought we could really demonstrate strong impact and really concrete outcomes for UK government. So the first thing we focused on was tax collection, and we tested the impact of adding what's called a social [00:06:00] norm to UK tax collection letters. And at the time there was really a misperception that not paying your tax on time was really quite a common thing to do across the country, whereas in fact more than 90% of people were paying their tax on time. So what we did was very simple. We just added a line to the top of those letters saying nine out of ten people pay their tax on time. And what that did was really indicate to people that if they were not paying their tax, they were in the minority, and they were doing something that was not socially acceptable or socially normative. And that one intervention brought forward tens of millions of pounds of UK tax revenue and more than paid the return on investment of the Behavioral Insights Team. And it was really the start of a much more ambitious and expansive approach across lots of different policy [00:07:00] areas. And the White House evolved their team in a similar way. So while Cass was in OIRA, there was also a behavioral sciences team set up within the White House, which worked on all kinds of policy problems across the US administration. hugo: Incredibly cool. And of course, I'm so glad you mentioned Richard Thaler and Cass Sunstein's work.
I'll link to their book in the show notes, which everyone listening should definitely read, particularly 'cause a lot of people listening do work in technology, which for better and for worse uses a lot of the principles of Nudge. I'm wondering if you could just explain what a nudge actually is, perhaps with particular reference to techniques or concepts such as choice architecture. Lis: Sure. So a nudge is an intervention that makes a particular behavior more likely. So a classic nudge, which is also a choice architecture intervention, is changing the positioning of the salad at a canteen: putting it [00:08:00] at the front where it's easy to get, where people fill their plates first, and putting the chips behind that, or in a place where people need to ask for them. And this hits two notes that Cass and Richard developed as part of the theory of libertarian paternalism. So firstly, it has the paternalistic note to it, which is that the salad is first; there's a strong suggestion that the salad is something that you should be putting on your plate, that's good for you, that's healthy. But it's also choice-preserving and therefore libertarian, so you can still choose to put whatever you want on the plate at any time. And a really core tenet of nudge theory and libertarian paternalism is freedom of choice and the preservation of choice and autonomy: that at any point people are free to choose to do something different. hugo: Yeah, I really appreciate that context, and that's a wonderful [00:09:00] example. And a lot of the time people say, why do we need to have a choice architecture or have any structure? So, for example, in a social media feed, people are like, why do we need an algorithm to decide? And I suppose part of the point, and correct me if I'm wrong, is that there is always a choice architecture. Whether you've decided it or not, there is always an order of presentation. Lis: Yeah, absolutely. There is no default neutral choice architecture, so you're always operating within an environment where a choice has been designed for you, whether that's trying to encourage your kids to go to school, whether it's getting vaccinated, whether it's going to the supermarket, or going out for a night out with your friends. These choices are designed for you. The environments in which we make decisions are designed, and really I think the innovation of behavioral science is to be a lot more thoughtful and deliberate about that design, rather than letting it be designed in a way that's either [00:10:00] unthoughtful or, at the extreme, is manipulative or exploitative. hugo: Absolutely. I'm also glad and grateful that you mentioned the term libertarian paternalism, for several reasons. The term libertarian aside, which I think has been co-opted today in ways that aren't what we're talking about, the term paternalism makes my spidey sense tingle a bit, just to be transparent. And I'm wondering, to be provocative, the provocative question would be: why should Cass Sunstein decide whether it's healthier for people to eat a certain type of salad or another? Mm-hmm. And we'll actually get to a point in this conversation, I hope, where we talk about the fact that a lot of the foundational studies have occurred in western, educated, industrialized, rich, and democratic contexts and how we apply them globally. But in terms of the paternalistic aspect, how do you think about that framing today?
And how do you balance, quote unquote, influencing behavior with preserving autonomy and liberty? Lis: Yeah, absolutely. Well, I guess the first thing [00:11:00] to say is it's not Cass deciding. For the most part, particularly the early work done on nudging was done by democratically elected governments. And so the institutions, if you like, that are making those decisions are institutions, and MPs and politicians, who've been democratically elected on a particular platform. And I think a lot of the legitimacy of nudging, and people's trust in nudging, is really derived from the legitimacy of the institutions that design those interventions and roll them out. I think it's interesting you say that paternalism makes your spidey sense go. From a personal political philosophy, I'm more on the side of paternalistic. I do think that, within good, strong democratic government and institutions, there really are things that are demonstrably good [00:12:00] for us as individuals and as societies as a whole, and I think making choices that align with those things easier, more socially normative, more attractive is a really legitimate and positive thing to do in a society. But I do accept that that is a contested idea, and that's partly why I think Cass and Richard put together that surprising juxtaposition of the libertarianism and the paternalism. Because alongside that paternalism goes your rights as an individual to exercise your own agency and your own freedom to choose to do something different, including to do something that is demonstrably bad for you, if that is a choice that's freely made and informed. hugo: Yeah, that makes a lot of sense. And I did frame it as a provocative question; I didn't literally mean Cass making these decisions. [00:13:00] So I think we agree mostly. I do think it also relies on the understanding that we do have good, strong democratic institutions, which I think we probably both agree is a challenging conversation these days, or a conversation worth having. I am interested in how the field has evolved since the early days. What has changed for you in the past decade in terms of aims, methods, and/or institutional reach at BIT? Lis: An enormous amount. It's been really exciting and fulfilling to be part of it. So firstly, there's just been an enormous growth in the number and scale and reach of behavioral insights teams around the world. So the OECD's Observatory of Public Sector Innovation keeps a map of behavioral insights teams around the world, and there's now more than 600 operating at lots of different levels of government, within international institutions like the [00:14:00] UN and multilateral development banks, and within private companies. So it's really grown as a field, and alongside that has been a real growth in the breadth and ambition of the field, particularly the policy reach. When I started at BIT 10 years ago, the types of policy issues we were tackling were, I guess, if you like, relatively simple policy challenges, like how do you get people to pay their tax on time? How do you get people to switch energy provider? Some more complex, like how do you get people back into work, which is a really systemic, challenging issue. But over the years, I think the reach and ambition has really grown, and there are now people working on really complex, system-level change, like how do you reduce corruption? How do you reduce conflict?
How do you build socio-emotional skills in children so that they're able to better [00:15:00] regulate their behavior and build meaningful friendships, and reduce bullying in schools? So these are all much more complex and ambitious projects for behavior change, and I think the methods have kept pace with that. So we now use a much wider and more sophisticated methodological toolkit as well, to match those problems. hugo: Fantastic. And hearing that not only is this happening at local and national levels but at a global scale as well is incredible. And I'm sorry, something I meant to speak to is: congratulations on demonstrating such value in the first two years, 'cause that is a very short tenure to be able to demonstrate value with these types of highly challenging methodologies and areas of work. I am interested if you could speak to some examples that could let our audience understand where behavioral insights have led to meaningful change in public policy or service delivery. [00:16:00] Lis: Yeah, sure. So the big one is pension auto-enrolment, and that was one of the first and, I guess, highest-profile policy wins for behavioral science, particularly here in the UK, but also in other countries around the world. So in the UK, the government switched from an opt-in pension saving system, where it was possible for people to save into a pension for retirement, but they needed to actively choose to do that, it wasn't made particularly easy for them, and there wasn't a consistent employer contribution or a legislated employer contribution. And that switched to a default opt-out system, where when you start a job in the UK, you are automatically enrolled into a pension scheme that has a set level of contributions from the employer, but also from you as an employee. And that simple change of switching from [00:17:00] opt-in to opt-out, switching that default around, has led to about an additional 20 billion pounds of savings into retirement incomes in the UK, which is really transformative in terms of people's financial resilience, in terms of their quality and standard of living in retirement. And there's still a long way to go on that, in terms of increasing those contributions, really making sure that those pension savings are adequate, and also being more clever about the design. So a prominent US academic, Shlomo Benartzi, along with Richard Thaler, developed a scheme called Save More Tomorrow, which really harnesses the same types of mechanisms but does something called auto-escalation. So basically, when you get a pay rise, you pre-commit to automatically contribute an additional amount into your pension. And the idea is that it doesn't trigger loss [00:18:00] aversion, which is our tendency to weight equivalent losses more heavily than equivalent gains, because actually your pay packet is still increasing; it's just that, at the same time, more of that is going into the pension. So there's more to do, but a really tangible, significant improvement in policy outcomes. Also on tax collection: the work that we've done on that has brought forward more than a billion pounds in government revenues that have been paid sooner and been able to be used for public services. And those are trials that have been replicated in lots of different contexts around the world, from Mexico to Guatemala to Indonesia. So really robust, strong findings there in terms of the efficacy of a social norms message to increase tax collection. hugo: Such a wonderful and thoughtful example.
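For listeners who want a concrete sense of what testing an intervention in a randomized controlled trial can look like in code, here is a minimal sketch of comparing on-time payment rates between a standard letter and a social-norms letter. The group sizes and counts below are invented purely for illustration and are not results from the BIT trials; the analysis shown is a plain two-proportion z-test.

```python
# A minimal sketch of analysing a two-arm "social norms letter" trial.
# All counts below are invented for illustration only.
from scipy.stats import norm

# Hypothetical trial results: on-time payments out of letters sent per arm.
control_paid, control_n = 6_700, 10_000      # standard letter
treatment_paid, treatment_n = 7_200, 10_000  # letter adding "9 out of 10 pay on time"

p_c = control_paid / control_n
p_t = treatment_paid / treatment_n
diff = p_t - p_c

# Two-proportion z-test (pooled standard error under the null of no difference).
p_pool = (control_paid + treatment_paid) / (control_n + treatment_n)
se_pooled = (p_pool * (1 - p_pool) * (1 / control_n + 1 / treatment_n)) ** 0.5
z = diff / se_pooled
p_value = 2 * (1 - norm.cdf(abs(z)))

# 95% confidence interval for the difference (unpooled standard error).
se_unpooled = (p_c * (1 - p_c) / control_n + p_t * (1 - p_t) / treatment_n) ** 0.5
ci_low, ci_high = diff - 1.96 * se_unpooled, diff + 1.96 * se_unpooled

print(f"Uplift: {diff:.1%} (95% CI {ci_low:.1%} to {ci_high:.1%}), z={z:.2f}, p={p_value:.4f}")
```

Real trials of this kind typically also pre-specify outcomes and adjust for covariates, but the core comparison is as simple as the difference between two proportions.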
Thank you for elucidating that. If people haven't [00:19:00] heard of loss aversion as well, do check out the research of Daniel Kahneman and Amos Tversky, and I'll link to Thinking, Fast and Slow in the notes. But it's wild stuff, right? I think I'm gonna butcher it slightly, but on average, if we're given a hundred dollars and then have $90 taken away, we're less content than were we to be given nothing and have nothing taken away, even though we're ending up net positive. Right? Lis: Yeah, absolutely. And for all of your listeners who were trained as classical economists, of which I'm sure there's many, really what the findings of Daniel Kahneman and Amos Tversky, and all of the others who have followed in their footsteps, are doing is challenging some of those core assumptions of classical economics: that we are these perfectly rational, optimizing beings. And we just know that actually that's not the case, and that there are lots of ways in which our decision making and [00:20:00] behavior systematically deviates from that model. And there's still so much opportunity to think about how we apply that understanding of human behavior to our economic models, to our economic policy, to all different aspects of social policy, as well as how we achieve our own personal goals. hugo: I also love that you spoke to the auto-escalation that occurs, and it makes me think of what, I may not be using quite the right term, but like sensible defaults or something along those lines. Mm-hmm. Can't remember where it is, it might be in Australia actually, where on your driver's license the default is being an organ donor. You can opt out, of course, but that's the default, because in our society that makes more sense than not. Lis: Yeah, and actually that's a great example, because it's one that lets us dig into a bit of where defaults are appropriate and where they are not. And actually, organ donation is often held up [00:21:00] as a success story of behavioral science in terms of, exactly as you say, opting people into organ donation. Actually, many people in the field, including Richard Thaler, have publicly said that they don't think that's in fact the right approach in this scenario, and that they would much favor an active, prompted choice. And the reason's this: if you, Hugo, are defaulted into being an organ donor in Australia, and it's not something that you ever actively consider or discuss with your family or your loved ones, if you are unlucky enough to be in a terrible accident and for that to be triggered, and you haven't had that discussion with your family, a lot of the time at that point the decision gets unpicked. Because in that moment, your family, your loved ones, who are actually making that decision, are not really sure what your preferences and what your choice would've been. And so [00:22:00] the alternative to a hard default is to say, actually, every time you pay your driver's license fee, you are asked the question: Hugo, would you like to be an organ donor? And which organs would you like to donate? And then if, God forbid, the time comes where that choice gets activated, your family are then really confident and comfortable in the knowledge that that is what you would've wanted. So defaults are such powerful tools in the behavioral toolkit, but they need to be used really carefully, because they do imply a really strong value judgment that a particular choice is the right one, is the best one, for an individual, for society.
And so, in my view, they should be used really in cases where we are very confident that is the right choice for the vast majority of people. So things like pension defaults. hugo: Yeah. Thank you for [00:23:00] spelling that out, and spelling out the deep nuances involved here, where there are no obvious or easy answers. At the risk of being too explicit: anyone building data, machine learning, and AI products listening, very much be thoughtful and mindful about what your defaults are and who they are serving. Are they serving your users, your product, your company, your shareholders, your investors? Please do consider all of these things. So Lis, when we spoke last week, I mean, I've been following your work at the Behavioral Insights Team for some time, and the highly sophisticated statistical and data work. I don't think I appreciated till we spoke last week what you do with machine learning as well. So I'd love to dig into that. You've applied machine learning to policy problems, including building predictive models for regulatory inspections. I'm wondering what that involved and what you learned about combining behavioral science with machine learning. Lis: Yeah, absolutely. So back in 2016, [00:24:00] 2017, we ran a series of exemplar projects for the UK Cabinet Office to really demonstrate the power of data science and machine learning for understanding and solving policy problems. And I think really, at the time, those techniques were very widely used within the private sector, and particularly used in very sophisticated ways within tech companies, and also used in very sophisticated ways within academia. And still they were not widely applied or understood across governments as a tool for looking at policy problems from a different perspective. And so what we did was run a series of exemplars to really show, actually, what can you do with these kinds of methods. And one of the things that we did was looking at targeting regulatory inspections. So particularly, [00:25:00] here in the UK there is a body called the Care Quality Commission, which has a duty to inspect GP medical practices to make sure that they are compliant with regulations and basically operating at a high standard. And it's a really important job to maintain the quality of medical services across the country, but it's an organization that's operating with finite resources, and so deciding actually how to target those inspections in the most effective way is a pretty challenging problem. And what we did was investigate whether we could really improve the way that that targeting was done, by building a predictive model that pulled in some non-traditional data sources. So we had things like the clinical indicators that were already published by the Care Quality Commission. We had data from the Office for National Statistics on the type and number of [00:26:00] medications that were prescribed by different GP practices. But one of the things we did that was really quite novel at the time was also to scrape text from reviews of NHS practices, so National Health Service practices. So this was essentially patients going online and saying: I've been to see a GP at this particular practice, this was my experience, this is what I thought was really good, this is what I wasn't satisfied with, or this is what I'm concerned with.
So it's a really rich and under-exploited data set that, if you can harness it properly, gives you a lot of clues as to where you should be targeting those inspections to have the highest chance of finding the underperforming practices. And so what we did is we built a predictive model that drew in all of this data, and it performed really well: we could identify nearly all of the inadequate clinics by only [00:27:00] inspecting 20% of them. So we could identify 95% of inadequate clinics by only inspecting 20%. And if we'd just been using the CQC data, then our estimation at the time was that we would've been able to identify only 30% of those inadequate practices. So a really significant improvement, and, from a very practical perspective, it enabled the CQC to really use those finite resources in the most effective way possible. And these were, as I said, exemplar projects. There's a lot more that's been done since then, particularly on building algorithms that act as decision aids for public sector workers across different domains, but also thinking about how we can use machine learning and AI to design more personalized and targeted interventions for people as well, [00:28:00] and I think there's some really exciting work happening on that at the moment. In particular, I'm a real fan of the work that Sanjog Misra is doing at the University of Chicago, where he's looking at using AI models to test many variations of a particular nudge or behavioral intervention to work out, actually, what's the optimum strategy for different groups of people. And there's a lot of potential there for us to improve the efficacy of behavioral interventions, but also to be able to operate at a much more sophisticated level and a much greater scale as well. hugo: Amazing. Well, thanks for speaking to that example and hinting at everything else happening in the space. So what I'll do is I'll get some links from you of case studies and such things to share in the show notes with people who wanna dig deeper. I also very much love how you mentioned predictive analytics, and that all of the work you do is in service of decision making. For a bit more context, I do a lot of work in the education space for data science, ML, [00:29:00] and AI, and one of the classic examples that we teach is churn prediction, right? Mm-hmm. And this seems silly compared to all the fundamental things we're discussing now, but it's instructive, because if you build a churn prediction model, it doesn't tell you what intervention to make. It's actually useless unless you have some sort of causal inference involved, or something along those lines. So, for example, if you predict someone's gonna churn from your business and it's because your customer service isn't responding, you'll intervene in a very different way, as opposed to if they've got an offer where you've been undercut by a competitor. So speaking to how decisions are being made is incredibly important and so key here. Lis: Totally. And also integrating these methods together: using the machine learning and the data science to help you understand a problem, understand a context, then bringing in the behavioral science to say, okay, what is likely to be the most effective intervention to shift somebody's behavior, and then bringing the machine [00:30:00] learning and the data science in again to really think about what's the best way to test that.
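To make the inspection-targeting idea concrete, here is a minimal sketch of that kind of triage model: structured indicators plus free-text reviews feeding a classifier, evaluated by how many genuinely inadequate practices fall in the top 20% of predicted risk. Everything below, the column names, the simulated data, the sample size, and the choice of logistic regression, is an illustrative assumption rather than the actual CQC/BIT pipeline.

```python
# A minimal sketch of a "triage" model for targeting inspections: combine
# structured indicators with free-text reviews, rank practices by predicted
# risk, and ask how many genuinely inadequate practices sit in the top 20%.
# The data below is randomly generated; features, sizes, and model choice
# are illustrative only.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    # Hypothetical structured indicators (e.g. published clinical metrics).
    "clinical_score": rng.normal(0, 1, n),
    "prescribing_rate": rng.normal(0, 1, n),
    # Hypothetical scraped patient reviews, reduced to a handful of phrases.
    "review_text": rng.choice(
        ["long waits and rude staff", "excellent caring doctors",
         "could not get an appointment", "clean and well organised"], n),
})
# Synthetic "inadequate" label, loosely related to the features.
risk = -df["clinical_score"] + df["review_text"].str.contains("waits|appointment") * 1.5
df["inadequate"] = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="inadequate"), df["inadequate"], test_size=0.3, random_state=0)

model = Pipeline([
    ("features", ColumnTransformer([
        ("numeric", "passthrough", ["clinical_score", "prescribing_rate"]),
        ("text", TfidfVectorizer(), "review_text"),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Capture rate: what share of inadequate practices would we find if we only
# inspected the 20% of practices the model flags as highest risk?
scores = model.predict_proba(X_test)[:, 1]
top_k = scores >= np.quantile(scores, 0.8)
capture = y_test[top_k].sum() / y_test.sum()
print(f"Inspecting the top 20% by predicted risk captures {capture:.0%} of inadequate practices")
```

The "capture at 20%" metric mirrors the framing in the conversation: fix the inspection budget, then ask how much of the problem the ranking actually finds.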
So a lot of what we do is really thinking about how do we assemble this suite of methods in the best possible way to answer the whole problem. Yeah. hugo: Wonderful. And something I'm hearing there is it is systems thinking and holistic thinking, in a lot of ways. Lis: Yeah, absolutely. hugo: So something you just spoke to was more personalized interventions. Mm-hmm. A lot of classic behavioral interventions have used static communications, like letters or text messages, often sent at scale. So I'm wondering, what are the limitations of that approach? And could we drill down into how the field is shifting towards more dynamic and personalized methods? Lis: Sure. So a lot of the early studies, and actually many still, are communications-based, and a lot of the benefit of this type of approach is that you can do it in a very low-cost way if you are slotting an intervention into an existing administrative process. For example, [00:31:00] if a tax agency is already sending a letter to all of the people who have a tax obligation, it's very easy to vary parts of that letter and to run a randomized controlled trial to test what is the impact of that variation. And so I think there's still real value in taking that approach, and it has been a very successful one for the field. The limitation is that necessarily, even if you've got the power to test many variations and many types of interventions, you are looking at a fairly blunt, for want of a better word, a fairly blunt intervention that you think is going to shift behavior at the population level. And more and more, where we're able to run really large-scale trials, we have the power to detect actually what is the impact for different types of people, different groups within that sample. [00:32:00] And that's fantastic to be able to do that. But what we're also able to do now, more and more, is, rather than sending letters or SMS, use new technology, like WhatsApp chatbots for example, to actually build communications-based interventions that are dynamic, that can be personalized to a much greater degree, and also are able to be iterated as well. And that gives us a much better chance of targeting a behavioral barrier at more of an individual level than a population level. hugo: That makes a lot of sense, and that's something that I see a lot in the machine learning space, but increasingly so in the generative AI space: not only the ability to rapidly iterate on products, for lack of a better term, but also the need to, because of the stochasticity of the product itself. Not only does it [00:33:00] give different outputs, but there's the amount of different inputs you get as well. Lis: Absolutely. hugo: And in other contexts I've worked on conversational AI with a variety of teams, and you can write down what you think people are gonna say to your system, but you will always be surprised at the type of things you see. Right? Lis: I mean, we're endlessly fascinating and surprising creatures, aren't we? hugo: Absolutely. Lis: And you can have all the predictive models in the world and people still surprise you. hugo: Without a doubt. One example that I find really interesting was the use of chatbots to increase COVID vaccination rates in Argentina. Mm-hmm. So I'm wondering if you can tell us a bit about what that intervention looked like, and what it revealed about how delivery methods can affect outcomes. Lis: Sure.
So this is one of my favorite projects that we've done over the past couple of years. We worked with a particular province in Argentina who were interested in increasing the rates of COVID-19 vaccinations. Booster uptake was particularly low in this province: only [00:34:00] 70% of the population had completed their primary course, and only 35% had had a booster. And so, from a public health perspective, the province was very interested in how they increased those vaccination rates. And in our exploratory work, we found that there were a number of barriers to taking the vaccination. So particularly, there was a sense that the risk wasn't particularly high at the time, when infection rates were decreasing across the population. There was a lot of friction and hassle associated with finding a vaccination center and booking an appointment. And just generally a real intention-action gap, where people were saying yes, they wanted to get vaccinated, but actually the practicalities of doing that were deterring them from going and getting the jab. So what we did is we designed a chatbot, which was really designed to have a dynamic, interactive conversation with [00:35:00] somebody in the province: to really help them know where and when they could get vaccinated, to have personalized information about eligibility, to give them very granular, practical information about, like, the address and the directions to a vaccination center. So really trying to target multiple behavioral barriers at a personalized, individual level, which would be very hard to do via SMS or a letter. And what we did is we actually tested the impact of that chatbot compared to nothing, so a control where people got no communications, but then also we compared it to a static SMS message, because we wanted to understand, actually, what's the additional impact of the chatbot itself and that dynamic, personalized interactivity. And what we saw was that the chatbot quadrupled vaccination rates [00:36:00] compared to the control, but it also doubled them compared to the SMS message. So you're seeing actually both of those behavioral interventions are really effective, but the chatbot is significantly more effective than a kind of classic static communication. And we think there's a lot of opportunity here across lots of different policy domains, from business advice through to really sensitive areas about different types of health conditions or intimate partner violence. There's real opportunity to think about how do you design thoughtful, behaviorally informed chatbots to try and shift people's behavior. hugo: That's such an incredible example, and to see those tiered impacts on outcomes. And as you're saying, the technology is at a place where, of course, we need to be incredibly mindful and have guardrails, and alignment is a serious concern, almost [00:37:00] an existential concern for us as a civilization, I think. But the technology is at a point where we can start deploying these things at scale and hopefully finding deeply beneficial outcomes. Lis: Totally. hugo: You talked about boosters, which reminded me of boosting versus nudging, and I think this speaks to our conversation earlier about paternalism and the challenges there as well. So I'm wondering if you could tell us a bit about what boosting is, how you are thinking about it, and why you're so interested in it currently. Lis: Sure. Let me tell you what boosting is, and then I'll tell you why I'm so interested in it.
So if we think about a nudge as something that helps you make a better decision, a boost is something that helps you become a better decision maker. And the reason I'm particularly interested in this is that I think it's an evolution of our approach to behavioral science. It's one that really puts a lot of confidence in human beings, that we are able to [00:38:00] build our skills and capabilities to make better decisions, and also to exercise our own agency. And so I think it's a really promising evolution of behavioral science, and I really see them as on a spectrum. These two things are not in opposition to one another. There are lots of moments and contexts where it's appropriate to have a nudge-style approach, but I think also many where a boost-style approach is more effective. So I guess to delve into a bit more of what that means in practice: if you think about the nudge that we talked about at the start of the conversation, you have a choice architecture intervention where you switch the position of the salad and the chips. It's a particular decision at a particular point in time. So you can imagine that it helps you choose that salad in that canteen at that moment, but actually, as soon as the [00:39:00] cafeteria switches them back around, the impact of the nudge is lost, because it hasn't actually changed your underlying biases or capabilities or motivations or preferences. It's changed your behavior in a particular moment. And as I said, that has a real place, and I think there's lots of opportunities and contexts where it's the right approach. But the ambition of boosting is a lot more than that: it's to enhance our capacity to reflect, to decide, or to achieve something that really matters to us, by building our capability and our skills, and trying to understand and overcome our behavioral biases rather than taking them as given. And I think there's a lot of different ways in which this can be applied, from teaching kids to [00:40:00] stop and think about their options rather than fight in moments of disagreement, through to prompting us to really reflect on our motivations and preferences in our engagement with social media and technology. And what we're seeing across lots of experiments is that these approaches are really effective, and also those effects last. They're really sticky, and they do help us become better decision makers across lots of different aspects of our lives. It's an area I'm particularly interested in, and with my co-author, David Halpern, we're putting together a book at the moment about boosting and about how we become better decision makers at an individual level, but also how we design a world that helps us make better decisions as the norm. If your readers are interested, they can stay tuned for that later next year. hugo: Absolutely. So everyone, please do keep your eyes out for Boost, and do follow and connect with Lis on [00:41:00] LinkedIn and follow the Behavioral Insights Team to keep in touch. And once again, at the risk of being explicit: when designing and building products, think about empowering your users to make better decisions. So Lis, with respect to personalization, among other things, there's a growing interest in understanding what works for whom, when, and why. And I'm wondering how that emphasis on heterogeneity influences the design and evaluation of interventions. Lis: Yeah, absolutely. This is such an interesting area of the field, and actually it's an area I chaired a panel on.
We just ran Behavioural Exchange, which is a conference that brings together academics and policymakers, in Abu Dhabi. And if this is something your listeners are interested in, all of the sessions are available online; we can link to them in the show notes. hugo: Cool. Lis: And I chaired a plenary session about adoption [00:42:00] versus adaptation. And really, what that session was looking at was trying to understand how do we as a field move from an investigation of what works to an investigation of what works for whom, when, and why, and when is it appropriate to adopt successful interventions in different contexts, and when do we need to adapt them. And it's an area of the field that I think is vibrant and emergent, and an area where there'll be a lot more investigation over the coming years, particularly as different countries explore the application of behavioral science, and particularly where that's done in quite different cultures and contexts. And I think as a field we've really built our success on an empirical approach. We've talked today a lot about methods, a lot about experimentation and evaluation, and really a core of the field has been that deep [00:43:00] empirical investigation of: to what extent does a particular intervention change people's behavior, and how confident are we in that? And that has been really the cornerstone of the success and credibility of the field. But really, one of the trickiest challenges we then face is to what extent we can expect a finding in one place or context to work in the same way in another place or context. And I think at the core of this is a temptation for us to see what works as a question that we can definitively answer, rather than a more insistent question that we should explore over time and in different places and contexts. In particular, the academic Dilip Soman has written a really great book called What Works, What Doesn't, and When, and he [00:44:00] talks about behavioral scientists as cartographers who are exploring different policy landscapes, and really encourages behavioral researchers to think about context, scalability, adaptation, and to really try and investigate, actually, what is it about a particular intervention that makes it work. If you like, what's the secret sauce? What's the piece of it that is really driving behavior change? And then to be really thoughtful about how do you take that and adapt it for a different place or context. I should say, though, this is a contentious space, and Professor Cass Sunstein was on that panel as well, and Cass talked about his observation that there are really some generalizable insights about human behavior that hold really well across lots of different contexts and lots of different countries. So, for example, we are pretty confident that defaults work very well in lots of [00:45:00] countries, lots of contexts. And so his provocation, if you like, was: definitely make sure that we are being thoughtful about scaling and adaptation and context, but also let's not lose sight of the core tenets of behavioral science as a discipline, that these are systematic, generalizable insights about human behavior. hugo: I love that you spoke to the fact that there aren't static methods that work outside the dynamic nature of our human systems and our societal systems. And I actually love the landscape analogy, because glaciers melt and the Colorado River carved out the Grand Canyon, right? On a variety of different timescales.
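One way the "what works for whom, when, and why" question shows up in analysis code is through treatment-by-subgroup interactions. Here is a minimal sketch on simulated trial data; the subgroup, effect sizes, sample size, and model are invented for illustration and are not drawn from any study discussed above.

```python
# A minimal sketch of moving from "what works" to "what works for whom":
# estimate whether a treatment effect differs across subgroups using a
# treatment-by-subgroup interaction. Data is simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),               # randomised 50/50 assignment
    "age_group": rng.choice(["younger", "older"], n),
})
# Simulated outcome: the intervention helps older people more than younger.
base = 0.30
effect = np.where(df["age_group"] == "older", 0.12, 0.04)
df["outcome"] = rng.binomial(1, base + df["treated"] * effect)

# Average effect across everyone (the classic "what works" estimate)...
overall = smf.logit("outcome ~ treated", data=df).fit(disp=0)
print(f"Pooled treatment coefficient (log-odds): {overall.params['treated']:.3f}")

# ...versus allowing the effect to vary by subgroup ("what works for whom").
hetero = smf.logit("outcome ~ treated * C(age_group)", data=df).fit(disp=0)
print(hetero.summary().tables[1])

# Subgroup treatment effects on the probability scale.
for group, sub in df.groupby("age_group"):
    lift = (sub.loc[sub.treated == 1, "outcome"].mean()
            - sub.loc[sub.treated == 0, "outcome"].mean())
    print(f"{group}: uplift of {lift:.1%}")
```

More flexible approaches, such as causal forests or meta-learners, estimate effects at the individual level, but an interaction model like this is a common starting point for exploring heterogeneity.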
Totally. And recognizing how dynamic we need to be in our approaches. Lis: And the way that we intervene changes the landscape: where you build a dam, you change the topography of a river. And I think this is so important when we're [00:46:00] designing and running experiments, particularly in digital environments. We know these are really dynamic environments. And to take a concrete example, if you are trying to reduce the level of disinformation, or increase resilience to disinformation, on a social media platform, you can intervene at a particular point in time and run an experiment about the efficacy of that intervention at a particular point in time, and that's fantastic. But we know that it's a dynamic system that's going to change. The content creators adapt their behavior. The platform adapts its behavior. The users of the platform adapt their behavior. And we need to be constantly thinking about: how does behavior change? What is the most effective way to intervene and achieve different outcomes in a dynamic system? hugo: I love that framing. It's actually very important, as I'm sure you appreciate, for data science and machine learning. Mm-hmm. And we spoke [00:47:00] about how a lot of our listeners do have economics backgrounds. I do wish data science itself had a lot more economics in it. I think places like Amazon and Uber, for example, were able to get a lot of that talent. But I don't think data scientists necessarily recognize at the start that if you build a predictive model and then perturb the system, your predictive model was done at equilibrium, right? So none of your findings will necessarily hold there, and they likely won't. Lis: Well, exactly. And it's really about knowing what questions your methods are going to answer, right? And we're doing a lot of exploratory work at the moment about synthetic data, and particularly the potential and opportunity to use synthetic participants to triage behavioral ideas and interventions. And that is a really interesting method, and it will give us some answers. But, as we were saying earlier, human beings are unpredictable. They're going to do things in a [00:48:00] dynamic system that we don't expect, and so it's not gonna give us all the answers of actually how something's gonna play out in the real world. hugo: Without a doubt. And speaking of the way we think evolving in particular contexts, and then us attempting to apply it elsewhere: as we hinted at earlier, many foundational studies in behavioral science were conducted in what are now called WEIRD contexts, the acronym standing for Western, Educated, Industrialized, Rich, and Democratic. And I appreciate there are reasons why perhaps some people don't like the acronym used in certain contexts, but I think it is a wonderful resetting of what we consider normal and what not, and the fact that a lot of the studies are in a narrow portion of the global population. So I'm just wondering how you think about applying behavioral insights across different cultural and institutional settings. We talked about vaccination in Argentina before, right? Yeah. So I'd love your reflections on that. Lis: Absolutely. And it really comes [00:49:00] back to that question of adaptation versus adoption, and, increasingly, being extremely thoughtful about how to adapt the existing evidence to new places and contexts. And one of the ways that we do that in practice is, whenever we are running a behavioral research project, we have a phase of that project that we call Explore.
And one of the ways that we do that in practice is whenever we are running a behavioral research project, we have a phase of that project that we call Explore. And the whole end-to-end project methodology is called tests. So that stands for Target Explore Solution, trial or Test and then Scale. But the Explore part of it is so important because that is where we go out and with our partners and really try and understand and appreciate the context in which. The behavior we're trying to change exists. So if we're trying to reduce bullying in schools, we would go and sit in the classrooms and observe students and really understand [00:50:00] like what are the dynamics and in that environment, what are the motivations of the students? So really trying to experience a service or a situation for ourselves, particularly trying to identify. What are the things that get in the way of changing people's behavior, whether those are internal or external, but also using as much administrative data as we can to get a really rich picture of what is that context and place? What is likely to be the most effective way to change behavior? So again, I think this is a part of the field that is evolving really quickly and. There are a lot of people doing great, thoughtful work on adaptation and scaling. hugo: Thanks so much for expounding on on that as we have a lot of data and AI leaders in our audience, and I wanna ask you for some practical lessons. I know that I've broken the fourth wall and turned to the camera and been a bit too explicit a couple of times, but I'm wondering [00:51:00] what practical lessons from your work would you want them to take away and apply in their own organizations today? Lis: I would only encourage them to. Be really thoughtful about human behavior and how it interacts with the work that they're doing. We are all behavioral designers to a certain extent. Whatever problem you are working on, at some point it's going to hit a human being, whether that's a customer or or a colleague or a citizen. It will hit a human being and there'll be an interaction with what you're doing, and the more that you can understand actually. How does that human being think? What is the structure of their decision making? What is driving their behavior? And therefore, what are thoughtful, intelligent ways to design products, services, data to really encourage good outcomes and. If anybody is specifically interested in working on projects that integrate behavioral science, [00:52:00] I'd obviously be delighted to talk to people in more detail. But yeah, that would be my main thing. Really think about yourself as a behavioral designer as well as a data scientist or machine learning specialist. hugo: Fantastic. And as I said before, everyone please do connect with Liz on LinkedIn and check out the Behavioral Insights team do reach out. And in particular, you are starting to partner with a variety of different or organizations, so you'd be happy to hear from our audience. Right? Lis: Very much. I think one of the areas that I'm incredibly interested in is how we make decisions in online environments, particularly social media platforms, but all types of online environments. We have. Published work from our collaborations with Meta, where we've been particularly looking at governance and bringing the collective voice of meta users into the decisions made on the platform. And we are really interested, particularly in the interaction of AI and behavioral science and what [00:53:00] happens when intelligent. 
AI systems and tools hit human beings. And we're gonna do a whole other podcast on that, but that's an area where we would really love to collaborate with people across the industry, and we would be delighted to hear from you. hugo: Fantastic. So we'll link to the resources on the work you've done with Meta in the show notes, and please do reach out to Lis and the Behavioral Insights Team if you'd like to partner or chat in any way. And as Lis hinted at, we're gonna do a whole other episode on generative AI and large language models related to behavioral science and what we can learn there as well. Lis, thank you, not only for all the wonderful work you do, but for your generosity and your wisdom in coming and sharing all of your findings with us. Lis: Thanks so much, Hugo. It's been an absolute pleasure, and I hope people enjoy the episode. hugo: Thanks so much for listening to High Signal, brought to you by Delphina. If you enjoyed this episode, don't forget to sign up for our newsletter, follow us on YouTube, and share the podcast with your friends and colleagues. Like and subscribe on YouTube, and give us five stars and [00:54:00] a review on iTunes and Spotify. This will help us bring you more of the conversations you love. All the links are in the show notes. We'll catch you next time.