The following is a rough transcript which has not been revised by High Signal or the guest. Please check with us before using any quotations from this transcript. Thank you.

===

amy: [00:00:00] There's a case that I teach on two major public policy decisions that happened 60 years ago, both under the administration of President Kennedy: one being the Bay of Pigs decision, widely considered to be a fiasco, just one of the worst decisions made, and the other being the Cuban Missile Crisis, which has been widely considered to be a very successful decision. And the difference can literally be described as failing to engage in high-quality conversation about the data (largely qualitative data, although photographic images of missiles and so forth) in one, and deeply engaging in the other: not only engaging with the data in a room full of smart people who had different points of view, but debating the perspectives and the likelihood of what might happen. Thinking through the scenarios of what would happen if we did this or that. Shifting, and here's a really good example of a high-quality conversation, from a simple decision, [00:01:00] should we invade or not invade, and realizing, no, there are actually nine options.

hugo: That was Amy Edmondson reflecting on the difference between two of the most consequential decisions in 20th-century American, and dare I say world, history, and how the quality of the conversation around data helped determine the outcome. In this episode of High Signal, Duncan Gilchrist and I speak with Amy Edmondson and Mike Luca about what it really takes to make high-stakes decisions in data-rich environments: not just analyzing the numbers, but questioning assumptions, surfacing uncertainty, and creating the conditions for good judgment. Amy is a professor at Harvard Business School and a leading thinker on group dynamics and leadership. Mike is a professor at Johns Hopkins whose research explores how organizations use and misuse data. Together, they've written about the failure modes that show up again and again in decision making: anchoring, false precision, hierarchy, and the illusion of causality. We [00:02:00] also talk about what it takes to avoid these traps, how to run better meetings, ask better questions, and combine analytical rigor with cultural design. And as algorithms and LLMs increasingly shape our options, we ask what role human judgment still needs to play. If you enjoy these conversations, please leave us a review, give us five stars, subscribe to the newsletter, and share it with your friends. Links are in the show notes. Let's just check in with Duncan before we dive in. Hey Duncan.

duncan: Hey Hugo, how are you?

hugo: I'm well, thanks, Duncan. So before we jump into the conversation with Mike and Amy, I'd just love for you to tell us a bit about what you're up to at Delphina and why we make High Signal.

duncan: At Delphina, we're building AI agents for data science. Through the nature of our work, we speak with the very best in the field, and so with the podcast, we're sharing that high signal.

hugo: We had what I found to be such a wonderful conversation with Amy and Mike, which we're about to get into, but I was just wondering if you could let us know what resonated with you the most.

duncan: [00:03:00] Wow. What an example. The Bay of Pigs: a catastrophic mess. The Cuban Missile Crisis: scary but masterful. The difference isn't better data. It's better conversation about the data.
You know, I've been close with Mike since I was in grad school at HBS, and this is a really fun episode with him and Amy, and it is chillingly real. It just rings so, so true. Amy obviously has decades of research on psychological safety and team dynamics, and Mike has cutting-edge work on experimentation and data-driven decisions. A theme in High Signal has been how it's about more than the models, and in this episode it's about even more than the data. It's about how we debate it and challenge it. Let's get into it.

hugo: Hey there, Amy and Mike, and welcome to the show.

Glad to be here. Thanks.

hugo: Mike, you're a professor and director of the Technology and Society Initiative at Johns Hopkins, and Amy, you're a professor at Harvard Business School [00:04:00] and an author. We're here for several reasons, but together you two wrote a wonderful HBR piece at the end of last year called "Where Data-Driven Decision-Making Can Go Wrong." So I'm really excited to jump into this, but I just thought maybe you could tell us a bit about your collaboration, your various backgrounds, and how and why you decided to work together on this.

amy: I'm happy to go first on that one. As you mentioned, my field is really leadership and organizational behavior, and one of the things that I focus on in particular is group dynamics and the decision making that happens in groups. I've been studying that for a long time. When one does that, it's really a study, more often than not, of things that go wrong: of bad decisions, of group-dynamic breakdowns that led to organizational results that nobody liked. So I'm intrigued by the interpersonal dynamics of that, and I think Mike is the one who approached me, knowing that about my background, with his deep expertise in data [00:05:00] science, and thought that maybe we should get together and explore how our two fields play out in actual practice and give rise to problems that might be preventable.

mike: Yeah. I'd developed a course called Data-Driven Leadership, and the idea of the course was to help MBA students understand: what are some of the challenges that occur when interpreting data? What are the challenges that happen when you see an internal report or an external report? How can you better leverage analytics to guide your decisions? And then in the last part of the course, we would discuss some of the issues around creating a culture of data-driven decisions. So I'd say, as I look through the different parts of the course, I saw a lot of overlap with things I knew Amy had been thinking about. And really, when you think about what goes right or what goes wrong with data, one piece of it is the analytics itself. But another piece of it really is about the discussions you're having, and about how to go from an analytic result to a decision that an organization is struggling with.

hugo: I love it, and I'm [00:06:00] really excited. A lot of the conversation in the space at the moment revolves around automation, yet a lot of people are trying to figure out how organizations can adapt to what's happening at the moment. So the human challenge is incredibly important. And I love that you spoke to thinking through failure modes as well. My background's in basic science research, and I always thought a journal of negative results would've helped us a huge amount.
So I'm wondering if we could just open with you perhaps telling us about some of the serious ways in which data-driven decision making can go wrong.

mike: So, we talked about some of this in the article, and it's something I certainly cover a lot in that MBA course and see a lot in organizations when I'm thinking about going from analytics to decision. Maybe I'll focus a little bit here on some of the things that happen in that part of the decision process. So some data point or some analysis has been surfaced, you're in a decision-making mode, and you're trying to make a managerial decision. It could be: how should we set wages for the next year? It could [00:07:00] be: should we roll out this new product or not? So think about being in that situation, and someone surfaces a causal claim, for example. Some of the things that go wrong are things like conflating correlation and causation, which is something that we've all heard a million times, but often, when presented with a data point, we still don't have the opportunity to dive deep enough into it to understand whether the causal claim is being supported. So that's one type of thing that we focus on. Another, in that mode, is thinking about what's measured and how long it's being measured for. So, really basic questions about the nature of the analysis. For people with research backgrounds, you might think about this as asking questions about the internal validity and the external validity of a finding, and then really trying to understand it in its own context and then port it over to the decision that you are making. Now, zooming out, a lot of what's going wrong is that that discussion isn't happening at all. So what Amy and I are interested in is: [00:08:00] when you're in that decision-making mode, how do you have a good conversation about really understanding the data, the analysis, and how it connects to the decision that you're making?

amy: Yeah, let me build on that, because for me, probably the most important issue is: are we having a high-quality conversation? If we're having a conversation where we need to make a managerial decision and data are involved, what's the quality of that conversation? Mike has pointed to some of the well-known pitfalls, right, where you're conflating correlation and causation and so forth. In a sense, we all know those, but we don't see them happening in real time, in part because the quality of our conversations and interpersonal skills isn't high enough. I think of a high-quality conversation as one in which, first of all, everyone's engaged: people are either listening or speaking about some [00:09:00] relevant aspect of the problem. Nobody's on their phone, and people aren't interrupting each other in harmful ways. Secondly, there's a nice mix of genuine questions, right? And we can talk about the kinds of questions that really help people unpack and uncover some of the assumptions they might be making that they're unaware of. So we have a nice healthy balance between people making claims and people genuinely inquiring to learn more about each other's thinking. And finally, if you're in that conversation, do you feel like you're making progress? Does it feel like we're getting somewhere, or are we going around in circles, or are the loudest voices winning, or what have you? So where Mike and I come together, we say: if you can have the highest-quality conversation about these
challenging and often ambiguous issues where data come in, then you are so much better off in terms of your ability to make a good decision.

hugo: I appreciate all of that, and I [00:10:00] really love that you mentioned the correlation-is-not-causation example, because as you said, that's something we all know in the abstract. But in your HBR piece, you give an example that I found wonderfully surprising. I wouldn't expect this to happen at a place like eBay, for example, right? So we see these things happen all over the place constantly, right?

amy: Yeah. You have really smart data science people, and it's not always the case that people in the different silos and different areas of expertise know how to communicate effectively with each other in ways that are then heard, understood deeply, and engaged with in the substance in productive ways.

hugo: I also really appreciate how you're very much trying to tie data to the decision function, to actions, and to what we actually do in organizations to move the needle. I'm wondering if you could share an example where you saw solid evidence and data shifting strategy and delivering real impact, and maybe tell us a bit about [00:11:00] the things that happened there that allowed that to happen.

mike: Yeah, I could jump in on this. I'll give you an example where we had some data, an organization saw the analysis, and they made some decisions that changed the way they were doing some aspects of their operations. That'd be in the context of Airbnb. So we had this study in which we had run an experiment and documented discrimination on the platform. We were looking at discrimination against black guests on the platform, and what we saw is that black guests were getting disproportionately rejected relative to white guests for otherwise equivalent applications to stay at places. Airbnb saw this and ended up putting together a task force, and really realized that they weren't measuring the potential for discrimination. There's this line that goes back for decades, if not a century now, versions of "what gets measured gets done": you're focused [00:12:00] on a set of outcomes, and that's what you tend to optimize toward if you're being data-driven. But what we measure is not necessarily always what we care about. So if you have narrow measurement or short-term measurement, it can lead you to miss important pieces of the puzzle. Here, Airbnb had already been running plenty of experiments, so it wasn't actually lack of data or lack of analytics that was the problem. It's that they weren't measuring the full set of things that they as an organization cared about. So after we surfaced this research, they ended up making some changes to the platform design. For example, before our study, Airbnb would allow you to see the name and picture of a potential guest before deciding whether to accept or reject them. Now you can't see the picture until after you make an accept or reject decision. As a second example of the types of changes they've made, they started increasing the use of instant booking, trying to automate some of the process that previously hadn't been done at the same [00:13:00] scale. A third piece of what they did is they really thought more carefully about their measurement and whether there are ways to understand: can you put in place a guardrail, for example, where if you make a change and it looks good on the stuff that you have been measuring, but it also risks increasing discrimination on the platform, can you surface that possibility and make a decision about which aspects of the data you're most interested in and whether or not to proceed, rather than just ignoring the potential for discrimination? So to me, that was a situation where, over a number of years, Airbnb shifted their strategy, going from not really paying attention to this as an issue to starting to think more carefully about how you measure that possibility and then account for it in some of their operational decisions.
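To make the guardrail idea concrete, here is a minimal Python sketch. It is not Airbnb's actual process; the metric names, the numbers, and the one-point tolerance are hypothetical, purely to illustrate weighing a primary-metric win against movement in a metric you care about but don't usually optimize.

```python
# A minimal sketch of a "guardrail" check: before shipping a change that wins
# on the metric you usually optimize, also check a metric you care about but
# don't usually optimize (here, a gap in acceptance rates between guest groups).
# All numbers, names, and the 1-point tolerance are hypothetical.

def launch_review(primary_lift, guardrail_before, guardrail_after, tolerance=0.01):
    """Weigh the primary-metric win against movement in the guardrail metric."""
    guardrail_change = guardrail_after - guardrail_before
    if primary_lift <= 0:
        return "no win on the primary metric: do not ship"
    if guardrail_change > tolerance:
        # The change helps the primary metric but widens the acceptance-rate gap.
        return "primary metric improved, but the guardrail regressed: escalate for discussion"
    return "primary metric improved and the guardrail held: ship"

# Hypothetical experiment readout: +2% bookings, but the acceptance-rate gap
# between guest groups grew from 3 points to 6 points.
print(launch_review(primary_lift=0.02, guardrail_before=0.03, guardrail_after=0.06))
```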
hugo: I love that example, 'cause it ties a challenge they were facing to measuring the correct things and then making product changes around it. And I'm wondering, Amy, whether with that example or another one, you'd be able to share ideas for how we can [00:14:00] organize meetings and culture to have these high-quality conversations and make sure we're doing the right things.

amy: Yeah. It's funny, because when you first asked for an example, I held back to have Mike go first, 'cause the one that popped into my head, and then I couldn't get it outta my head, was a case that I teach, which is on two major public policy decisions that happened 60 years ago, both under the administration of President Kennedy: one being the Bay of Pigs decision, widely considered to be a fiasco, just one of the worst decisions made, at least up until that time, and the other being the Cuban Missile Crisis, which has been widely considered to be a very successful decision. And the difference can literally be described as failing to engage in high-quality conversation about the data (largely qualitative data, although photographic images of missiles and so forth) in one, and deeply engaging in the other: not only [00:15:00] engaging with the data in a room full of smart people who had different points of view, but debating the perspectives and the likelihood of what might happen. You know, thinking through the scenarios of what would happen if we did this or that. Shifting, and here's a really good example of a high-quality conversation, from a simple decision, should we invade or not invade, and realizing, no, there are actually nine options, right? We could invade or not invade. We could have a blockade or not a blockade. We could threaten Khrushchev and the Soviet Union at the time, right? So they realized the first thing you do when you're making a really consequential decision is make sure there aren't other alternatives. And indeed, the one they ultimately picked was one that was not even on the table in the beginning. They went so far as to use techniques like forcing people to switch sides: now you must debate the other side. And again, this sounds lengthy and complicated. It isn't. It actually can be quite efficient. It's just thoughtful and deep and careful use of [00:16:00] data by managing the process. That's probably the first thing you have to do if you wanna have high-quality conversations around important data and important decisions. And by the way, where should we have lunch today? That's not an important decision, right? No need to get carried away and use these skills and tools for everything.
But for those decisions where the stakes are reasonably high and the uncertainty is reasonably high, those are the ones where we wanna really focus and just commit to a high-quality conversation about the data. And you do that, I think, first by calling attention to the uncertainty and the high stakes, saying: this matters, let's roll up our sleeves, let's get it right. Secondly, by becoming the process architect of the conversation, right? Good conversations don't just happen. They need to be helped. They need to be designed. Okay, we're gonna debate, then we're gonna switch sides, whatever. I'm agnostic about what specific tools you use, but please use some tools. [00:17:00] Please use some structures that make sure we don't fall prey to the very real cognitive and group-dynamics biases that we're so prone to falling into.

duncan: I love that guidance, Amy, on how to guide a deep discussion around an important decision. I'm curious, though, to also take a step into how a leader should evaluate even an individual analysis. I think today, leaders are overwhelmed with dashboards and reports, and messages hitting them from all angles with analytical claims of one sort or another. And I'm curious if you have guidance on what clues to look to, to understand if an analysis is really credible, if they should believe it, if they should act on it, or if they should maybe organize the kind of meeting you're describing to actually go deeper.

amy: I'll start by saying: have a healthy skepticism about anything you're being told by an expert with data, so long as the stakes are high and the uncertainty is real, right? [00:18:00] So again, for unimportant things it doesn't matter; you don't have to stop and pause and dig deep into everything. But I think you should assume, and not because people are ill-intentioned or deliberately misleading you, but we're all in our own thought worlds. It's like Marshall McLuhan said: we don't know who discovered water, but it wasn't fish. Right? Because of the things that we're swimming around in as experts, or the problems we get all consumed by, we fail to see, we're not able to see, the things we're missing. So assume that any expert who comes to you as an individual saying "this is the right answer" is missing something, and engage with them in a learning-oriented way to probe: what if it's this, or what if it's that? One of the things Mike and I write about in this article is giving people a set of good questions they can ask to probe the thinking further. And this isn't meant to be insulting to your [00:19:00] experts. It's meant to actually be a compliment: you respect them enough to want to stress-test the thinking and open it up and figure out what we might be missing and where we might have to think a little more deeply instead.

hugo: Those are a wonderful set of heuristics. I am interested, 'cause we're talking about data-driven decision making, right, and tying data to the decision function, in how technical we need to get in thinking through our conversations. So for example, I know that in my work and in your work, sample size and time horizon are really important to think about, right? So I'm wondering why these are important, what else is important, and how do you judge whether these things are strong enough to actually make a high-impact decision?
mike: Yeah, so sample size is one piece of it, but it's maybe helpful to zoom out a little bit and really think about what the things are that go wrong in a discussion. Building on both of those questions, and also on what Amy was just saying, one thing [00:20:00] that a leader could ask themselves when having a discussion and trying to encourage fruitful conversations around data analytics is: what mistakes am I likely to make? And I just wanted to sit on this for a second, because when thinking about some of the examples Amy was giving, and Amy used the term qualitative data, I think we're often thinking, and it makes sense, Duncan, given your data science background, about the context that we often work in: it may be a tech company with large data sets and causal inference. But these problems actually happen all the time, whether or not you're running your own experiments and whether or not you're talking about an experiment at all. Think about the Cuban Missile Crisis example Amy was giving. It's the same flavor of questions that you might ask yourself as a leader: am I disproportionately believing confirmatory evidence? Am I giving people room to probe and ask questions about what we might be missing? Are my beliefs accurate, or might I be [00:21:00] over-optimistic about an approach that I was already excited about, or about one particular strategy that our company might use? Those things happen whether you're talking about an experiment run within a large-scale tech company or a policymaker trying to decide on a strategic approach for how to engage with another country. It's the same set of things that leaders really should be thinking about when trying to encourage these types of discussions.

amy: Let me add that qualitative data are smaller in sample size, but the essential logic is the same, right? You're looking for the signal amidst the noise, just as you do with statistical analysis, and you are very likely to be skewed by, as you said, the tendency to seek confirming evidence, or to stay cognitively hooked by your initial judgment or your initial analysis. [00:22:00] I think sometimes it's easier for people to see how we do those kinds of things with qualitative data, and then we think, oh, but when it's quantitative, it's bulletproof, it's right, and not realize that we make some of the same kinds of errors.

mike: And on the flip side, building on some of the opportunities that companies have around data: it's common to think, oh, we're living in a world of big data, what you really need is more complicated analytics. Sometimes that can be valuable, but you also see gains that come from really simple observations, where somebody asked about a data point or a source of information that could be useful and was being overlooked or not used. Just to give a concrete example, there's a nice case study on this, and people have certainly talked about it in various places: if you think about the evolution of the three-point shot within basketball, it's not that it was really complicated to think that you might wanna take more three-point shots from close to the line [00:23:00] rather than deeper two-point shots. It's more that you just need to think about that at all and ask: are you optimizing based on the data that you have access to?
And that type of thing didn't require a data science team to run an analysis, but it did require somebody to think about: what is the problem we're trying to solve? What is the data we might be overlooking? So I don't wanna lose sight of the fact that there's important data out there, and it doesn't always come in the form of advanced analytics or a causal inference problem. Now, I think where you're going is then to say: a lot of times this also does come up in causal inference problems, or in analyses where, say, sample size, which you mentioned, comes up as an issue. There, the types of things you might wanna think about as a leader, especially on a causal claim, and I've mentioned this before but I'll just say it again 'cause I think this gets you a large part of the way there, is to really think about internal and external validity. So on correlation versus [00:24:00] causation: is this an experiment? If it's not an experiment, what's the approach they used to overcome any potential confounders? What are the potential confounders? How serious are they likely to be? Really have a little bit of discussion about that. What outcomes were measured? Do those outcomes reflect the things that we as an organization care about? If you really dive into analytics, you can sometimes see experiments that are run for just a few days, or ten days, or whatever it is, while the organization is trying to optimize for some long-run goal that it has, and there can be important gaps between the long-run outcomes and the short-run outcomes. So try to understand what those gaps are and really think about how to overcome them. And on sample size, you can think not just about what the point estimate is, but about what the confidence interval is, what the range of estimates might be, and how that is likely to affect the decision that we as an organization would make, depending on where in that range the truth might fall. And that's a type of thought [00:25:00] experiment that any decision maker could make.
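As a concrete illustration of that thought experiment, here's a minimal Python sketch. The two-arm conversion numbers and the half-point launch threshold are hypothetical, not from the episode; the point is simply to check whether the decision would change across the confidence interval rather than acting on the point estimate alone.

```python
# A minimal sketch of the "confidence interval" thought experiment:
# would the decision change depending on where in the plausible range
# the true effect falls? All numbers and the launch threshold are hypothetical.
from statistics import NormalDist

def decision_sensitivity(conv_a, n_a, conv_b, n_b, threshold=0.005, alpha=0.05):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    lift = p_b - p_a                                    # point estimate of the lift
    se = (p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)             # about 1.96 for a 95% interval
    low, high = lift - z * se, lift + z * se
    return {
        "point_estimate": round(lift, 4),
        "ci_95": (round(low, 4), round(high, 4)),
        # Would we still ship if the low end were true? If the high end were true?
        "ship_if_low_end_true": low > threshold,
        "ship_if_high_end_true": high > threshold,
    }

# Hypothetical A/B test with 10,000 users per arm: the point estimate clears the
# threshold, but the low end of the interval does not, so the decision is fragile.
print(decision_sensitivity(conv_a=520, n_a=10_000, conv_b=590, n_b=10_000))
```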
duncan: I think it's really interesting and powerful how you describe some of the places that data-driven decision making can break down. Something I find very interesting, as a former data science leader myself, is that you don't mention stuff like statistical significance as a major clue to watch for. Rather, you talk about how you need to be careful about watching confidence intervals. I'm curious to get your reaction to this, because I think a breakdown in communication between data science teams and leaders can often occur where the science team says, oh, something's not stat sig, you can't look at this, it's not even worth taking a peek at. But actually it could be an incredibly important qualitative data point around the performance of a product or around something that the company is pursuing. So I'm curious if you have any guidance there, on that messy middle between something that is truly quantitatively precise and the decisions that leaders really need to make.

amy: I'll say one thing about it, which is [00:26:00] that when you have a situation where someone senses that a factor or variable might be important, but it isn't showing up in some analysis as statistically significant, it can be that there's a missing moderator, right? It can be that there really is, just as that person's intuition is telling them, a significant relationship, but it's being drowned out by non-similarities in the units included in the analysis. And then once you realize, oh, we have a sort of population one and a population two, we see that the relationship is beautifully strong and significant in one population but not the other. That kind of thing happens all the time. So I think the advice from a leadership perspective is: pay attention to weak signals. They may be nothing, but paying attention doesn't mean dropping everything and assuming they're important. It means look into them. Be willing to explore, from a data-analytic perspective and [00:27:00] conceptually: what might we be missing, what hidden variability might be here that might help explain why this factor isn't showing up in the way that we thought it might? Rather than just saying, ah, it's not significant, so let's move on, let's look at something else instead.

hugo: I appreciate all of that context so much. And I do agree that statistical significance is something we as a culture are perhaps overly obsessed with. I'll link to this in the show notes, but one of my favorite statistical papers is called "Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results." It was nearly 30 teams analyzing the same dataset to address the question of whether soccer referees are more likely to give red cards to people of color, and the results: I think 30% of teams didn't see a statistically significant result and 70% did, on exactly the same data set, with slightly different analytic choices. It gets worse. What happened then is everyone was given the option to look at everyone else's findings and research. [00:28:00] Everyone chose to do that, and most people were more certain of their own results after looking at the others'. So that really, I think, speaks to the point: of course all the analytics and data are incredibly important, but we need systems and organizations which take this as some sort of input and have a bunch of other mechanisms feeding into the decision function, right?

amy: That's such a good story, and it points to that real need to help educate people to know that you will make these errors, right? Nothing could be a more stunning example of that than the fact that once you gave me a chance to see the other sides, I got even more convinced. That's just human nature, and deeply non-rational if you think about it, right? I should have at least softened my view a little bit from seeing the, I imagine, smart, technically accurate analyses that you ran versus the one I ran. But we tend to have these mechanisms in our brains that harden us against disconfirmation, rather than going, [00:29:00] wow, cool, you see it differently, I wanna learn from you.

hugo: Yeah. And confirmation bias plays out in the analytic process too. If you think you know what the result should be, you may munge your data or remove missing values without...

amy: ...even knowing it, whatever. Without even knowing, without even noticing, that you are biasing the results to confirm your hypothesis or to support your hypothesis. You can be doing that unconsciously, not even consciously.

hugo: Absolutely. And I am interested: a lot of what we're talking about is trying to generalize from incomplete information, and I know you two have worked on and thought a lot about
how to make decisions from findings which may have come from other markets or other teams. I'm wondering what you've discovered there, and whether you have any advice for people trying to generalize from other markets or teams to make important decisions.

mike: Yeah, so one thing that I think is coming up across these points, and it goes back to the statistical significance point and runs through how we interpret other data and also how we generalize things, is taking a little bit more of a Bayesian approach to the way that you integrate information you're seeing from [00:30:00] different sources, whether it's how you incorporate one new data point you're getting, or whether you're looking across a bunch of things and trying to integrate them into a decision. When thinking about, say, the statistical significance piece, there are actually errors that run in both directions. You could worry about a false positive, you could worry about a false negative, and there are errors that can happen both ways: something can come out statistically significant but really be either economically unimportant or a false positive. But there are also situations where distinguishing between whether something has no effect, or whether we don't know what the effect is, is also important, because often we have limited data sets. There could be small samples, so you may say this thing didn't work because it wasn't statistically significant, and there is a difference between saying this thing is unimportant [00:31:00] and saying we don't know what the effect is. So just pause a little bit and think about: what do we learn? And that's the internal and external validity. So you're getting at the generalizability here, right? The external validity of the results: how is it gonna apply from that setting to the setting I'm interested in? And the internal validity, just asking: what do we learn there? How does it port over, and what have we also not learned? We may not have learned that there's a zero effect, or we may not have learned that there's an effect. We may have gotten a little bit of information one way or the other, and thinking about how that affects the range of plausible values you might see for something can be a helpful heuristic for people. In terms of generalizability, there are a lot of questions that people could start to think about. You could think about: what is the setting that somebody was looking at? How similar is it to ours? Do we know anything about what the mechanism is? 'Cause if we know not only whether it worked but also why it worked, that can tell you a little [00:32:00] bit about whether that "why" is likely to have relevance to the setting that you're thinking about. Another one that can be helpful to think about at times is: when you run this on lots of different populations, do the results tend to be similar and consistent across all these different groups, or are they wildly different? Because if they're wildly different, that may be some clue that there's something unique about the way something works in one setting that may make it have a different effect, or no effect, in a different setting.
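To make that "no effect" versus "we don't know" distinction concrete, here's a minimal Python sketch. Both results below are non-significant at the 5% level; the estimates, standard errors, and the "practically negligible" cutoff of 0.01 are hypothetical, chosen only to show that one is an informative null and the other is simply inconclusive.

```python
# A minimal sketch of the distinction between "this had no effect" and
# "we don't know what the effect is." Both hypothetical results below are
# non-significant, but they support very different conclusions.
from statistics import NormalDist

def interpret(estimate, std_error, practical_min=0.01, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)
    low, high = estimate - z * std_error, estimate + z * std_error
    if low > 0 or high < 0:
        verdict = "statistically significant"
    elif abs(low) < practical_min and abs(high) < practical_min:
        # The whole interval sits inside the "too small to matter" zone.
        verdict = "precise null: any effect is practically negligible"
    else:
        verdict = "inconclusive: can't distinguish 'nothing' from 'something that matters'"
    return {"ci_95": (round(low, 4), round(high, 4)), "verdict": verdict}

# Hypothetical case 1: large sample, tight interval around zero -> informative null.
print(interpret(estimate=0.001, std_error=0.002))
# Hypothetical case 2: small sample, same point estimate, wide interval -> inconclusive.
print(interpret(estimate=0.001, std_error=0.030))
```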
duncan: Something I find really interesting about this discussion is the importance of having a highly realistic perspective on uncertainty, and maybe a recognition that there is a lot of uncertainty in the world and that we should get together and discuss that uncertainty in a rigorous and thoughtful way. I find that both seems totally right and yet pretty at odds with, I think, the expectations of leaders, where often leaders are expected to have opinions out of the gate. You don't come into a meeting to collectively make a decision. You come into a meeting [00:33:00] with a decision made, to convince your colleagues of why that decision is correct. That's been true in a lot of organizations I've been in and with different types of leaders. So I'm curious if you have any guidance or reflections on how to navigate that, and how we become better leaders who recognize that uncertainty ourselves.

amy: I think that's such a great question and a great issue. And indeed it's very common. In fact, it's almost the default, right, which makes it almost universal, that you come into a decision with an opinion and, consciously or unconsciously, think of your role as convincing others you're right, which is human and deeply problematic. Because you, by definition, have a valid but only one point of view, one set of experiences, one body of expertise. So it's as if we have to retrain ourselves as contributors and as leaders to know that yes, I have a valid point of view, and I am most certainly missing something. That flips the frame [00:34:00] from how do I get in there and convince you I'm right and win the game, to how do I come in and contribute what I know and learn as much as possible, so that we can come to a better place, maybe even a place I didn't anticipate, and I'll be glad about that. And I think the short answer is: you have to do that deliberately, explicitly. You gotta make it discussable. As a leader, you've gotta say those kinds of things often: we've never done something exactly like this before, we're in new waters, we really need to get input, debate the thing, and figure out if we can have greater clarity going outta this meeting than coming in. And I know we all had strong views, or I suspect we all had strong views, coming in; let's just suspend them. Let's play the game where we suspend them and we decide instead to roll up our sleeves and learn. It can be played, right? It won't happen automatically, but it truly is possible to do this.

hugo: You just [00:35:00] reminded me. Do you know the book Trillion Dollar Coach, about Bill Campbell? No? So this is a book by Eric Schmidt, Jonathan Rosenberg, and Alan Eagle. Bill Campbell mentored Steve Jobs, Larry Page, and Eric Schmidt in one-on-one situations for a long time, and he actually comes from a football coaching background, NFL or college football, something like that. But one of his whole things was telling executives that one of their most important jobs in a meeting was to find the best idea and give it ground, which allows everyone to feel like they're able to contribute so that they can collectively find the best idea. Not come in with the best idea, but find it together.

amy: I love that: it's a treasure hunt instead of a contest. Because if it's a contest and I'm the boss, I'm gonna win, let's face it. But if it's a treasure hunt, I'm motivated to find the treasure. So are you. Let's do it.

hugo: Without a doubt. So when we started chatting, I did frame it as: we're in a world where there's increasing automation and [00:36:00] increasing desire, whether productive or not, for automation. I love that we're grounding this more in how we make decisions collectively.
I'm just wondering, with increasing algorithms and now LLMs everywhere, how can we think about coupling these incredible systems with human judgment and having humans in the loop in productive ways?

mike: Yeah, I think there's obviously growing use of algorithms and LLMs in lots of situations, in business and in personal research as well. And at some level there are questions that we've been discussing that are gonna be relevant there as well. And Amy, you nicely said this before: if you're deciding where to go for lunch once, you may not need to revisit that; you can put something a little bit on autopilot. You don't need a 30-minute discussion to decide. So the stakes of a decision may be one relevant piece of it, but so is having a good framework for understanding [00:37:00] what a human is able to add that an algorithm doesn't have. Maybe in some contexts algorithms are better at reducing stereotypes or bias. They may be better at processing hard data that both the algorithm and the human would have access to, but the human may also have more ability to process soft information, or information that doesn't make it into an algorithm. So in those types of contexts, having a human make a decision on the basis of an algorithm, and whatever other discussion or facts you wanna bring into it, could be helpful to do. How about you, Amy?

amy: Yeah, and it's obviously a very thoughtful question, and one where we're in the nascent stages of trying to navigate these dynamics. I guess I would say we have to recognize the very real risk of our laziness, right? The ease of turning a problem over to an LLM, for example, rather than wrestling through it, can be [00:38:00] irresistible. So I think we have to remind ourselves, remind people in companies, that that creates risks. It creates risks of continuing to generate things that are consistent with the past and missing something else humans can do well at times: envisioning new possibilities, or really off-the-wall ideas that are worthy of a little bit of time and thought. So raising awareness of the risks of simply turning our thinking over to AI in various forms is something I think we need to do more of, 'cause there's an awful lot on the other side. There's a lot of happy talk, right? A lot of "this is going to make so many things better," without thinking through what unintended consequences there will surely be.

hugo: Without a doubt, and I can give maybe a non-controversial example, outside the world of executive or team decision making. I write a lot, and since the introduction of [00:39:00] LLMs, I've used them to help with writing, and there have been times where, due to laziness or priorities shifting or whatever, I've outsourced a bit of the thinking that I would otherwise do to an LLM, and I haven't got the fully formed idea. Even the act of writing, or having a conversation to develop the idea, is incredibly important. Whereas using LLMs for ideating, copy editing, a whole variety of things: fantastic. But perhaps you don't want them to create the idea for you.

amy: There is something important about the struggle, and not to take it to a sort of spiritual dimension, but when we really struggle with material or ideas or arguments in trying to make a good decision, the struggle itself deepens our understanding.
It shows us pieces that we didn't anticipate before and makes it stickier, and then we have more of a command of it.

hugo: Without a doubt, and I know Duncan has a great question. I do wanna say that just reminded me: one of my favorite Python books [00:40:00] from learning to program Python back in the day is called Learn Python the Hard Way. And that's because he was like, look, sometimes you just need to do something somewhat the hard way in order to become proficient.

duncan: Right. I do think a theme of our discussion has been this idea of decision or data fluency, and identifying the things you really need to get right and spending effort on those, and maybe also identifying the things you don't need to get right and moving quickly through them. I'm curious about what we should expect of leaders: what level of decision or data fluency we should expect, especially in non-technical leaders who maybe don't have a background in statistics, and how we can educate them. What are the ways we can help them improve these skills, beyond saying this stuff's important? Because it's obviously important.

mike: So maybe I'll start with a little bit of background on how I started developing this course on data-driven leadership, which actually had a lot to do with this [00:41:00] question. One thing that had been top of mind for me is thinking about, in a world where MBA students are going and working at companies that are running more experiments and have more data than they did before, what is the type of thing that somebody should know and be able to bring to a discussion or analysis? It might be tempting to teach a course like that the way you would teach a PhD course, but without going into as many of the details as you otherwise would. Another way you might do it is as a slightly more advanced version of an undergraduate course. But I didn't wanna do that in the course, and what I've been thinking about is that an MBA, or somebody who's going to be in a business role at a company with data, is doing something different. I could talk about some of the things that I've seen in organizations that motivated the direction I ended up going. I remember the COO of a tech company saying, oh yeah, we've got an experimentation team [00:42:00] that's running tests, but I don't really engage with that at all. Or people in business roles saying, I don't really look at that, we have data scientists who are working on that. And it was clear that you can't just say, oh, we've got data scientists working on that, they tell me the answer, and then I just do whatever the data says, 'cause the data doesn't really speak, right? It kind of starts a conversation that has to be interactive. But what I think is a helpful heuristic for a business leader is to ask: what are the questions that I need to be able to answer that relate to the analytics? How do I approach those pieces, whether it's causal inference or working with algorithms? What do I need to understand, and how do I translate the technical parts into managerial questions that are relevant to me? Something that's come up a couple of times in this conversation is thinking about how much weight to put on a specific finding, or about which outcomes you're measuring. You don't [00:43:00] need to have a full causal inference course
to be able to ask good questions about what you're measuring and what the gap is between what you're measuring and what you care about. But you do need to know that those are things that are going to affect the outcomes that you're seeing in an analysis. So being able to have good conversations about data, I think, is very valuable for business leaders, irrespective of what your role is, even if you're just reading the news, looking at a report, or looking at something somebody sends to you in an email. It's good to be able to ask basic questions about what the source of the data was, how it was created, what the outcomes are. Was it causal? How do we know? How does it apply to my setting? What was that specific setting? We actually tried in this article to put together a little table of questions that you could ask specifically around that. So I think that's a useful thing for a business leader to be able to think about. Then there's a second set of things, a growing number of things that as a business leader [00:44:00] you could do without having the ability to run a full, highly technical analysis, where you figure out: what parts of an analysis should I be able to do myself? If I'm working at a consulting company, should I be able to run an experiment across a large set of grocery stores and understand what the impact is? There's a different set of skills that you would wanna have for that. That's also slightly different from training somebody purely for data science, but also different from just saying, hey, you need to be able to talk about data. So I think as an individual going into a business role, one would wanna think about: what's the role you're looking to go into, and what are the things that are going to help you better engage in that role? For some people, having good conversations around data is a lot of it. For other people, it may be: here are the types of analyses that I wanna be able to do myself, and that could be design a basic experiment, run an experiment, analyze an [00:45:00] experiment, develop a simple algorithm, and figure out what the scope of those things is and how that fits into your job. So I don't think it's necessarily a one-size-fits-all thing; it's really about thinking about how that relates to your objectives, fitting that in, and thinking about it more as "what works for me" rather than "what four things does every data scientist, or everyone who's not a data scientist, need to know."

amy: It's a way of thinking, right? It's a way of thinking analytically and rationally about signals and noise, and becoming acutely aware of the traps that are just so likely to happen without a little bit of the opportunity to pause and ask some deeper questions. And as Mike was talking, because he's right: the five problems that we identify in the article, conflating correlation and causation, or misjudging the applicability or generalizability of results, are really [00:46:00] logical errors that everyone recognizes, even if they're not trained in statistics or data science. They recognize them as errors or traps, and then you can attack them with statistical skill, but you can also attack them, as we describe, with thoughtful, useful questions that you can ask other experts, and then together get somewhere that neither one of you, neither the data expert nor the company leader, could have gotten to on their own.

hugo: Yeah, all of those are wonderful.
And I think that speaks to a lot of the rational, logical aspects. As we've discussed earlier, I'm also interested in, I suppose, everything we've learned from behavioral economics as well: what are the failure modes, what can happen in meetings that can get in the way, and how do we fix them? I don't want to anchor too hard, and I'm sorry to anchor with the anchoring effect itself, but the anchoring effect is one of the biggest ones, where the first thing that's said in the meeting can be the focal point of the rest of the [00:47:00] meeting. So what are the concerns, and what's our antidote or medicine?

amy: I think the antidote is raising awareness of the pitfalls, right? And there's only a handful of them. The anchoring effect would be one; the hierarchy effect would be another, right? If a senior leader speaks up first, and perhaps even passionately, about the thing they think is the right way to go, guess what? We're probably going that way. And it may be completely wrongheaded and easily disproven, but oddly enough, we are that irrational in groups and in social systems. And what you do when you get everybody aware of that handful of pitfalls is that then we have language. People don't have to say, ooh, I think the boss is speaking too much. You just say "hierarchy," or you say "anchoring." We've got a language. We can even post them on the walls of our meeting spaces, physical or virtual. It's like a menu of [00:48:00] traps. And because we're so committed to creating value and having our company do well, we're more willing and able to speak up to flag: oh, could we possibly be falling into the anchoring trap? So when we have that shared language, and when we have greater awareness that these are very real problems that even very smart people fall into, then I think we're better equipped to fight them. So that's the antidote.

hugo: Yeah. And of course I love that those are the antidotes, because I think a lot of us thought that data could be the antidote as well. There's the HiPPO effect, right? You refer to it as hierarchy, but the highest-paid person's opinion. We all thought that with online experimentation and robust results, perhaps that would change. Of course, if the highest-paid person has an opinion that contradicts the data, quite often it's that opinion that will be followed as well.

amy: Right.

hugo: I am wondering: if every leadership meeting began with one standing question to try to keep [00:49:00] decisions grounded in evidence and argumentation, what should it be, and why?

amy: One question that I would like to start a meeting with is: what are we missing? What other options might we have forgotten altogether? And then beyond that: okay, what data do we have? What do we not yet know and wish we knew? How might we find out? So, starting with an exploratory mindset and resisting the urge to get to the confirmatory mindset until later, until we've really had higher-quality conversations.

mike: I agree with that. And depending on the meeting, if it's a meeting where a recommendation is being made, or you're making a recommendation, really concretely connecting the recommendation with the evidence you base it on, and then inviting those types of questions, like: what am I missing here?
What other data might we look at? Who has other perspectives, and why? Really genuinely inviting them could be pretty helpful.

hugo: Great. Thank you both so much for such [00:50:00] a wonderful conversation. I've learned a huge amount about how to approach not only data-driven decision making, but how to organize our culture around it as well. So I appreciate your wisdom and expertise, and thank you for coming and sharing your findings with us.

amy: It was a pleasure. Thanks for having us.

hugo: Thanks so much for listening to High Signal, brought to you by Delphina. If you enjoyed this episode, don't forget to sign up for our newsletter, follow us on YouTube, and share the podcast with your friends and colleagues. Like and subscribe on YouTube, and give us five stars and a review on iTunes and Spotify. This will help us bring you more of the conversations you love. All the links are in the show notes. We'll catch you next time.