The following is a rough transcript which has not been revised by High Signal or the guest. Please check with us before using any quotations from this transcript. Thank you.

===

Lis: [00:00:00] In its best incarnation, I think it is a technology that has the ability to enhance and extend how we think, how we create, how we collaborate, how we get things done. But it's also ultimately a technology that is made to be used by people, by us. And so I think the promise of AI can only really be fulfilled by understanding how and why we think and act the way that we do. And I think if AI and data science leaders are able to dig into that, we'll be able to solve adoption problems a lot faster if we can think about why people are using or not using this technology. And I also think a knowledge of human behavior is going to help us improve how AI itself is built, how well aligned it is to human values and preferences, but also how well we're able to manage how it shapes us and how it shapes [00:01:00] society, and to be deliberate about that as well.

Hugo: That was Lis Costa, Chief of Innovation and Partnerships at the Behavioral Insights Team, outlining why the immense potential of AI can only be fulfilled by understanding, well, us. Lis returns to High Signal to discuss her team's new paper, AI and Human Behavior: Augment, Adopt, Align, and Adapt. In this conversation, we explore why data and AI leaders must apply the lens of behavioral science to their work. We dive into the core idea that AI adoption is not binary, it's a spectrum, and organizations must move past shallow use to achieve deep integration. We look at why the early adoption habits we form now are disproportionately consequential, the AI-era equivalent of the enduring QWERTY keyboard, and how to diagnose stalled adoption by looking at issues of motivation, capability, and trust. We also cover how to guard against cognitive atrophy by being highly intentional about which [00:02:00] tasks we offload to AI, and how to prevent the reinforcing of biases in one-on-one interactions, what Lis and her team call chat chambers. This is a critical discussion for leaders working to build, deploy, and align AI for maximum human and business impact. If you enjoy these conversations, please leave us a review, subscribe, and share it with your friends. Links are in the show notes. I'm Hugo Bowne-Anderson, welcome to High Signal. Let's now check in with Duncan Gilchrist from Delphina before we jump into the interview. Hey Duncan.

Duncan: Hey Hugo. How are you?

Hugo: I'm well, thanks. So before we jump into the conversation with Lis, I'd love for you to tell us a bit about what you're up to at Delphina and why we make High Signal.

Duncan: At Delphina, we are building AI agents for data science. Through the nature of our work, we speak with the very best in the field. So with the podcast, we're sharing that high signal.

Hugo: We covered a lot of ground with Lis, so I was wondering, Duncan, if you would let us know what resonated with you the most?

Duncan: Super fun to have Lis back on the pod. You know, in Silicon Valley, we are always [00:03:00] thinking of value creation from AI as really an engineering problem: is it easy enough to use, fast enough, slick enough? And Lis comes at it from such a different angle. It's the sociology that's the bottleneck, not the technology. That take almost feels contrarian, but also pretty clearly true.
And it's so interesting to take that step back and think more about how we work as humans, how we think, how we think about how we think, the metacognition. Let's get into it.

Hugo: Hi there, Lis, and welcome to the show.

Lis: Hi, Hugo. It's lovely to be here.

Hugo: It's so great to have you back, and you are the first guest we've had back for a second time, so welcome back.

Lis: Oh, what an honor. Thank you. That's very kind.

Hugo: Absolute pleasure. How could we not, with the recent paper that you've put out, AI and Human Behavior: Augment, Adopt, Align, and Adapt. Before getting into the meat of this paper, one of the things I love about it is [00:04:00] that it really frames how we think about AI through the lens of behavioral science. And for those who don't know a lot about behavioral science, I'm just wondering: why should data and AI leaders care about behavioral science, and what's the business case for it?

Lis: Yeah, absolutely. It's a great question. So behavioral science is the study of how we make decisions and how we behave. And when I think about AI, and generative AI in particular, it's such an incredible technology and it has so much potential to transform scientific discovery, but also to transform how we live and work every day. But in its best incarnation, I think it is a technology that has the ability to enhance and extend how we think, how we create, how we collaborate, how we get things done. But it's also ultimately a technology that is made to be used by [00:05:00] people, by us. And so I think the promise of AI can only really be fulfilled by understanding how and why we think and act the way that we do. And I think if AI and data science leaders are able to dig into that, we'll be able to solve adoption problems a lot faster if we can think about why people are using or not using this technology. And I also think a knowledge of human behavior is going to help us improve how AI itself is built, how well aligned it is to human values and preferences, but also how well we're able to manage how it shapes us and how it shapes society, and to be deliberate about that as well.

Hugo: Yeah, I love it, and I'm really excited to get into all the moving parts. And one thing that we'll get to: I used to think of no adoption versus adopted throughout the organization, and I knew there'd be a spectrum, of course, but the way you frame it as no use to shallow use [00:06:00] to deep integration, and look at friction points along that journey, is so, so useful. To step back, you've organized this framework around four papers, which of course we'll link to in the show notes, along with all the related papers. You've organized the framework around augment, adopt, align, and adapt. Could you walk us through the logic of why these four distinct-ish areas?

Lis: Yeah, absolutely. Maybe I can start by talking about why we wrote this paper in the first place. So as people who've listened to the previous episode of the podcast know, the Behavioral Insights Team has been using machine learning techniques for almost a decade now. So we've been using artificial intelligence in the way that we do research, and also, more recently, we've been using generative AI to really enhance behavioral projects, doing things like using it to enhance qualitative research by using [00:07:00] AI interviewers, by experimenting with synthetic participants, but also using it to make behavioral interventions more interactive, more dynamic.
We as an organization have been experimenting with how you use AI to change and shift human behavior, and to understand it as well. But what we really wanted to do in this paper is take a step back and think about a conceptual framework for what behavioral science as a discipline has to offer AI. And we've organized this framework around these four pillars because we really wanted to explore the whole spectrum, from, if you like, the micro to the macro implications of behavioral science for AI. So if I just step through the four pillars quickly. In augment, we are looking at the development of AI models themselves: how does a better understanding of human cognition [00:08:00] help us build more flexible, more intelligent AI models? And then we move from model development to model deployment. When we are thinking about these models getting out in the world, when they hit real human beings: firstly, what are the behavioral challenges that get in the way of adoption? And we know that technology adoption is fundamentally a behavioral challenge. So we look at what the barriers to adoption are, but equally, what behavioral science has to say about how we can drive deeper, faster adoption. Then in align, we are looking at how AI models can be better aligned with human behavior: how does AI influence our preferences, our values, our behavior, and equally, how are we influencing AI models? And then in adapt, we are zooming out and thinking about the big societal [00:09:00] questions of how AI shapes our social norms. How does it shape our norms and relationships with machines? How does it shape our norms and relationships with each other? And also, how can we, as human beings, as citizens, help shape the way that AI evolves and how it influences our society?

Hugo: It's tough to choose which one to jump into first 'cause they're all so interesting and critical. I'd like to talk about adoption to start. In your paper, you're very clear that we're at a critical window before AI adoption patterns get locked in. So I'm wondering if you can tell us why that's the case and what that means for someone leading AI implementation right now.

Lis: Sure. Hugo, can you look down at your keyboard and tell me what kind of keyboard it is?

Hugo: It is a classic QWERTY keyboard.

Lis: Great. The QWERTY keyboard was actually [00:10:00] invented in 1874.

Hugo: Wow.

Lis: Which is wild, right? It was invented in 1874. It was designed to stop mechanical type bars from jamming. It was designed for a technology that is largely defunct, and it has been ported into modern tech, whether it's touch screens or laptop keyboards. And over the past 150 years, lots of versions have been invented that are much better than the QWERTY keyboard, that are more ergonomic, that help us type faster. There was one particular one invented in the 1930s that was essentially shown to outperform the QWERTY keyboard in almost every way. But we're all sitting here today with a QWERTY keyboard in front of us, and on our phones, because of the cultural familiarity, and because there's a really steep learning curve to using a different keyboard and learning a different layout. It's literally our muscle memory that [00:11:00] keeps us tied to this kind of inferior technology.
And I think that just shows you that these early patterns of adoption, driven by early experimentation and early use cases of AI, are actually disproportionately consequential, because they're going to influence the way that organizations embed AI. They're going to influence the way that norms evolve in terms of how we work with AI, particularly in workplaces, but also in our lives. And it means that the early choices about design, about privacy, about collaboration with AI are really important and really consequential. And in my view, those are questions that we are paying insufficient attention to, in terms of thinking about how they might influence not just the next six to 12 months, but actually the [00:12:00] next six to 12 years and beyond of how we use and collaborate with AI, and the extent to which we trust it.

Hugo: That is both super interesting and scary, right? Just thinking about all the things we adopt and use. And I've never been able to type as fast as I'd like to, and I'll definitely blame... it's not your fault, I'm going to blame the whole 19th century for that one. So in terms of the adoption continuum, we mentioned this briefly before, but from no use to shallow use to deep integration, can you walk us through these stages, and walk us through where most organizations are actually stuck and why?

Lis: Yeah, absolutely. So when we started digging into the literature and the existing evidence on AI adoption, we were really struck that a lot of the discourse on adoption at the moment is binary: are people using AI or not? Are organizations using AI or not? [00:13:00] And that's an important question, but I think it's a question that we're somewhat beyond in lots of ways, because in fact the majority of organizations, particularly in the knowledge sector, are using AI. The question is more: is it being used thoughtfully and effectively? And so we've sought to put adoption on more of a spectrum, from no adoption, where organizations or individuals are not using AI tools at all; through to shallow adoption, which is using AI for fairly rudimentary tasks, mostly automation, so things like taking meeting notes or summarizing documents, where it can be very useful but it's quite a shallow, basic use of AI; through to what we've called deep adoption, which is really using AI as a partner and an augmentation in an organization. So embedding it in [00:14:00] workflows, being really thoughtful about how tasks can be tackled in collaboration with AI, rather than just thinking about what can be offloaded and what can be automated.

Hugo: Super interesting. And what are the main points of friction going from no adoption to shallow adoption, and then to deep adoption?

Lis: Yeah, so we talk in the paper about lots of different behavioral barriers, and we talk about those barriers across three different dimensions: motivation, capability, and trust. For motivation, we're thinking about whether people in the organization are actually driven to use AI. Do they understand where it's going to be beneficial and where it's going to be less so? For capability: do people have the skills, knowledge, and confidence to use AI and use it well? And for trust, there are a couple of different levels to this. Do people think [00:15:00] that the outputs they're going to get from AI are trustworthy? Are they accurate? Are they useful? But I think there's also a sort of deeper issue of
whether they trust AI systems generally. And I think that gets into some issues around identity, and around the societal impacts of AI as well.

Hugo: That makes a lot of sense, and I'll link to our previous podcast for people more interested in going down how we think about designing systems to incentivize certain behavior: choice architecture, Nudge, and these types of things. But something I found fascinating, and a lot of things I found fascinating in this work make sense in the end, is that you found people are more willing to use AI to prevent losses than to achieve gains. So I'm wondering how leaders should be framing AI deployment differently based on that.

Lis: Yeah, so this is a really interesting experiment. It's actually not one done by us, but one that we found in the literature when we were writing this [00:16:00] section, and it's a fairly small-scale experiment. In fact, a lot of the work in this area is pretty early stage and experimental, and what we're trying to do in the papers is really highlight some of the interesting experimentation and research happening across the board. But in this particular experiment, the researchers conducted a randomized controlled trial with 500 participants, and what they did is give them an image that they had to tag with keywords based on what that image was showing. And they had a choice: they could either delegate the task to a human or delegate the task to an AI, and the researchers also randomized the quality of that assistance. So you either had a very good person or a not-so-good person, a very good AI or a not-so-good AI. And then they tested two different ways of framing this. One was a gain frame, where they started [00:17:00] off with nothing. So it was a monetary incentive: they started off with $0 and then were paid 50 cents every time they got one of these image tags correct. And then in the other, the loss framing, they started out with an endowment of $10 and they lost 50 cents every time they got it wrong. And what they found was that in the gain framing, there was algorithmic aversion. People preferred to delegate to a human rather than an AI; even when the quality was better with the AI, they preferred to delegate to a human. And then what they found in the loss framing was that it actually flipped around. People preferred to delegate to an AI when they were losing 50 cents every time they got it wrong. And this is also consistent with longstanding findings in behavioral science that we are more sensitive to losses [00:18:00] than to equivalent gains. And again, as I said at the outset, this is early stage, it's experimental, but I think some of the inspiration that organizational leaders could take from this study and studies like it is about how to frame the benefits and risks of AI in an organization. And I think what this shows is that AI could be presented to staff as reducing risk and helping them manage risk and manage loss, and that might be an effective strategy, whereas a lot of organizations are focusing on efficiency gains and what can be the positive side of AI, which is also really important. But I think this shows that having a balance of the two could be an effective strategy.

Hugo: Absolutely. And correct me if I'm wrong, but this isn't just an amazing example of loss aversion at play; what it shows is that loss aversion beats algorithmic aversion as well.

Lis: Yeah. [00:19:00] Yep. In this case, it definitely did.

Hugo: Fascinating.
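To make the design of that experiment concrete, here is a minimal sketch, not the original study's code, of the randomization and the two payoff schemes Lis describes. The task length of 20 images and the seeding scheme are illustrative assumptions (chosen so that the $10 endowment and the 50-cent penalty make the two framings pay out identically); only the $0 and $10 starting points and the 50-cent increments come from the conversation.

```python
# Minimal sketch (not the study's actual code) of the gain/loss framing design.
# Assumption: 20 images per participant, chosen so both framings pay identically.
import random

N_IMAGES = 20          # assumed task length, not stated in the episode
PER_IMAGE = 0.50       # 50 cents per image, as described
GAIN_START, LOSS_START = 0.0, 10.0

def payoff(framing: str, n_correct: int) -> float:
    """Final payment for the same performance under each framing."""
    if framing == "gain":
        # Gain frame: start from nothing, earn 50 cents per correct tag.
        return GAIN_START + PER_IMAGE * n_correct
    # Loss frame: start with a $10 endowment, lose 50 cents per error.
    return LOSS_START - PER_IMAGE * (N_IMAGES - n_correct)

def assign(participant_id: int) -> dict:
    """Randomize one participant to a framing arm and an assistant-quality arm."""
    rng = random.Random(participant_id)
    return {
        "framing": rng.choice(["gain", "loss"]),
        "assistant_quality": rng.choice(["good", "poor"]),
    }

if __name__ == "__main__":
    # The two framings are economically identical for identical performance,
    # so any shift in willingness to delegate to the AI is behavioral.
    for n_correct in (10, 15, 20):
        print(n_correct, payoff("gain", n_correct), payoff("loss", n_correct))
```

Under these assumed parameters the payoffs are equivalent, which is the point Lis draws out: the flip from algorithmic aversion to preferring the AI under the loss frame is driven by loss aversion, not by the incentives themselves.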
I am interested in thinking through when we have AI as, I mean, thought partners, or you use the term metacognition as well. I am interested in how this augmenting actually works, and you have a wonderful breakdown in terms of how we think about system one and system two thinking. So maybe you could remind us what system one and system two thinking are, and how this relates to how generative AI tends to augment humans.

Lis: Sure. So a lot of behavioral science theory is based on dual process theory, which was pioneered by Danny Kahneman, the late Nobel Prize winner. At a very simplistic level, dual process theory says that we have two systems at play in our cognition. We have system one, which is fast, automatic, intuitive, and it's the system that we operate in the vast majority of the time as we're [00:20:00] navigating enormous amounts of information and complexity in the world. And then we also have system two, which is slower, more deliberate. It's where we do our deeper reasoning. And the reason this is really important is that system one, whilst it's fast and intuitive and great at spotting patterns, is also where we are prone to systematic bias. And a lot of the inquiry of behavioral science is about how those biases manifest and the impact that they have on our decisions and choices and behavior. Now, in more recent research on system one and system two, again, like adoption, they have often been presented as binary: at any point in time we're either in system one or we're in system two. And actually the more recent research suggests that these are also on a continuum, and that we flexibly shift our cognitive strategy from system one to [00:21:00] system two depending on the task ahead of us. So that's dual process theory. Now, the new generation of AI models use something very similar to system one and system two thinking, and they're built with that inspiration in mind. Many of them are incredible pattern recognition machines and operate a lot like system one thinking would operate. They're also able to mimic system two thinking in terms of deeper reasoning and cognition. And what we're arguing in augment is that actually we can take that inspiration further and think about how you build an AI system that, like us, is able to flexibly shift along that continuum between system one and system two. And it is that flexibility to match our cognitive strategy to the task [00:22:00] at hand that makes human intelligence so incredible.

Hugo: I love that. And something I also really appreciate about your work is that you take it even further than the current foundation models we have. You talk about, not the need for, but perhaps the path where neurosymbolic AI can help, and also the idea of having metacognitive controllers, which we've seen something analogous to with GPT-5 recently.

Lis: Yeah, exactly. And GPT-5 has a lot of the hallmarks of what we'd call a metacognitive controller. Maybe I'll just step back a second to talk about what we mean by metacognition. It really is that ability to match your cognitive strategy to the task at hand; it's really thinking about thinking. Let me make that real for you before we go into GPT-5. I think one of the best examples of trying to build metacognition [00:23:00] in people comes from Chicago, from the Chicago Crime Lab, and there they have a program called Becoming a Man.
And Becoming a Man is targeted at young people, mainly young men, who have been at risk of violence or have been involved in violence early on in their lives. And there's this brilliant exercise that they take these young people through, where they have them in a room, they put them into pairs, and they give one of them a ball. And then they tell the other person that they need to get the ball. So if you had the ball, Hugo, they would say to me, alright Lis, you've got to get the ball off Hugo. And inevitably they end up on the floor wrestling over these balls and trying to wrest them off each other.

Hugo: Well, 'cause it's my ball, right?

Lis: It's your ball, exactly. And so you are fiercely guarding this ball that you've been given, I'm trying to get it off you, we end [00:24:00] up in a physical altercation over the ball, and then time gets called. And they say to me, Lis, why didn't you just ask Hugo for the ball? And I'm like, oh, why would I ask him for the ball? And then they say to you, Hugo, if Lis had asked you for the ball, would you have given it to her? You've just been given this ball, it's not very important to you. So you're like, yeah, if she'd asked for the ball, I would've given her the ball. And what they're trying to teach them in this moment is that at any point in time you have a scenario in front of you, and you think you have an intuitive, quick reaction for how to react in that scenario. In this case, it's to wrestle over the ball. But what they're trying to do is build in a moment's pause to say: actually, in that scenario you have a lot more options available to you than you might first think. And if you can take a moment to think [00:25:00] about what those options are, what you really want to do, and what's going to be the best thing for you, you can make a different choice. And that's metacognition in action. It's pausing and thinking about what our options are, what strategies are available to us in this moment. And this is what GPT-5, I think, is aiming for. It's trying to be something that functions a lot like metacognition: you give it a question, and it seems to take a moment to think about what's the right amount of reasoning to apply to this question to answer it well. But it's still at a really early stage, and I think it's too soon to say how good the metacognitive ability of GPT-5 or other models is. But we think it's a really promising avenue, and there's lots of great work happening on this around the world, from, for example, the Thinking Fast and Slow Institute. And I think [00:26:00] there's a lot of opportunity for AI foundation model builders to work with behavioral scientists to think about how we build the cutting edge of behavioral science and our understanding of cognition into the cutting edge of AI models, so that they are ultimately more flexible, smarter, and also more human as well.

Hugo: That makes a lot of sense. The other thing, I suppose, about GPT-5 is it doesn't just do that; it actually allows you, as a user, the choice. It'll say "thinking" in thinking mode, or whatever it is, and it'll offer you the option to skip the thinking or to think deeper.

Lis: Exactly. And I also think this is a really interesting avenue: having the ability, as an AI user, to really use AI as a thinking partner as well. And for me personally, this is how I use it a lot. I use it as a red team, to say, what are the holes in this argument? Or how could this be better?
But we at BIT have [00:27:00] also been playing around with building what we call reflective LLMs, so models that can prompt you and ask you to think through a particular problem, or think about it from a different angle, with the goal of coming to a less biased answer as well.

Hugo: I love the idea of having LLMs that prompt you, and I do think the future of a lot of agentic AI will be LLMs, or otherwise AI systems, that come and say, hey, I noticed this, or this is happening, or even, hey, you just got this email that you really need to respond to, Hugo.

Lis: Yeah, absolutely. And actually a colleague of mine, Tony Zen Price, was talking to me the other day about an idea that I think is fantastic: having an LLM that could keep track of all of your conversations across different platforms, different AIs, and basically give you advice on how you are collaborating with AI, and pull [00:28:00] back the curtain on what kind of habits you are forming and what might help you get more out of your interactions and collaborations with AI. And I think that could be a hugely valuable product for a lot of users, particularly in a professional setting.

Hugo: There's so much fertile ground here for so many different types of products. The other thing I love that you mentioned is red teaming your own ideas, and you have to do that because, of course, of the sycophancy problem. Perhaps it's gotten better, perhaps it hasn't; I think the jury's out on that, for me at least. But if I give an AI something and say, what do you think of this? It'll usually say, that's the most amazing idea I've ever heard, Hugo. Whereas if I go to a friend with it, she'd be like, dude, you need some other ideas. But when you do prompt them to red team your own ideas, they can be highly sophisticated and very good at that.

Lis: Yeah, absolutely. And a lot of listeners, I'm sure, will already be doing something like this, but getting [00:29:00] AI to role-play, and saying to it, imagine that you are a top behavioral scientist, or a top data scientist, or a philosopher, or whatever it might be. And there's a lot of great theory underpinning red teams. They draw their inspiration from a military setting, and they're really designed to try and break down groupthink and path dependency, and to try and open your eyes to what the different possibilities are here, and also what some of the holes are in what you're trying to do that maybe you individually, or you collectively, have chosen to block out or have inadvertently blocked out. And I do think, with the current generation of models, it's a fantastic way to use them.

Hugo: Totally. And speaking of blocking things out, consciously or otherwise: the alignment section in your work warns about AI reinforcing biases in a feedback loop, which isn't necessarily [00:30:00] new, we have technologies that do that already, but this is different in a lot of ways, because it's personal. On social media, echo chambers at least are public in some sense, though not in the sense that broadsheet media was. So I'm wondering if you would tell us a bit about how you think about the reinforcing of biases in these intimate interactions with AI?

Lis: Yeah, so when we were researching this section, a couple of things became apparent. The first is that the influence between AI and humans is bi-directional. AI is influencing us, but also we are influencing AI.
And what we are seeing is things that we call chat chambers. So a lot has been written about how AI is trained on data that includes a whole host of cognitive biases, because it's trained on data that's produced by us as humans and is a reflection of that. So what's coming [00:31:00] out of AI models already has some inherent bias baked into it. Then, when we interact with AI, we bring our own biases to those conversations, and then they get reinforced. And what we're seeing is that this can create a downward spiral where those biases become entrenched and reinforced, in what's called a chat chamber. So that's the first part. The second part is that AIs are powerful influencers of human behavior, and we see that across a whole host of different settings. Again, a lot of people will be familiar with some of the studies looking at the way our language is evolving: literally, the words that we are using in day-to-day conversations and discourse are changing based on our interactions with AI. We are using the word delve a lot more, which I [00:32:00] think is sad, that it's now become associated with AI, because I think it's actually a great, expressive word.

Hugo: I used to love em dashes occasionally, and I still love an em dash. Yeah.

Lis: But now there's a sense that you can't use those words, or you can't use an em dash, because it's a hallmark of having written that with AI, which still has a tinge, well, more than a tinge, of that's not a good thing. Like, it's still not seen as a good thing to be transparently using AI in your writing. And so we're seeing they're powerful influences on how we communicate. They're also very powerful persuaders. We talk about a study in the paper that looks at how effective LLMs are at persuading us of what is true, but also at deceiving us. And so we are seeing this powerful influence, and we're only really just beginning to dig into what that really means. What does it [00:33:00] really mean to have AI systems influencing our decisions and our behavior in such a direct and powerful way? Let me give you a couple of real-world examples of why that matters. If you are an AI company thinking about alignment and how to align AI models to human preferences and behavior, you might be thinking about scenarios like: somebody comes onto Claude or ChatGPT or Gemini and says, who should I vote for in the upcoming election? There's lots of different ways that conversation could go. If we take the US as an example, it could say, let me point you to the official Republican and Democratic websites so that you can check out your candidates and their policy platforms. It could try and have a conversation [00:34:00] with you to say, what do you really care about? What do you want to see happen as a result of this election? And therefore, let me try and match you to a candidate who shares your values and shares your priorities. Or it could just say, I can't have this conversation with you. And whichever path it goes down really matters. It really matters for individual choices and behavior, and it also really matters for society. And so what we are looking at in align is what some of the strategies are that we can use to build greater alignment between our existing preferences and values and what AI is persuading us towards. And then in adapt, we also think about what that really means at an aggregate level, and how do we as a society shape AI systems that align with our [00:35:00] preferences,
How do we as a society shape AI systems that align with our [00:35:00] preferences? Brit large. Hugo: So if I'm an AI leader deploying AI tools in my organization. What specific guardrails should I be thinking about putting in place? Lis: Yeah, it's a great question. It's a really important question for organizational leaders. In the paper we talk about inference time adaptation, and essentially how you can adjust AI models so that they're less bias confirming, and at a very simple level, you can build models that are being used in your organization. That are designed to encourage or discourage particular behaviors or biases. So to make that real, so at BIT, we use Gemini. So you could build a gem that is, for example, less sycophantic and roll that out as the default model across your organization. And I think there's a lot of opportunity to be tailoring models in that way, and there's reason to think that's really [00:36:00] consequential for. The way that staff across the organization are making decisions, so. We also publish in the paper the results of a new experiment that we ran at BIT with about 4,000 people across the US and the uk. And what we did is we gave them four very classic behavioral bias scenarios. So things like the sunk cost fallacy or the decoy effect. And we then randomized them to look at that scenario either without an LLM. With an LLM that they had to click to engage with. So they had the choice of whether to engage with it with an LLM that they had to engage with or with the LLMI was describing earlier that was more reflective and helped them reason through the scenario. And what we found in three out of four of those scenarios is that AI does de-bias our decisions, but only where [00:37:00] we are forced to use it. And also the impact of it didn't endure beyond having the LLM assistant. So when we took the LLM away and we asked further questions, people's biases crept back in. So if you are an organizational leader, thinking about alignment and thinking about how to. De-bias decision making across your organization. You can really be thinking about how you tailoring the LLMs that you're using in the organization to try and support that kind of decision making across your employees. Hugo: We'll definitely make sure to share that paper and that experiment in the show notes as well. Something that's personal to me and to a lot of us. You raise concerns about cognitive atrophy and over reliance on ai. I for one. Feel that I have AI notetakers come to most meetings now. Mm-hmm. And it's not, I'd be, it'd be tough to pinpoint where and when, [00:38:00] but I'm almost certain that my memory has already, perhaps I was gonna say, suffered a little, that may be too strong a term, but I'm wondering if you can tell us a bit about how you think about cognitive, a atrophy and over alliance here. Lis: Yeah, absolutely. And a first thing to say is that. At the moment, it's really, it's a feeling that each of us have and an intuition, and actually a lot of the evidence is extremely early stage and experimental and pretty mixed as well. But anecdotally, people are saying that they're starting to feel less equipped and less able to do the types of tasks that they're delegating wholesale to ai. So there's two aspects to this. 
The first is cognitive offloading. There's been some good research showing that people are delegating some types of cognitive tasks, particularly summarization [00:39:00] and search, pretty much wholesale to AI, and people are therefore spending less cognitive effort on those types of tasks as a result of working with AI. And that kind of cognitive offloading could be appropriate in certain scenarios. Back to your example, Hugo: is it really a good use of your time and your intellect to be taking notes in a meeting? If you can have an AI notetaker take a really accurate, incisive note of a meeting, is that not a better model, where you can then focus your energy and attention and cognition on being present and driving that meeting forward in the moment? So I think there are aspects where that offloading is good and positive. And then there are other areas where there's reason to think that if you do that too many times, it leads to what's called cognitive atrophy, [00:40:00] really the diminishing of those skills in a way that means we are becoming less able to do certain types of tasks, particularly reasoning and writing unassisted. And I think there are a lot of different perspectives on whether that's a good thing or whether it's a cause for existential concern. What we talk about in the paper is that there are lots of different aspects to this. We talk in the paper about what's called the extended mind, and this is a relatively old theory in terms of AI research; it predates this generation of AI models. It's a theory that was introduced by two philosophers, and their argument is that we've always had these tools, whether it's writing or GPS or calculators, that have taken certain aspects of our cognition outside of ourselves. They've [00:41:00] externalized our cognition. But in most of those cases, it hasn't completely gotten rid of the underlying skill. So when we started using GPS, we didn't lose the ability for spatial reasoning and navigation, but we changed it; we changed the way we did that in partnership with GPS. And the same is true of calculators. They didn't eradicate mental arithmetic, but they did change the way that we thought through numerical problems. And this is not a new phenomenon. Back in ancient Greece, Plato was writing about Socrates fearing that writing would implant forgetfulness in men, because they would cease to exercise their memory and would just rely on written records, and that this would mean our capacity for memory would diminish. So this is something that's been with us for [00:42:00] the history of humanity. And my most optimistic take on it is that there is good reason to think that AI can be an extension of the mind, that it can make us more creative, that it can extend the possibilities of human cognition and human capability, but it needs to be used in a really thoughtful way. And we need, as leaders of organizations, as AI and data science leaders, to be equipping people with the skills to be able to use AI in an augmented way that guards against that cognitive atrophy and is very thoughtful and deliberate about what gets offloaded to AI and what stays within ourselves.

Hugo: I love the idea of being that mindful and that thoughtful and principled about it. I also love that even the ancients were thinking along these lines, in particular Plato.
My understanding is he didn't like [00:43:00] the introduction of writing. In one of his letters, he talks about one of the reasons he wrote dialogues: not only was it an important form at the time, but he didn't want to be misquoted or misattributed. He didn't want anything to be attributed to him, so he was like, I'm going to write dialogues so no one can misquote me out of context. I am wondering, when we think about these risks of cognitive atrophy and over-reliance on AI, how would you encourage leaders to think about balancing encouraging adoption with avoiding these risks?

Lis: Yeah, so I really think it is about AI skills development. And again, some of the early experimental work points to the skills and habits that we can develop that guard against cognitive atrophy. So for example, the order in which you do things seems to be very consequential for creativity, but also for the quality of the final output. It seems that thinking [00:44:00] independently for a period before you go to AI is more likely to lead to AI being an augmentation, a thought partner, and to a better quality output. But it also preserves that ability and that skill to do the independent thinking first. Also, working with your teams to help them develop really useful prompts and prompt libraries that help them get the best out of AI, and helping the team navigate where AI is useful, where it adds value, and also where it does not. And again, there's really interesting emerging evidence about adding in frictions and pauses that might help people discern those moments where actually some independent thought, or a change in direction, would be useful for the quality of the output. I also think something that we've done at BIT [00:45:00] is really allowing experimentation and playfulness with AI, and really encouraging your teams, giving them the license to say, what do you find this useful for? And when you find a good use case, share it widely and let's think about how we can scale that up.

Hugo: I love that. And something else in your paper: the amount I hear that X percent of executives and middle management expect 30% productivity gains, or whatever these numbers are. And yet I see, actually, when they use AI a bunch, they may report productivity decreases. Part of the reason for that seems to be that they're not given space to experiment and play, and they're expected to use AI in addition to their current workload, which may be a bit too much already. And something you make very clear is the need to carve out space for this experimentation and use of AI.

Lis: Yeah, absolutely, and giving teams license [00:46:00] and support to do that. And I think one concrete thing that organizational leaders could take from this: if you haven't already, and you use Slack or Teams or any kind of platform like that, create a channel that everybody in your organization can join. At BIT, our channel is called AI BI. Let people play and let people experiment, and have a sense of genuine bottom-up innovation across the team: how are people using it? What are they finding effective and useful? What are they finding ineffective and a drag? And have people share those examples of, yesterday I did this, it saved me this much time, but actually I think if I tweak these three things it would be even better. And then let people build on each other's experimentation and innovation.
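As a concrete illustration of the "think independently before you go to AI" and "add frictions and pauses" ideas above, here is a minimal sketch of a reflective, red-teaming wrapper. It is not BIT's reflective LLM; the call_llm helper, the system prompt wording, and the 30-word threshold are all illustrative assumptions used to show the pattern.

```python
# Minimal sketch of a "friction before AI" workflow: the user must draft their
# own answer first, and the model is then prompted to critique rather than agree.
# call_llm is a hypothetical placeholder for whatever model API your org uses.

REFLECTIVE_SYSTEM_PROMPT = (
    "You are a critical thought partner. Do not simply agree. "
    "Point out weaknesses, missing evidence, and alternative options "
    "in the user's draft before suggesting improvements."
)

def call_llm(system: str, user: str) -> str:
    """Placeholder for a real model call (Gemini, Claude, ChatGPT, etc.)."""
    raise NotImplementedError("wire this up to your organization's model API")

def reflective_review(question: str) -> str:
    # Friction step: require some independent thinking before the model is used.
    draft = input(f"{question}\nWrite your own answer first:\n> ").strip()
    if len(draft.split()) < 30:  # arbitrary threshold, purely illustrative
        return "Draft is very short. Spend a few more minutes thinking, then try again."
    # Red-team step: ask for critique of the draft, not a replacement for it.
    return call_llm(
        REFLECTIVE_SYSTEM_PROMPT,
        f"Question: {question}\n\nMy draft answer: {draft}\n\n"
        "Red-team this draft: what am I missing or getting wrong?",
    )
```

The draft-first step is one way to preserve the independent thinking Lis emphasizes, and the structured critique prompt echoes her observation that the de-biasing benefit showed up when engagement with the assistant was built into the task rather than left optional.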
[00:47:00] And for BIT as an organization, that has led to an incredible vibrancy in the way that the team is using AI, but also, yeah, an incredible license to be creative and to be thoughtful about it as well.

Hugo: That's such a wonderful piece of advice. It's time to wrap up, Lis, but I just wanted to mention that you are partnering with select organizations to help people with these types of things, right?

Lis: We are. So we already have a number of partnerships, as I mentioned, where we are using AI across behavioral research projects, but also across every pillar of this paper: from how we improve model development itself, through to how organizations drive deep, thoughtful adoption, through to the ways in which we can build strong, thoughtful alignment between AI and human values, preferences, and behavior, through to how we [00:48:00] manage the societal implications of AI and become much more deliberate and thoughtful in how our social norms are evolving. Across all of these areas, we would really welcome partnerships in order to test ideas, to scale ideas, and to help organizations create value from their use of AI. So if you listen to this and any of these ideas feel compelling, interesting, and value-adding, please get in touch. We would love to have a conversation about what we could do together.

Hugo: Fantastic. And I'll put the details in the show notes, but it's info at bi.team for those listening.

Lis: Exactly right. And if you download the papers, I have two co-authors on these papers, Dr. Michael Holsworth and Dylan Maru. This was very much a team effort between the three of us, and all of our emails are at the end as well.

Hugo: Fantastic. Thank you once again, not only for your time, Lis, but for all your wonderful work and for sharing the fruits of your [00:49:00] labors with us.

Lis: Thank you, Hugo. It's been a real pleasure.

Hugo: Thanks so much for listening to High Signal, brought to you by Delphina. If you enjoyed this episode, don't forget to sign up for our newsletter, follow us on YouTube, and share the podcast with your friends and colleagues. Like and subscribe on YouTube, and give us five stars and a review on iTunes and Spotify. This will help us bring you more of the conversations you love. All the links are in the show notes. We'll catch you next time.