The following is a rough transcript which has not been revised by High Signal or the guest. Please check with us before using any quotations from this transcript. Thank you. === Bozena: [00:00:00] A lot of what's hard about speaking a language is just this anxiety and not wanting to embarrass yourself. And this practice with a bot in particular is very helpful because you don't feel the same pressure, and then you get going, and then it gets easier. Hugo: That was Bozena Pajak, VP of Learning at Duolingo, on the role of AI in overcoming one of the biggest hurdles in language acquisition, speaking anxiety, and how AI can help. On the flip side, when we asked Bozena about the biggest unlocks with respect to AI at Duolingo, it wasn't a technology at all. It was the human side of things. Bozena: One of the biggest breakthroughs that allowed us to move forward was actually retraining my team to do a lot of this work with AI. Hugo: Bozena was Duolingo's first ever learning scientist and has spent the last decade pioneering the integration of cognitive science and statistical learning into [00:01:00] the company's product DNA. In this episode, we explore how Duolingo transitioned from an engineering-led startup into a leader in evidence-based education, as Bozena shares insights on building a robust research function within a fast-moving tech culture and explains why long-term learning outcomes often require different optimization signals than short-term accuracy or engagement. We discuss the evolution of AI at Duolingo, from the personalized difficulty models of Birdbrain to the current generative frontier, where AI characters provide low-stakes, high-impact conversational practice. We also talk about how Bozena's team leverages agentic workflows to scale content, and why the next wave of personalization involves shifting from difficulty levels to thematic lenses tailored to specific user interests.
On top of this, we also cover the counterintuitive finding that deterministic paths often outperform learner autonomy when it comes to long-term retention. If you enjoy these conversations, please leave us a review, [00:02:00] give us five stars, subscribe to the newsletter, and share it with your friends. Links are in the show notes. Bozena is also hiring for some wonderful positions, and we've included them in the show notes. I'm Hugo Bowne-Anderson, and welcome to High Signal, brought to you by Delphina, the AI agent that supercharges your analytics. Hi there, Bozena, and welcome to the show. Bozena: Hello. Thank you for having me. Hugo: It's such a pleasure to have you here, and I'm so excited to jump in and talk about what you're up to at Duolingo and what you've seen there over the past decade. I don't think I realized this until just before we jumped on the call, but you've been at Duolingo for nearly 11 years now. Bozena: That's right. It is really hard to believe, but yes, I joined mid-year 2015, so it's been a long time. Hugo: Amazing. Congratulations on your decade-plus work anniversary. Bozena: Thank you. Hugo: And also, I mean, it's almost unheard of in tech, so it [00:03:00] must be such an incredible place to work as well. Bozena: Actually, at Duolingo I'll say this is not so unusual. People tend to stick around, so we're very proud of our culture, of how awesome of a place it is that people stay for a long time. And we even have some boomerangs, people who leave at some point and then come back; they decided it was actually better at Duolingo. Hugo: Amazing. That says something. Well, you're speaking my language, as I am Australian as well. So I'm interested: you were the first learning scientist at Duolingo a decade ago, and your background is in linguistics and cognitive science. Your research was on attention, memory, and language. And of course, all of us know Duolingo as an app.
We're gonna go learn languages, among other things. But I'm wondering how your background, and I suppose the research background of how Duolingo approaches things, has helped you think about [00:04:00] designing learning experiences. Bozena: Yes, my background is in linguistics and cognitive science. My research back in academia was really on how people learn, especially how people learn languages. I was fascinated by that, and especially by how it works in our minds, in our brains: how we can take advantage of what people already know to teach them new things, to help them connect things, generalize. So I did a lot of little experiments teaching people parts of a language and then trying to see how to make that better, and that turned out to be great background, great preparation for my job at Duolingo. As you said, I was hired as really the first learning scientist, and so I had this amazing opportunity to really shape our approach to teaching. The way it played out is that I was able to bring all that knowledge of the research I knew about, and the research I did myself, [00:05:00] and then get people excited to try to implement it on Duolingo. One interesting thing is that my expertise was really in this kind of foundational research, generally thinking about cognition, and not research that's more applied, like in a classroom context, how people learn in a classroom. That's another field that's more applied, where you think about how learners use digital products in the classroom. The consequence of this was that we were really able to take that foundational research and think about, okay, what does that mean for application in general? We were the ones trying to innovate on the application side, and I think this really gave us a lot of creativity and the ability to just try things that had never been tried before.
And that kind of innovation mindset, I think, [00:06:00] continues to be there in terms of applying learning science research to Duolingo. One example is how we still rely a lot on an area of research called statistical learning. I don't know if you've heard about it, but it's essentially about how humans learn the patterns around us. It turns out that our brains are actually amazing at learning patterns, at picking up on patterns. That's how we learn all sorts of things. For example, that's how we learn our first language. This field really started by investigating how babies learn their first language, which was thought to be such a difficult problem. Researchers started figuring out that babies actually pick up on the regularities in the language they hear. And then the research started growing beyond language, to all kinds of regularities: visual regularities, music, math. We started [00:07:00] tapping into that research, really trying to take insights from there. Okay, how is it that this statistical learning works? How do people, without even realizing it, pick up on those patterns? That's what we are trying to apply in our product continuously, and I still go to conferences to really follow that research. Hugo: That's fascinating, Bozena. In fact, what I'll link to in the show notes is a wonderful blog post on your blog by your colleague Cindy Blanco from a few years ago about statistical learning, and in particular how children learn as well. And hopefully when we start talking about habit formation, this may come into play there as well. I am really interested in bringing science into tech startups and tech companies. I'm wondering, as Duolingo's first learning scientist, what did it actually look like?
Bringing science and research into a product team at a tech startup, and now your team has [00:08:00] grown to over 40 people, so I'm wondering what the trajectory of that function has been like. It's been eminently successful. Bozena: Thank you. But yes, it's definitely been a long journey. When I started, I will admit, it was pretty hard. It took me probably about two years to really build trust, to really be able to have impact, because when I was hired, clearly the company thought they needed someone with this type of expertise, but when I joined, they really were not sure what exactly to do with me. They had their own way of working on the product itself, with their engineering solutions. There was a big focus on AI, on being really smart and just automating whatever we could automate, and it was really hard to plug into that and convince people that maybe there were some things we should reconsider, maybe have a slightly different approach. So that was [00:09:00] hard. But after two years of just gradually proposing ideas, being able to test some of them, and gradually showing that, oh, actually there's payoff from what I'm saying, we see wins in our engagement metrics, we see wins on the learning side when we try to measure learning outcomes, then people really started trusting much more. I was able to hire the first person, actually starting with just an intern. I got an intern, and then we hired that intern full time, and then I was able to gradually hire more people. So now we are much more integrated with the product teams, because the way we work is that the product teams are cross-functional, so learning scientists and learning designers work there alongside product managers, product designers, and engineers. People are there on the ground to really hash out what we should be doing, how we should be building [00:10:00] the different learning experiences. And people know what
learning scientists do, what learning designers do. But it's still tricky. For example, every time we hire someone, maybe more senior, who has worked at some other tech company, they've never encountered this role in general. They've never worked with learning designers, with learning scientists. And so it's again trying to establish: okay, what is this role? What do you contribute? So I feel like it's a continuous work in progress, and I have to keep telling people on my team that they are pioneers. They are there to innovate. They are there to really figure out how to influence all their collaborators and explain things in such a way that will resonate with other functions. And that is hard, and that is just something that we have to face every day: lots of ambiguity, lots of [00:11:00] compromises, lots of trade-offs. That is just part of the role. So I feel like that just won't really go away, even though now we're so much better integrated. Hugo: Duncan, I'd actually be interested in your thoughts on this. I've never thought of it like this, and of course there are lots of differences, but there are actually a lot of similarities to data teams, I think. Right, Duncan? In terms of becoming known as value creators and not a cost center, having a lot of dotted lines between teams, and having to educate people on what you're actually doing. Duncan: Yeah, a hundred percent. My heart kind of leapt into my throat when I heard you describe your experience early on at Duolingo, Bozena. I actually was one of the first data scientists at Wealthfront, the fintech company, about a decade ago, and my background's in economics.
And at the time, one of the things I was bringing was actually some behavioral economics understanding to the firm: thinking from a behavioral perspective about how we should design our interfaces so people would make sound financial decisions, and how we might offer financial advice through the product [00:12:00] as well. And similarly, the firm kind of knew this was a good idea. It clearly made sense conceptually. But how you actually stapled me into a product team with engineers and product managers and designers who don't really know what I'm doing was an adventure, and ultimately I think valuable for both sides, but definitely an adventure. And you're right, Hugo. I think that data teams are often also that maybe fourth leg on the three-legged stool, and it's not entirely clear how they should work within an organization until the organization builds the right muscle to actually have them be incorporated, either embedded or completely married to a given team. But it's always a journey, especially with new leaders from organizations who haven't heard of that before or dealt with it before. I am curious, actually, if you could just mention briefly: how do your teams integrate with the tech teams? Is it kind of a matrix model? I'm actually just kind of curious, tactically, how do you make it work? Bozena: Yeah, that's right. So we have a matrix kind of structure: the different roles, the different functions, each with their leaders. I'm the head of this [00:13:00] function, which we call learning and curriculum, and then all the product work is cross-functional.
So we have teams that are cross-functional, led by cross-functional leaders, typically just a product manager and an engineer, but increasingly there could also be a learning designer for teams that are focused on teaching, on learning, which helps a lot. And then there are different layers of this structure: teams are grouped into what we call areas, and areas are grouped into what we call pillars. We have a language learning pillar, a new subjects pillar (because Duolingo has been expanding to new subjects), a growth pillar, a monetization pillar. Learning designers and learning scientists are in language learning and also in new subjects, and they are embedded in the teams that own the different features. The teams themselves change pretty frequently, because they [00:14:00] own certain problems to solve, or they own certain features. So whenever they are maybe done with something, we might restructure, or when we want to pivot or reprioritize. Those teams are pretty fluid. What's really constant is your own role, your own function that you belong to. Duncan: Makes a lot of sense. I'm curious to actually dig a little bit more into the AI angle, which you mentioned earlier. I think Duolingo has a product experience that has almost certainly had AI from the very early days, some version of AI in the product, because it was really, I think, designed around early personalization and conversation with a user. But I would be curious if you could walk us through that. You've obviously seen this firsthand: what was the arc of AI at Duolingo, and how has that played out? Bozena: Yeah, so AI definitely has been a big part of the company from the very beginning. We had AI experts before we had learning experts, and the very first types of things that we [00:15:00] worked on were really in the realm of personalization.
And so really thinking especially about personalizing the level of difficulty, or also playing a little bit with the sequence: how exactly we're showing you the content that we have. The curriculum was the same for everyone, but then we tried to pick, specifically for each person, okay, what exact piece of content is really best suited for you? We developed this model called Birdbrain, which is our model of student knowledge, and that's been very powerful. We still use it. Essentially, what we do is look at the history of every user, how they've been interacting with the app, how they've been answering different types of exercises, and based on that we make predictions: for any new exercise, how likely are they to get it right? And based on that, we can [00:16:00] change the experience, personalize it. That's been very successful, because it has really allowed us to give the right level of challenge to every person. Of course, it's still not perfect. We continue tweaking it, because different learners, different people, have different thresholds for how much they want to be challenged, how much they already know, how quickly they're learning, and so on. So that's been a really fantastic area to focus on. Then we also use AI quite a lot for assessment. You might also know that Duolingo is not just the app. We also have what we call the Duolingo English Test, which is a separate product: a high-stakes language test that we offer to people when they need to certify their English knowledge, for example to get into a university [00:17:00] in an English-speaking country, or to demonstrate proficiency for a job. That test from the very beginning has been based on AI, really innovating on how to design new items that would be really sensitive, and really going beyond what's been established in the assessment field.
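As a rough illustration of the kind of prediction Birdbrain makes, here is a toy logistic model: a learner-ability estimate derived from past accuracy, compared against an exercise's difficulty. The function name, feature choice, and scaling are all invented for illustration; Duolingo's actual model is far more sophisticated.

```python
import math

def predict_correct(recent_results, exercise_difficulty):
    """Toy estimate of P(correct) for a learner on a new exercise.

    recent_results: list of 1/0 outcomes on past exercises.
    exercise_difficulty: higher means harder (roughly -2..2 here).
    Illustrative only; not Duolingo's real Birdbrain model.
    """
    # With no history yet, assume even odds.
    accuracy = sum(recent_results) / len(recent_results) if recent_results else 0.5
    ability = 4.0 * accuracy - 2.0  # map [0, 1] accuracy onto roughly [-2, 2]
    # Logistic: probability rises as ability exceeds difficulty.
    return 1.0 / (1.0 + math.exp(-(ability - exercise_difficulty)))

# A learner who got 9 of their last 10 exercises right, facing a
# moderately hard exercise:
p = predict_correct([1] * 9 + [0], exercise_difficulty=1.0)
```

A prediction well above 0.5 suggests the app could serve something harder; one well below suggests easing off, which is the personalization loop Bozena describes.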
So that's been really interesting. At the beginning it was pretty controversial, and the more established assessment researchers and companies were very skeptical about our test. But over time, we've really managed to demonstrate that the test is rigorous, and it is now accepted by many universities around the world. So that's been also a big success, in that we are also applying parts of what we developed for the Duolingo English Test in the app, to give us some signal about how well people are learning. So that was kind of the second group. And the third, I would [00:18:00] say, is generative AI. With generative AI, many new opportunities open up. One of them is content generation. Generative AI is great at generating content, and so we've been able to accelerate creating our courses a lot. There's so much content that we want to add, that we want to iterate on, and it would have taken us probably a hundred years to launch the amount of content that we have launched in the past couple of years with the help of AI, still keeping humans in the loop, but so much faster when using AI. Within that generative AI bucket, there are also new interactive features that we've been able to build. Something that we launched is what we call Video Call with Lily, where you can actually practice your language, practice conversation, with a bot. In this case, it's one of Duolingo's [00:19:00] characters, Lily, who's an unimpressed teenager, so it's fun to talk with her. This is something that we've been wanting to do for a very long time, but there was just no technology to really pull it off. With generative AI, that's now possible. You can have just a regular conversation with a bot. Also with generative AI, we've been able to give people much better feedback. Before, what we show you when you make a mistake was hardcoded.
Now we've been able to be much more flexible and also personalized for a given user and their mistake: tell them exactly why they made a mistake, give them some helpful tips. So that's also been a game changer, and we're still iterating on all those things and coming up with new use cases. Duncan: I'd love to double-click a little bit on the assessment piece there. Every company I've worked at has struggled to measure [00:20:00] real long-term effects outside of the app: what are we actually doing in the world for our users? And I imagine in your world that's exponentially harder, because you're trying to figure out whether you're actually helping your users become better at the things they want to become better at. How do you think about that? Maybe we can talk more about how you even put your arms around that problem. Bozena: Yes, definitely a hard problem, and it's very hard to measure learning the same way you would measure things like user retention or daily active users. One reason why this is hard is just the timescale. Learning takes a long time, and so it's difficult to have a metric that you just measure in, like, a two-week window, which is often what you can use to measure, let's say, retention. So that's one big challenge: you need to wait quite a bit longer to see, okay, are people learning? Learning is [00:21:00] also non-linear. Sometimes, short term, you might think that something looks like an improvement because people's accuracy goes up, but it might turn out that longer term they learn worse than with some other method, where maybe initially it just takes them longer to figure things out, but longer term they could take off. So sometimes it's really hard to know what state we are in: are people actually learning, or is this just short term? So that time window is the big challenge. We generally try to do different things.
One, the most rigorous thing that we do, is actual research studies run by our learning scientists, recruiting people from among Duolingo learners and actually testing them with independent measures, so not measures that we developed ourselves but independent tests, and trying to [00:22:00] control for various things, like what other methods they're using. We generally try to recruit people who, at least by self-report, are only using Duolingo, ideally people who from the beginning have been learning just with Duolingo. We try to be very rigorous about it and then run a variety of tests. Sometimes we just do snapshots: okay, let's recruit people who just finished a particular part of a course, let's say the first three sections of the Duolingo Spanish course, and then see where they're at, where they're scoring, and think about, okay, this is what we would expect them to be at at a certain level; how do they compare to that? We test different skills. The easiest ones to test are reading and listening; the harder ones are speaking and writing, but we've been trying to test across all those different skills. Sometimes we do studies that are even more rigorous, so we do a pre-test and a post-test and [00:23:00] really follow people as they study on Duolingo for a certain amount of time. It has to be at least a month, so usually those studies are even longer, several months, so that we can really see the effect. That has been very helpful, because we feel like we get really good signal on how well our product is working. And what's been interesting is that over the years, as we've been running those different studies, we are also able to see how the same assessments compare across different cohorts of people. When we look back at, let's say, 2020, people were scoring decently; there were solid learning outcomes.
But when we look at 2024, there's a big jump in what the learning outcomes look like, finishing the same course and the same section: people are just scoring much higher at the [00:24:00] same point in the course. And over those four years, we made lots of improvements to our courses, to our experience, so that gives us confidence that, okay, those changes actually made a big difference. Of course, the big downside of all of this is that it is slow, so this is not something we can use to inform really fast iterations. This is something we need to wait for. We try to run some faster studies, but again, they just have to be at least a few weeks long, and generally we go at a faster pace with other types of changes. So then we try to get more different types of signal. We definitely leverage basic engagement metrics as well, because that's a pretty fast signal on how well something is working, especially something like retention, because we can quickly gauge [00:25:00] whether some experience that maybe we're trying to introduce is frustrating or not. Are people dropping out? We look at time spent learning: are people spending more time? We look at the difficulty of the content that we're showing, because, again, I mentioned that we personalize, so we can adapt to how much people can really handle. So we can look at, okay, with this one change that we're introducing, are people starting to see content that is, on average, more difficult, or do we need to start showing them easier content, in which case it seems maybe people are not learning enough, they're getting things wrong. So that's also a useful signal that's pretty quick.
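The pre-test/post-test studies described above are, at their core, a before/after comparison on the same assessment. A minimal sketch of how such a gain might be summarized follows; the scores are made up, and the choice of Cohen's d as the effect-size measure is an illustrative assumption, not Duolingo's reported methodology.

```python
import statistics

def cohens_d(pre, post):
    """Standardized pre/post gain: mean improvement over pooled SD.

    Illustrative only; real studies also handle attrition, control
    for other study methods, and compare cohorts over years.
    """
    gain = statistics.mean(post) - statistics.mean(pre)
    pooled_sd = ((statistics.variance(pre) + statistics.variance(post)) / 2) ** 0.5
    return gain / pooled_sd

# Hypothetical scores on the same assessment before and after a
# multi-month study period:
pre_scores = [40, 45, 50, 55, 60]
post_scores = [55, 60, 65, 70, 75]
d = cohens_d(pre_scores, post_scores)
```

A d near or above 0.8 is conventionally read as a large effect, which is the kind of signal that makes waiting months for the post-test worthwhile.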
And then, as I mentioned, we are trying to leverage those different types of assessments that we've been developing for the Duolingo English Test. So now, something that's really exciting is that we use Video Call with Lily to actually assess people's proficiency, because we can just analyze [00:26:00] those video calls and get a quick signal: okay, this is our estimate of their proficiency in speaking, and, as a proxy, their overall proficiency. That's very exciting, because those video calls can be pretty short, and we just need to capture a few of those calls to really have confidence about the person's proficiency level. So that's definitely something that we'll be exploring even more in the future. Hugo: That's all incredibly exciting, and I actually don't think many people know or appreciate the breadth and sophistication of the research that happens with respect to learning at Duolingo: everything from longitudinal studies to pre/post evaluations and everything you're doing with Lily at the moment. And I do agree, the type of high signal you can get in a brief amount of time with those types of calls is amazing. I'm interested in what you can share about, [00:27:00] I suppose, the most surprising findings you've found about what actually helps people learn, or what doesn't help people learn as well. Bozena: Yeah, lots of interesting findings, and some of them are maybe not surprising. Okay, you need to spend a certain amount of time learning to actually get to better outcomes. Hugo: Is it 10,000 hours, though? No, I'm half joking. Bozena: No, it can definitely be much less, depending on your goals. But something that I'll share, one interesting result, was actually around people's ability to communicate when using Video Call, and we've seen that.
You actually don't need a very long time to start having very basic conversations, because often people assume that it will be months before you can do anything. But we've seen that just after four to six weeks, people are able to formulate their own thoughts, their own messages, especially in writing, which is a little bit easier, but also orally. So that was a pretty exciting result to see: hey, this is actually working, this kind of conversation with a bot, even if these are short conversations, not that many of them, not for that long. This can be pretty effective, probably because it's just helping people unblock a little bit and just go for it. A lot of what's hard about speaking a language is just this anxiety and not wanting to embarrass yourself. And this practice with a bot in particular, we think, is very helpful, because you don't feel the same pressure, and then you get going, and then it gets easier. One interesting finding also came when we switched just the UI, the navigation of the courses. The older Duolingo had what we call the tree, where you navigated a certain way: there were some options to go horizontally, and you could go [00:29:00] down. Then we changed it to what we call the path, which is more linear. We made this change mainly to help people navigate the whole experience, being clear about what we recommend they do next, and that actually had an impact on learning outcomes, which was interesting. We didn't expect it. We thought it would be pretty similar, but actually giving people those stronger recommendations and making it easier for them to navigate the app helped. Maybe just another thing I'll say is that we've had by now also several findings
about how Duolingo compares to classroom teaching, and maybe what's been surprising is that learning on Duolingo can actually match, or in some cases even be better than, learning in the classroom, which [00:30:00] was surprising to us, because we would have assumed that the classroom experience is still superior. So it was exciting for us to see that. We still think we have a lot to do to improve our experience, but it's already working pretty well if we can match the classroom and, again, sometimes even be better. Duncan: Duolingo now teaches more than languages: math, music, chess, even reading for young children. My kids use Duolingo to learn how to read. How has the expansion into new subjects changed your approach to learning design and thinking about outcome measurement? Bozena: It's been so interesting to expand to other subjects, because we had some intuitions about how well we could apply our learnings from developing the language app, but we didn't really know, okay, how well would it work? It's been really exciting to see [00:31:00] how well the basic principles do indeed transfer, and we can use, underlyingly, the same type of methodology across those different courses. So we started math, music, chess. We're definitely thinking about other subjects, so who knows what will be next, but in general we're very interested in expanding, maybe doing science generally, physics. And we feel like, again, those basic principles really still work: trying to teach you in a way that's largely just learning by doing. This is our whole methodology, making sure that you engage with the material in a very active way, in a very hands-on way. You just do exercises, and that really transfers to all those subjects. We've found that this is a very engaging way of learning, where we just continue making sure that you interact with the material very actively.
[00:32:00] But at the same time, we've definitely seen that every new subject requires slightly different treatment. With each subject, I feel like we started from maybe one point, and then each subject ended up diverging. For example, in math, we really started focusing much more on the breadth, the content coverage. That started to feel like a real requirement for a good math course: that we can offer many different topics and really connect them to the real world, so that it feels more relevant to learners. For music, we realized that, okay, this experience just has to be amazingly fun, even more fun than what we were trying for with language. We are really experimenting a lot with even more animations, different [00:33:00] visualizations, lots of interactivity, lots of colors, lots of fun stuff. Chess is already a game, so that was also interesting: it's a game, but how can we build lessons that guide you through it? So we came up with lots of bite-sized puzzles. What's interesting is that it hasn't just been language that inspired the other subjects and what we do there; as we started developing new things in the new subjects, they started inspiring some changes on the language side. For example, in chess, we've developed slightly more explicit tips that one of our characters, Oscar, gives you. When you do your little puzzles, Oscar tells you, okay, here's this move, move here, and here's what happens. They're very brief and often [00:34:00] very funny, so we definitely lean into humor there. And this is working so well, we like it so much, that we are thinking about, okay, how can we bring some of that type of energy, that type of experience, to language, and maybe give better in-the-moment tips? So that's something that we're exploring now. Hugo: Fascinating.
And I presume a big part of being able to expand your offering so much has to do with something you mentioned earlier: the fact that generative AI allows Duolingo to scale content creation, create more conversational practice experiences, and more. I'm wondering what the breakthroughs were that made this possible. And I don't mean, of course, and I think you appreciate this, ChatGPT or that type of thing. I mean, for example, I know that you have agentic workflows that allow Duos to create more content. So I'm really thinking more in terms of the practical stuff that [00:35:00] allows your team to build. Bozena: Yes, definitely having the model in the first place was a big part of it, but really one of the biggest breakthroughs that allowed us to move forward was actually retraining my team to do a lot of this work with AI: really learning prompting, being able to use the new tools and interact with AI. So yes, prompt engineering, starting to build evaluators, and now, as you mentioned, using agentic workflows. Basically my team developed completely new skills, retrained, and jumped really deep into how to use these tools. Of course we've been doing that with other roles as well; everyone was learning to use these new tools. But my team became really skilled [00:36:00] at the things we needed around content generation, and at building things like the conversational features, because they were able to combine their expertise in learning and teaching with the ability to articulate it for the AI. This was really interesting to see, because they were the ones who could articulate: this is what we need, this is what an effective experience requires, this is what this type of content needs; okay, it needs one, two, and three.
They were able to create specific rubrics for the evaluators, something that engineers or others working with AI would not have been able to articulate. That was very powerful, and it changed the role they're playing at the company; it really enabled fast [00:37:00] development of content generation, of the conversational features, of the feedback we give, and so on. For scaling and content generation in particular, there were also breakthroughs that were more operational and technical. We actually rethought course creation from first principles. There was a certain way we were building courses before that was appropriate when we were doing it more manually, when it was really custom-made. Then we decided to redo the whole process: to batch things in a different way, to find different ways of grouping things so that content generation was more effective and we could scale much faster by propagating changes more broadly. So there was very different conceptual thinking about how we should create courses. We also [00:38:00] invested a lot in our own internal infrastructure, building workflows and pipelines that would really give us this scaling machine, and being very intentional about where we keep humans in the loop and where we don't. That's something we're still figuring out, but we are iterating a lot, and that's been very powerful for language. It's also been very powerful for math, where we decided to actually scrap all the content we had before and redo it completely using AI. That again required building the whole infrastructure and rethinking how we even build courses. Hugo: Awesome.
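[Editor's note: Duolingo's internal evaluators are not public, but the rubric-driven workflow Bozena describes, where learning designers state "it needs one, two, and three" as machine-checkable criteria, might be sketched like this. All names and criteria below are hypothetical illustrations, not Duolingo's actual system.]

```python
# A minimal sketch of a rubric-based content evaluator: learning experts
# encode their requirements as named, checkable criteria, and every piece
# of generated content is scored against the full rubric.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Criterion:
    name: str
    check: Callable[[str], bool]  # True if the content satisfies it


def evaluate(content: str, rubric: list[Criterion]) -> dict[str, bool]:
    """Run every rubric criterion against one piece of generated content."""
    return {c.name: c.check(content) for c in rubric}


# A toy rubric for a beginner Spanish sentence exercise.
rubric = [
    Criterion("uses_target_vocab", lambda s: "gato" in s.lower()),
    Criterion("short_enough", lambda s: len(s.split()) <= 12),
    Criterion("ends_with_period", lambda s: s.strip().endswith(".")),
]

result = evaluate("El gato bebe leche.", rubric)
# result -> {"uses_target_vocab": True, "short_enough": True, "ends_with_period": True}
```

In a real pipeline the `check` functions could just as well wrap an LLM judge prompted with the rubric text; the structure of named criteria aggregated per item stays the same.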
And I love so much that you index not only on technological and technical breakthroughs but on workflow breakthroughs and education and training breakthroughs: how much of it is actually aligning everyone on what skill [00:39:00] sets are needed and how to work with these wonderful new technologies. I also love that you mentioned iteration. We've seen the rise of agent skills recently, text files in Markdown with a bit of YAML front matter, and having documents like this that you can actually iterate on with agents is kind of a form of continual learning in AI. Some of the best ways I've seen this evolve so far are when you continually iterate on your workflow, and having agentic systems such as Opus 4.5 or Gemini 3, which are now powerful enough to help you iterate on these things as a thought partner, is actually mind-blowing: you have them as constantly evolving systems, I think. Bozena: Yes. Yes, definitely. Hugo: I also mentioned this earlier: a not-insignificant part of Duolingo's success is motivation and habit formation. I'm wondering what the most important lessons you've learned about keeping people engaged in learning over long periods of time and building good habits are. Bozena: Yeah, lots of [00:40:00] lessons. I would say the biggest is that motivation is not just a layer you sprinkle on top. It really is something you need to think about intentionally as part of the bigger learning design. And we explicitly avoid certain extremes. We don't want to make it too game-like and just lose the learning, and we also don't want it to be too academically optimal. There are definitely some things we could do on the learning side that we know could be very effective, but then it becomes kind of unpleasant and people can burn out.
So really what we're trying to do is stay at this intersection of what's good for learning and what's engaging. I think it's been very powerful to always think about those together, so that it's not "let's build something effective and then try to make it fun," or "let's build something that feels super fun and then we [00:41:00] will figure out how to make it useful." That just doesn't work as well. Instead, what we've found helpful is to think from the beginning about what's going to give us both. And I'll say one more lesson: habit formation isn't just about streaks and trying to get people to do a lesson every day. It's more than that. It's really about building the right experience, one where learning feels natural for people. For example, we've seen over and over again that what leads to better learning and better engagement is shorter lessons that are perhaps more frequent. They make it much easier to sustain the habit, because you can complete a five-minute lesson much more easily than having to sit through something that's 15 or 20 minutes long. [00:42:00] So trying to find ways to fit into people's schedules is just very important. Duncan: Awesome. Well, for a final question: it's obviously a really exciting time in the world of AI, and everyone is imagining how personalized tutoring could dramatically change how we learn and what the world could look like with better learning technology in the coming years. You sit at such an amazing vantage point to be thinking about this mega-space of learning technology, at the intersection of technology, science, product, design, and AI. What gets you most excited? What gets you out of bed in the morning, really energized to tackle the future?
Bozena: I'm definitely very excited about where we're at, with the new technology enabling so many new possibilities. Personally, what I'm most excited about is personalization. [00:43:00] Even though personalization is something we've worked on forever, I feel like with the new technology we can get to a completely different level: get so much smarter, really understand our learners, and, not quite yet but hopefully in the future, generate experiences on the fly that really match people's needs. Personalization is tough, but there are so many different types of personalization that I think we can consider in the future. For example, you could imagine that we teach you the same curriculum but using different topics you might be interested in. Maybe one person is more interested in sports and another in the arts, and we can teach you [00:44:00] the same language and the same grammatical points using very different subjects. Right now it's more predetermined: we're teaching you this grammar with this topic, and that's pretty much what we do for every person. But now, with the capability to generate effectively infinite content, to be much more responsive to every person, and to have a memory of how a person reacts to different types of content, we can get much smarter about how we adapt the whole experience: not just the difficulty level, but the topics, the characters we use, the tone we use. I think this will really help us be even more targeted and more effective in meeting the very individual needs of each learner. Hugo: It's such an exciting time for personalization and learning in general.
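[Editor's note: the idea of teaching one curriculum item through interest-matched topics could be sketched, purely as an illustration, as a prompt template parameterized by a learner profile. The template wording and profile fields below are hypothetical, not Duolingo's actual system.]

```python
# A minimal sketch: the same grammar point is turned into two different
# content-generation prompts, themed by each learner's stated interest.
from string import Template

PROMPT = Template(
    "Write a short $language exercise for a beginner that practices "
    "$grammar_point. Set the example sentences in the world of "
    "$interest, and keep the tone $tone."
)


def personalized_prompt(profile: dict, grammar_point: str) -> str:
    """Render a content-generation prompt tailored to one learner."""
    return PROMPT.substitute(
        language=profile["language"],
        grammar_point=grammar_point,
        interest=profile["interest"],
        tone=profile.get("tone", "friendly"),
    )


sports_fan = {"language": "Spanish", "interest": "soccer"}
art_lover = {"language": "Spanish", "interest": "painting", "tone": "playful"}

# Same curriculum item, two differently themed prompts.
p1 = personalized_prompt(sports_fan, "the present tense of ser")
p2 = personalized_prompt(art_lover, "the present tense of ser")
```

The curriculum (the grammar point) stays fixed while the surface content varies per learner; a memory of how each learner responds could then feed back into the profile.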
Getting the right information and experiences to the right people at the right time. It's [00:45:00] incredible and inspiring how much you've built for this at Duolingo, and how much you've studied what's effective and what isn't. Thank you, Bozena, for such a wonderful conversation and for coming to share your time, wisdom, and expertise with us. Bozena: Well, thank you so much. It's been a lot of fun. Duncan: Thank you. Hugo: Thanks so much for listening to High Signal, brought to you by Delphina. If you enjoyed this episode, don't forget to sign up for our newsletter, follow us on YouTube, and share the podcast with your friends and colleagues. Like and subscribe on YouTube, and give us five stars and a review on iTunes and Spotify; this will help us bring you more of the conversations you love. All the links are in the show notes. We'll catch you next time.