The following is a rough transcript which has not been revised by High Signal or the guest. Please check with us before using any quotations from this transcript. Thank you. === andres: [00:00:00] We have cargo. We have traditional passenger, long haul, short haul. We have our answer to low cost, and we also have an offering for high-end travelers that want luxury. We have a very broad offering. We have 1500 trips per day, right around 100 million customers. We were really small in the data team, and now we've grown eightfold. The team is huge now and encompasses almost all domains in the company. You have everything from material science to finance, going through marketing and pricing. So it's really a lot of different domains working on the same problem. So you have a lot of space to grow, right? To bring in experts in optimization, experts in forecasting and experimentation, right? In marketing. hugo: That was Andres Bucci, Chief Data Officer at LATAM Airlines, one of the world's largest carriers. In this [00:01:00] conversation, we explore how he built an experimentation culture inside a century-old industry, cutting test cycles from many months to just weeks. We talk about the surprising places data creates value at an airline: from detecting fuel savings of only a few kilos per flight that compound into millions, to reducing fraud in one of the highest-fraud regions of the world while keeping loyal customers on board. We dive into why and how LATAM approaches generative AI like software engineering, not research projects, and what happens when the biggest constraint shifts from models and data to human decision making itself. This is the High Signal Podcast, brought to you by Delphina, the AI agent for data science, produced by Jeremy Herman and Duncan Gilchrist. If you enjoy these conversations, please leave us a review, give us five stars, subscribe to the newsletter, and share it with your friends. Links are in the show notes. 
I'm your host, Hugo Bowne-Anderson. [00:02:00] Let's jump in. Hey there, Andres, and welcome to the show. andres: Thanks for having me. hugo: Such a pleasure to have you here, and I wanna get into all the wonderful things you're doing with data, machine learning, and AI at LATAM, but you have such an interesting history and career journey. So I'm wondering if we could start with you walking us through your journey from working at Uber to leading data at LATAM Airlines. andres: Sure, sure. So, of course, I don't wanna do time travel and go so, so far back, but my journey in data and technology starts with entrepreneurship, I think over a decade ago. So me and some colleagues, some of whom I knew from way back, started thinking, okay, Chile is really developing technology, and there is space for companies to do more tailored software and be more engaged in the process. And we founded this startup that was mainly focused on creating iOS and web apps for banking and [00:03:00] education. And from that we segued into offering analysis as well, like some very early machine learning at that time. I think Python was the strong suit there, but we were still scaling, really scaling data. I think Hadoop was popping up, right? So this is really way back. And I went through the Death Valley, but I didn't survive with the startup. We had a good time, and right after that I joined Uber, actually, and I went to San Francisco. I was at Uber for four years. Yeah, I know you come from there, so we go way back to pre-IPO Uber, which was super scrappy. At Uber I joined operations, actually, launching cities, which is a completely different gig than heavy data science. But it is scrappy; you work with a lot of data. I think one of the things that I take from Uber is that I saw the magician behind the curtain when it comes to really understanding digital [00:04:00] organizations at different levels. 
But when I went to San Francisco, that was a completely different thing. Like, impressive for somebody coming from Latin America, with really traditional companies, looking at Uber being this physical-digital product and seeing how everything is managed from this scientific point of view. I think I take a lot of learnings from there. And then I think the IPO date was 2019, Duncan, that's right? duncan: Yep. andres: Yeah. Okay. So ending 2019, right before the pandemic, I came back to Chile and I joined Sodimac. So, for people who don't know it, Sodimac is probably the Home Depot of Latin America. It's huge, actually. It's the largest. And that was super fun because I really liked the home improvement thing; I think a lot of people enjoy that. And working in data there was super fun. A lot of experimentation I brought there. Also, some former colleagues from Uber had this startup for [00:05:00] experimentation, and I brought them in, and we did a lot of fun stuff there, experimenting in stores, which is very challenging but super fun. And then I joined LATAM. And LATAM is huge. It's super fun, and I think it's a complete step up, and the challenges are huge. And yeah, I can talk for hours about LATAM. It's super fun. duncan: You know what's so inspiring, Andres, about your journey is that it's this kind of mosaic of different kinds of experiences that probably make you uniquely well situated in a business as deep and complex as LATAM. Maybe actually, for listeners who aren't familiar, you can talk a little more about what LATAM is, and more about the business itself and the types of challenges that you solve. andres: Yeah, sure. Okay, so first, I'm a bachelor in business, by the way, and I do data, like, hands-on, and now leading teams. So, okay, LATAM is, I think, the ninth most valuable airline in the world, right? [00:06:00] And it was broke. I think we went out of Chapter 11.
We ended that process in 2022 or 2023, right? So it was a really difficult time, the pandemic. A lot of airlines went through this, but I think LATAM is really pragmatic and performs amazingly. The teams are really good, full of smart people, really good at their job. And LATAM is the largest airline in Latin America. We have cargo. We have, of course, traditional passenger, long haul, short haul. We have our answer, our proposal for low cost. And we also have an offering for high-end travelers that want luxury, or to feel comfortable. We have a very broad offering. We have 1500 trips per day, around 100 million customers, or a bit short of that. So it's a country. It's enormous. hugo: Fascinating. And I'd love to know, Andres, what the state of the data function looked like [00:07:00] when you joined, and what were the first opportunities and low-hanging fruit? And without leading the witness too much, my understanding is you joined mid-2022, and so to set the scene, that was two years into the global pandemic, which would've been an interesting time for an airline, to say the very least. andres: Yeah, actually this was a conversation with my wife. For some background: Sodimac was considered a basic-needs company in Chile. This meant that stores would remain open, et cetera, and there was a lot of online commerce opening up, and I think Sodimac had jumped on that wagon really quickly. So it was doing great, right? Given the difficulties, of course, there were a lot of difficulties, but it was doing well. And then one day I come to my wife and say, hey, so I wanna move to LATAM. And she was like, okay, isn't that airline, like, broke? Right. I was like, yeah, but [00:08:00] they have an amazing plan. I met the people there, they have the right mentality. They wanna achieve something that's never been done. Yeah, I wanna join. And she was like, okay, are we sure? Yeah. So we decided. So actually moving to LATAM was a discussion in my family.
My wife's really supportive about this, so thanks to her. But yeah, for me it was the right move. When I got there, we were really small in the data team, and now we've grown by, I would say, about eightfold, something like that. The team is huge now and encompasses almost all domains in the company. Another quick note there: for people not familiar with airlines, you have everything from material science to finance, going through marketing and pricing. So it's really a lot of different domains working on the same problem. So you have a lot of space to grow, right? To bring in experts in optimization, experts in forecasting and experimentation, [00:09:00] right? In marketing. So it's easy to grow and to extract value. But yeah, that's the state of data at LATAM when I got there. Okay, so expanding on that: one of the characteristics that makes LATAM stand out, I think, is that for some reason or another they decided long ago to keep the data. They were like, we might want to use this at some point, so let's store it. I think that we are the third largest consumer of data storage in Google Cloud in Latin America. So that's to say a lot. You have MercadoLibre within that list, so that sets the bar pretty high, right? MercadoLibre is like the Amazon of Latin America, I don't know if you know that, but they're also huge. And that leaves you really, really good groundwork done to start monetizing the opportunities. So that's what I found when I landed at LATAM. The state was: we have a lot to work with, we have a [00:10:00] lot of raw material, raw talent, but the operational model was a bit off. And that's where we went from 70 people to 600 now, let's say. So really staggering growth. duncan: It's so cool to hear how much impact you've had in kind of a traditional, a very large kind of traditional type of business.
Maybe you can talk a little more about some of the specific, important ways that data, ML, and AI deliver value at LATAM. andres: Yeah, sure. Okay. So the first thing is that experimentation was something that was not really common, right? And I think that's probably a common factor among companies that are not digital yet or are on kind of a digital journey. They really don't understand the value of experimentation yet. So we did push for this change to happen, and we created an organization that was partly in charge of the experimentation program, and that opened [00:11:00] the door, because now we started building a huge stock of hypotheses, right? And we knew that there was volume in some places, and that started piling up. Operations cost billions of dollars in airlines. So just a single-digit improvement in percentage is a lot of money, right? The same with sales. So it's a huge problem. And when we had this, we started growing across the board, once we had the hypotheses stacked up. And the reason is, you'll hear in Latin America, at least, that chief data officers last for around two years, and then they need to switch companies. And I think the reason is it's difficult to deliver value really quickly while you build the groundwork, right? So I had the benefit that a lot of the groundwork was already there when I got to LATAM. But the operational model was really aggressive: [00:12:00] we are going to create teams across the board, or in a lot of domains in the company, and we're going to be very oriented to capturing the value, monetizing quickly, and then we will reinvest those gains as risk capital to grow further. We can raise the minimum standards of what data scientists, data engineers, and analysts are capable of doing at LATAM over time. And that worked. It worked fairly well. Experimentation is only driving more and more value.
Adoption is higher and higher. People understand the value of experimenting. We are becoming very sophisticated in some stuff. And some highlights, I would say, of what we've done: I think you have a lot of very important places where you can extract value in airlines, but the larger concept is, when you get $1 inside your company, like we make a dollar, then it's up to us whether we keep [00:13:00] a larger or smaller portion of that dollar at the end, because efficiency, operational efficiency, is key. So that's a huge space. Anything from maintenance to actual day-to-day operations: for all of those, if they work well, the difference efficiency-wise and security-wise is astonishing. We are a very safe airline. I think it actually is the first objective in the whole company: safety first, right? And data really helps there, because when a plane lands, I think it's 1.5 to 2 terabytes, an enormous amount of information, and going through that data in a structured way, exploring what one might want as insights, et cetera, that's really difficult, and there's no way to do it without kind of a modern data stack. And on the other side, you have sales: decreasing the price, or making better prices, making the right offer, bundling, [00:14:00] ancillary sales. All of those are really complex network problems. There's a lot of network effects. Having experimentation there is key, but it's really difficult. And Duncan, I think you can speak to this for hours, right? You had the experience firsthand at Uber. duncan: Yeah. I find actually building a culture of experimentation is way easier said than done in practice. Often non-scientists and non-technologists really like making decisions quickly, and like moving fast, and not necessarily being careful to set up an experiment, have an A/B test with a control group, and then measure, and sometimes be wrong about their hypothesis in a public way.
So actually, I'm curious, how did you create that culture? It sounds like it's really worked. How did you start to build that? What processes, what techniques did you execute on? andres: I think at first I had the benefit of being the new guy, right? So people trusted my opinion, I think. Okay, we made the investment to bring in this guy; let's at least hear [00:15:00] him out for a year or so. That helps. Yeah, that helps. So they were willing to take the leap, but of course they wanted results for LATAM as a whole, right? Not for particular people. And it's funny that you point it out, because it feels exactly like you say. So you would go into more senior leadership meetings and present experimentation as: this is a huge enabler if you wanna go fast, right? You start quickly, you implement quickly, you find out what's going on, whether things are right or wrong, whether you should say, we're gonna let this be for a couple of years until the technology is better, and then we're going to rather invest here, move the teams there. And that is something that you enable through experimentation. But then you go to the meeting, like, okay, so we are trying this, and until we get really strong signals, given blah blah, it's going to take three months. And they say: wasn't this supposed to be quick? Yeah, it is quick. [00:16:00] You have to be very technical, and you get better as you run more and more experiments. So that shrinks really quickly, but it doesn't go to zero. You need a couple of months sometimes, maybe weeks sometimes, right? It depends on the amount of data you have, how clean it is. But you have to make sure that people understand, for a couple of sprints, that this is actually something that delivers. And I think that's the really tricky part, when you come with this dual message, or contradictory message: this makes you faster.
And, no, you have to wait until we get significance and some power. Like, okay, this is not what you said would happen. Yeah, I think that was the trickiest part. But once you have some wins... I'm gonna quote my CEO here: okay, if you wanna go build a Ferrari, I don't want you cruising by the shoreline, like, hey, take a look at my Ferrari. No, no, you're gonna go into Le Mans, I want you on the podium, and you need [00:17:00] to do this fast, and we need to see the evidence. So we started with revenue management, actually, which is, I think, pricing in network-effect scenarios, and it's incredibly complex. You have the problem that an airplane has different pricing for different categories of seats, right? So you need to also bake that in. So for one airplane you can't really look at specific pricing. You need to price by routes and experiment by routes, but then also you want representation, right? So there is this huge mix. And we brought in really talented people that understood the state of the art. Some people from Booking, actually, working with us, some people from Uber. We really brought in heavy hitters, and we managed to tackle that, actually shrinking experimentation times from months, maybe twelve or eight months, down to two months, three months. And if you get earlier signals, you can actually kill experiments that are pointing towards really ill [00:18:00] places, like: this is not working, no way it's working. And then you just go on to the next hypothesis, fine-tuning stuff, et cetera. So it started working, and there you got revenue management really convinced of this, and that's a very technical team. So I think that kind of snowballed towards other teams. Then we went into operations. But again, if you wanna explore more things, you can talk about the operations part for five minutes straight just laying out the problem and how technical it is.
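The tension Andres describes, "quick" experiments that still need weeks or months of data to reach significance and power, falls out of standard sample-size math. Below is a minimal sketch of that calculation using the usual normal-approximation formula; the effect size, noise level, and daily volume are illustrative assumptions, not LATAM figures:

```python
import math

def required_n_per_arm(delta, sigma):
    """Approximate sample size per arm for a two-sample z-test that
    detects a mean difference `delta` under per-unit noise `sigma`.
    Uses fixed z-values for alpha=0.05 (two-sided) and 80% power."""
    z_alpha, z_beta = 1.96, 0.84
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative numbers only: detect a 1-point lift on a noisy
# per-booking metric, with ~200 usable observations per route per day.
n = required_n_per_arm(delta=0.01, sigma=0.5)
print(n, "per arm, i.e. roughly", round(n / 200), "route-days per arm")
```

Note that halving the detectable effect quadruples the required sample, which is why cycle times shrink as instrumentation and metrics improve but, as he says, never quite go to zero.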
But yeah, if you wanna go on that, I'll happily explain it. duncan: That's a really beautiful example, because you obviously honed in on a super high-value set of use cases for the business that are highly measurable and, presumably, when it goes well, transparent in the business results. Like, you see revenue go up in revenue management when done well, and that makes the case easier to keep going. andres: Yeah, exactly. So revenue is one; fuel consumption is the other. duncan: Right. andres: So that's also: where are we going to drive this Ferrari next, right? [00:19:00] So the fuel part, it's tricky, because, of course, the final decision is always made in the cockpit. The captain has the final say on how much fuel, who's on the plane, how much load, right? And that never goes away. It's always there. But if you take a step back and look at the aggregate number, small differences all add up quickly, right? So let's say that you put on extra fuel just in case you need to divert to a different airport, which happens, right? But how much extra fuel is really required? That's also optimizable, actually optimizable with a huge margin of safety, right? So there's no way that you are going to run into any risks. There is a big gap there that you can optimize. But once you figure out this optimization, or how high the airplane goes, how low, in specific places where you have tailwind, headwind, densities, a lot of variables, right? How do you really tell [00:20:00] that you did better, when the difference per trip, or flight, or leg might be marginal, right? And marginal can be, I don't know, 20 pounds, 50 pounds, right? But you have 1500 flights per day, right? And that adds up really quickly, like, a lot. So we ended up creating this system that experiments within, again, months, which is a really short time span, and we are able to detect differences of 12 kilos. So 12 kilograms is like 25 pounds, right? On a route, right?
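The compounding described here is easy to sanity-check. A back-of-the-envelope calculation using the figures from the conversation (12 kg per flight, 1500 flights per day); the fuel price is an assumed round number for illustration, not LATAM's actual cost:

```python
# Back-of-the-envelope on the figures from the conversation.
# The fuel price is an assumed round number, not LATAM's actual cost.
saving_kg_per_flight = 12      # the detectable per-flight difference
flights_per_day = 1500         # network volume mentioned in the talk
fuel_price_usd_per_kg = 0.80   # assumption for illustration

annual_kg = saving_kg_per_flight * flights_per_day * 365
annual_usd = annual_kg * fuel_price_usd_per_kg
print(f"{annual_kg:,} kg/year -> ${annual_usd:,.0f}/year")
```

Twelve kilos per flight works out to roughly 6.6 million kg a year, which at that assumed price is on the order of $5 million annually, before counting the emissions avoided.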
Statistically significant, really high replication power, really good technology. We haven't published anything yet on that, because it's still something that we are figuring out, maybe a bit of secret sauce. But we also have this PhD program, if you wanna... I know that you're interested in that one, so maybe let's defer that talking point to later in the conversation. Yeah. But that's another example of [00:21:00] huge impact. hugo: That's an incredible impact. And one thing, particularly when thinking about experimentation: my understanding is LATAM uses zero-based budgeting. And I'm interested, particularly with a culture of experimentation where value may not be obvious immediately, how do you demonstrate ROI and secure continued investment in all your data, ML, and AI initiatives? andres: Risk. So it's rather risk investment. So first, to explain what zero-based budgeting feels like: you have a certain level of confidence that there are some things that bring in value and are necessary, but you still discuss them, right? But once you are over that tier, then you go to another tier. Let's say: okay, these things work, but maybe they won't scale, we're not certain, et cetera. So those have a different level of risk. So you need a different set of approvals to get them into your budget. And then in the long tail you [00:22:00] have risk capital, right? So mostly capital investments that either are kept, and included in your operational expenses in the long run, or not. And most of our advances in experimentation and other technologies, generative AI is one of those, start through capital expenses, right? And that means that you need to diversify. So usually... I think that one big mistake that traditional companies make when it comes to these kinds of technologies is not recognizing that there is no real big cost in making a mistake, in the sense of: I need to maintain the mistake.
No: if you are early, throwing away a ton of code doesn't really impact your bottom line, right? So if you're quick in iterating, you actually can bring the risk really low, right? So you have that on your behalf. If you have a bigger team and you have kind of broader talent, not that heterogeneous, like having really good people in a lot of places, [00:23:00] then you can diversify. So you usually think more of a program, right? So you might have the program with the first attempt in revenue management, right? But you understand that your investment looks forward to the next six months, right? Where you might get one, two, three, maybe four attempts, and probably one of those works, two of those work, but they pay for the complete tab. And once you have that evidence, it's not about whether it would work; it's: okay, how do I take this to a place where the risk-return balance is a bit better, so that I can include it in my budget with fewer conversations? Not so much risk capital, but now: okay, this is something worth exploring. Not there yet, but once you have those steps, it's easier, quote unquote, right? It's still difficult. It's a lot of time, but you get familiarity. Like, finance starts thinking, okay, risk capital in technology looks like this. That's fine, right? [00:24:00] But again, you get the high bar always. So it's a lot of work, but I like zero-based budgeting because it forces you to do things this way, and it also opens the door to this risk-capital approach. Yeah, that's how it feels, at least. duncan: Speaking of high-value use cases, you talked a little bit about revenue management, about fuel optimization. I think something you also talked about previously with us was around fraud detection and how valuable that has been for you all. Maybe you can talk a little more about what that's looked like. andres: Sure. Okay. So I had
different experiences, not only at LATAM but outside of LATAM as well, with fraud. I think one metric that I'm not really proud of in Latin America is that I think we are second, to Southeast Asia maybe, in the volume of fraud as a share of e-commerce. The number is staggering. I don't remember the latest number, but people, while listening to the podcast, can actually look this up. I think the [00:25:00] number is around 30%; the estimations, maybe 20%. Yeah. But if you only see 10%, how many false negatives are you actually talking about? You really don't know. In the case of credit cards, I think it's easier, in the sense that the system is really well built for you to know that there was a fraudulent transaction. But in the moment, it's really difficult, and with chargebacks, et cetera, it does affect you, right? Even if you get zero responsibility for actually covering the cost (let's say that, which is not the scenario, but let's say it is), you still have a seat taken out of inventory, so it's still money that you lost, and that's a huge problem. So you have external vendors that provide services, and I think they are consistent, and they are serious companies. But usually, once you get the prediction of "this is fraud" or "it's not fraud", [00:26:00] there's a huge space for improvement. And in this market where there's a lot of fraud, you usually get more false positives. That's on the safe side, on their side. And why? Because the false negatives show up on the balance sheet. So they're not stupid; they're actually playing on their own behalf, which is fine. But when you start looking at the data: I actually prevented paying customers that want to be loyal to my brand. I told them, no, I don't wanna accept your credit card, or whatever. And that's really unfair. So that's where we spent a lot of research, right?
Understanding how we can improve the false positive rates, right? And that's tricky also, because you need to be smart about how you experiment with this. You can do some kind of counterfactual design that's robust and intelligent, but experimenting is really not something you want to do at broad scale, [00:27:00] and not only for the financial impact. Once you see that there's evidence that you're actually losing money, either by not selling or by actually paying for chargebacks, for instance, it's something that you need to be careful experimenting with, right? I think that's even ethical: you never know, somebody may need to fly tomorrow, like, urgently, because something happened, and they just tick all the boxes for fraud, right? And we've invested a lot of effort in making sure that that's not something sustained. And I think we've been fairly successful in this. And the difference in money, if you wanna measure it that way, is a lot of digits. Millions and millions. I can't really say the numbers, but it's a lot of money. And it's money, but I think the good thing is that we are making that additional effort to make sure that we are [00:28:00] safe. Security is number one, of course, but we are also very fair and transparent with our customers, and we try to go the extra mile there. The other experience I had was at Uber, actually. Fraud was a huge thing. I think we are way past my NDAs, so I think I can speak in more detail, and probably things are completely different now. But at some point, for you to get an idea, Duncan, the P&L for an emerging market in Latin America was the size of how much money you could save in fraud, because of the give-gets and the promos, the volume of promos, right? So you're talking about tens of millions of dollars, right? And it was ridiculous.
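The false-positive versus false-negative trade-off described above can be framed as an expected-cost problem: a vendor that only pays for chargebacks will sit at a stricter threshold than an airline that also loses loyal customers to wrongly declined cards. Here is a toy sketch of threshold selection under asymmetric costs; the scores, labels, and both cost figures are invented purely for illustration:

```python
def total_cost(threshold, scored, fp_cost, fn_cost):
    """Cost of blocking every booking whose fraud score >= threshold.
    `scored` is a list of (score, is_fraud) pairs."""
    cost = 0.0
    for score, is_fraud in scored:
        if score >= threshold and not is_fraud:
            cost += fp_cost   # declined a legitimate, loyal customer
        elif score < threshold and is_fraud:
            cost += fn_cost   # chargeback plus a seat lost from inventory
    return cost

# Toy scored bookings and invented costs, purely for illustration.
scored = [(0.10, False), (0.20, False), (0.35, False), (0.60, False),
          (0.55, True), (0.80, True), (0.90, True), (0.95, True)]
fp_cost, fn_cost = 120.0, 400.0

best = min((total_cost(t / 100, scored, fp_cost, fn_cost), t / 100)
           for t in range(0, 101, 5))
print(best)  # lowest total cost and the threshold that achieves it
```

Raising `fn_cost` relative to `fp_cost` pushes the optimal threshold down (block more), which is exactly the vendor's incentive; pricing in the lost loyal customer pushes it back up.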
And you see then that there is this huge tech industry behind that very illegal industry, which is smart and adaptive. I think there was some American actor... I want to say it was Woody Allen who said organized crime spends very little on office supplies. [00:29:00] So, yeah. When you're working fraud, yeah. But that's a huge one. duncan: Wow, those numbers are mind-blowing. hugo: Yep, absolutely. And it's also so heartening, once again, to hear about so many business problems being solved, and so much ROI, with classic ML and good data, high-quality data, and all of these things. I am interested in how you are thinking about generative AI these days. Perhaps we could start with the most effective ways you've deployed it so far, and then what your thinking is, vision-wise, for the future of generative AI at LATAM. andres: Okay. Yeah, big question, right? So I think organizations are still very dizzy on how to tackle this. And as technology progresses, I don't know if it becomes any easier. You get more evidence, but adapting this to how a big company operates becomes really challenging, right? So I think you need to make an educated guess initially about how to approach this. And one of the things that we did well, I think, [00:30:00] is we looked at ourselves and understood how we would want generative AI to play out, right? So we didn't want high costs, right? We wanted marginally decreasing costs, actually: as we progress in our generative AI journey, things should cost less and less. What also helps is that you are not carrying this huge bag of mistakes, right? You are okay with throwing them out, right? And that becomes increasingly easier, even as these systems become increasingly more complex, right? That's one. The other one is, in Latin America there is a shortage of talent. I mean, there is talent, but I think that the distribution curve is
nothing to be compared with San Francisco, of course, but also if you go to countries like Brazil, or if you go to Europe, there is a completely different distribution of talent. In Europe you have an overflow of PhDs and masters because of how education works there. In Latin [00:31:00] America, it's imbalanced towards the demand side: more is wanted than there is to actually grab in the market. So we needed to deal with that. And one way to tackle this is: okay, you probably don't have so many data engineers, data scientists, machine learning engineers, but there's a pretty solid market of good developers, solid software engineers. So we thought, okay, how can we software-engineer this new AI branch? And actually, we've pinged about this idea offline with Duncan: where is build versus buy going to land here? And I think it's going to look more like software than like AI itself. For some specific cases you probably want your AI technology really close to the chest, but I think the predominant one is going to be: let's think about processes, let's think about operations, let's [00:32:00] think about applications, which sounds more like software engineering. So we took that approach. So what we did is: we have our proprietary machine learning and data operations platform, and we just extended that, right? So now anybody at LATAM has access to opinionated ways of creating parsers, or chatbots, or more sophisticated agentic applications, but you don't have access to just anything, right? You can create more sophisticated applications, but you usually have this kind of cookie-cutter approach. And the thing about this is that software engineers love cookie cutters, right? At least the ones that want to be productive; they love to get started from a version one that somebody just delivers, right? And it picked up.
And from that onwards, we are just trying to keep up with how the technology progresses. Now we have rolled out our integrated suite of VS Code, called Code, [00:33:00] where we have actually the same approach: you talk to the IDE, and the IDE has an opinionated way of creating your first version. Of course, that restricts you from some specific stuff you might want to do, but it covers 95% of the problems, and it makes things go faster. We integrated that with ChatGPT and any single model that we want security to approve, right? We are really regulated. We are not only an airline; we are listed on the NYSE, so we are very regulated. So security has to approve everything, but now they just deploy these approved models into the environment. So that also is easier. Cost control is easier, everything. But again, we made an opinionated decision about tackling this as software rather than AI. Yeah. So, the examples, I have a ton, if you wanna hear some fun examples. So [00:34:00] one of the things we realized is, we started investing in three main areas at first, right? Personal productivity, which would be kind of ChatGPT, and we actually have ChatGPT licenses; they're a partner, but we also have Google, a lot of stuff, but we use developer tools and personal productivity tools a lot, right? Then you have process optimization, let's say AI plus automation in processes, and there is a huge value pool there for, I think, every company. And then you have AI-embedded experiences, right? So better experiences in, I dunno, the website, or through your app, or in the contact center. And they're completely different: the same technology, but the approaches are different. So one that's super fun to explore is automation, right? Because the opportunities are huge. So one of the things that we had trouble automating is KPIs, which is weird, right?
[00:35:00] But sometimes you get really confusing feedback from customers about changes that you are really not able to see, for instance in the app or the website. You deploy something and everything looks fine and nifty, but in reality it's just going south slowly, not visibly. So what we did is we automated the complete process of extracting KPIs from conversations. We get a ton of calls and written feedback, and that's a huge volume of unstructured data. So one group of data scientists created this AI that goes over conversations, feedback, et cetera, with an instruction. You say, I'm interested in creating this KPI that points out blah, blah, blah, and it figures out a plan and says, okay, I'm gonna go and find this data, and it just goes, right? And then you have concrete metrics about, let's say, performance [00:36:00] in the payments flow, and then it signals, oh, there's a problem here. And you look at this and say, wow, this is a huge time saver, rather than using traditional pipelining processes. So that one, I think, is really cool. The other one is processing, uh, payments. Airlines buy a ton of stuff. If you go to an airport and you find, I dunno, this little kind of divider thing with a cord, right? There's a big chance that's not property of the airport itself. It might be property of a third party that has operations and serves that operational service to the airport and to the airline. So there is an ocean of companies providing services. You get a ton of invoices, and invoices are not structured. There's no way to connect that to: did we actually spend that? What's the driver? What does the contract say? All unstructured. Also super fun. AI, when you really narrow [00:37:00] it down, the performance and the decision paths are really good at this. So that's super fun.
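The KPI workflow Andres describes, pointing an AI at a pile of unstructured customer feedback with an instruction and getting back a concrete metric, might be sketched roughly like this. Everything here is hypothetical, not LATAM's actual system: the function names are made up, and the `classify_feedback` helper fakes the LLM call with keywords so the sketch runs standalone (a real pipeline would send each item to a hosted model with a prompt).

```python
from collections import Counter

def classify_feedback(text: str) -> str:
    # Hypothetical stand-in for an LLM call. In a real pipeline this would
    # prompt a model, e.g. "Which part of the journey does this feedback
    # describe a problem with?"; here keywords fake the classification.
    text = text.lower()
    if any(k in text for k in ("payment", "card", "charge")):
        return "payments"
    if any(k in text for k in ("seat", "boarding", "check-in")):
        return "airport"
    return "other"

def extract_kpis(feedback_items: list[str]) -> dict[str, float]:
    """Turn unstructured feedback into a per-topic complaint share."""
    counts = Counter(classify_feedback(item) for item in feedback_items)
    total = len(feedback_items)
    return {topic: n / total for topic, n in counts.items()}

feedback = [
    "My card was charged twice for the same ticket",
    "Payment page froze at the last step",
    "Boarding was smooth, great crew",
    "Loved the flight",
]
kpis = extract_kpis(feedback)
print(kpis)  # payments share is 0.5; drift in this number over time would
             # flag a slow, not-visible problem in the payments flow
```

The point of the sketch is the shape of the system, not the classifier: once each unstructured item is mapped to a structured label, ordinary aggregation turns the stream into a KPI you can monitor for the slow degradations Andres mentions.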
Like, you get a huge amount of data and then you see structured decisioning and really good insights. Again, surprising stuff. Duncan, you've worked in tech for a long time. If you go back, let's say, five years, a team that would build this would be composed of, I dunno, NLP experts, vision experts, top-notch machine learning engineers, and data science and AI experts, and it would probably set you back, I dunno, $2 million a year. Right now it's cents on the dollar. duncan: It's really amazing, right? The change in what small teams can do is dramatic with AI. I'm actually curious to maybe double-click a little bit on that, knowing how much the world is changing now and what generalists can do with the right tools. What are the biggest challenges in scaling data work today [00:38:00] for you, and how are you addressing them? andres: Yeah, I love the question, because it gets right down to the point of what's the next step. Like, GPT-5, GPT-6, now Claude is going to answer, right? And you have the same from Google, and then the open source community is going to answer, and everything is just improving. And then you see, okay, so 2026 is a much harder challenge. But where is the bottleneck? One thing that I said before, and that's probably worth double-clicking on: like I mentioned, you have personal productivity, automation, and experiences, right? Right between personal productivity and automation is delegation, where you are thinking of a collaborator, an AI that you would happily add to your org chart. You would recognize that this AI does this, and does it really well. But what's the shape of that? That's tricky. So if you [00:39:00] try to expand that in order to solve it, you have two approaches, I guess. One is thinking about, okay, what does, I dunno, a chief data officer do?
So I can probably lay down five or six big things that I do, and each of those has specific tasks, and I can chop that up, right? And current AI, through personal productivity and probably connections to ecosystems, like we connected ChatGPT to our data ecosystem, that formula works really well for solving the specific tasks. It scales up, it's easy to observe, you get good debugging, et cetera, and I save time. Actually, we have a ton of stuff. We have AIs that create presentations following LATAM styles and everything. It just goes through thousands of potential slides and picks the best story, blah, blah, blah, and you have the visual hero really quickly, and that's a huge time saver, soon to be rolled out [00:40:00] massively. But that's one example. Then you have the other approach, saying, okay, but there are some things that are really sophisticated. Even if I try hard, I won't have enough talent to cover that ground. For instance, I gave you the example of safety. I think that's a really good example. You really want to be thorough in every single step of a flight, because you wanna catch everything early, right? And I think that we are really good at that, but there's always better in that space and there's no end. You'd rather go all the way, but that's technically really impossible. But if you go with AI and you try to tackle that, and maybe other stuff with that, you'll see that the bottleneck becomes decisions. So what does this mean? If I choose to delegate decisions to the AI, I can just let it be and just say, GPT-5, launch, right? If you look at all the demos of long-running processes, they all delegate the decisions completely. Just do this, and they [00:41:00] wait it out, and it's a good result, great. But if you give more complexity to the problem, then you really want to start looking into the decisions, right?
So you'll probably start saying, okay, I like what you did, but explain it. I want to decide if it's a good idea or not. So who becomes the bottleneck? Us humans. Teams, we are organized in a way that maybe is not the most effective way to do that. Maybe you want the decision makers being in charge of the AIs, or being the main users of the AI, and not one hierarchical reporting line below them. I don't know how that's going to play out, but I definitely know that the current shape of organizations and the traditional definition of your role is not the correct one to actually tackle this decision problem, which is exponential. It doesn't scale through this traditional approach. Yeah, [00:42:00] so I think that's the weak point in the chain. hugo: I love that framing: that the biggest bottleneck now isn't the data, it isn't the models or the modeling, it is actually the human aspect of decision making. Particularly in a world where we're talking about models this, models that, agents this, agents that, and all those things are incredibly important, of course, but recognizing, putting it once again in perspective: it's about how we as humans collaborate with the machines and models, and impact business with respect to that. And I know that's something Duncan and the team at Delphina are thinking about and working on a huge amount, in particular helping data scientists take the drudgery out of all the things they do and delegate that to the machine, so data scientists can do higher-level work and deeper work. And forgive me if I've paraphrased you slightly incorrectly, Duncan, but that's part of the general theme, right? duncan: Yeah. I think often, when you have a new tool, you realize, oh, the thing I was doing yesterday I no longer have to do, and you're, for a second, kind of concerned:
What am I gonna do next? [00:43:00] But then you often realize that you can actually elevate the work you're doing in a significant way, use more critical reasoning, and do much less of the plumbing that today occupies a lot of time. hugo: Yeah. We actually had an episode a while ago with Tim O'Reilly, of O'Reilly fame, based on an essay he wrote called The End of Programming as We Know It. Part of that message, and I'll link to that in the show notes, is that of course certain things, certain skills, are gonna be, quote unquote, deprecated, but there's gonna be an explosion of what software and data actually mean. So Andreas, I'm wondering, it'll be time to wrap up in a minute, but in terms of this bottleneck now being decision making and not the data or the models, what are you most excited about for your work and your team's work and the future of data, ML, and AI at LATAM? andres: I think that I'm lucky to be alive at this time, right? And in my position. One thing that I think is, it's a page from a different [00:44:00] book that I took with me, just comparing notes with other people leading the industry, not this one, but AI. And what I saw as a common pattern that really helps is connecting the dots, right? Mm-hmm. To think about how far the dots go. I can speak to a single moment in our research, all my trips to San Francisco and my conversations, around a single moment where I was like, okay, I know where this is going and how it's going to get more complex, right? So this happens within, let's say, two months' time. We started giving ChatGPT access to our data ecosystem. That's really difficult because of permissioning and a ton of stuff, but once you figure that out, AI can perform a lot of operations in your data ecosystem, just like a data engineer or data scientist.
And we started talking about different problems, and there was this case with [00:45:00] route optimization for cargo, right? And I said, hey, I have this route, figure stuff out, this kind of open-ended question just to see what happens. And it came up with some reasoning steps, and it realized that there was potentially a space for optimization. It figured out that this route was overpaying, at one of the intermediate points, by 500% in comparison to different alternatives, which is a lot when you talk about cargo displacement; in this case it was cargo. And of course the team checked this right away, but the first reaction was, this is probably wrong, right? There's no way this is real, it didn't work, et cetera. But then one of the leaders in cargo said, I still wanna check if this is actually true. And they pulled the thread, and they realized that the AI had covered a lot of ground. It took a while, but it figured out [00:46:00] something that we weren't able to figure out. For me it was like, okay, point one: this is really useful. Point two: it's better than us in some aspects, concretely, not just in the benchmarks. And around the same moment, I had this visit to OpenAI, and I was lucky enough to be there in the room, and someone was talking, I think this was around the time of GPT-4, before the o-series, and he was talking about the future of this stuff. I luckily got to ask a question, and what I asked was, okay, you're talking about the impact on companies; to us, we buy stuff from you, but you're introducing a new tool to society, right? So you're gonna affect my customers as well. I wanna hear your thoughts. And he said something: the last human being to be smarter than an AI at any point has already been born.
So there is a person [00:47:00] now living who, as they progress in their cognitive maturity, et cetera, is always going to be less capable than the best-performing AI. And I think that's evident now, right? But if you connect those two, what I figured was, okay, so over the next five years I'm gonna start hiring people right out of college, or within their first years of experience, and those people will be overshadowed completely by AI. There is a chance of that happening, right? But I still have the decisioning problem. I still have the culture problem. I still have the ethics problem, which can't be left out of the equation. That's when I realized there is a huge gap that, organizationally, technically, technologically, philosophically, needs to be bridged, and nobody's gonna do it for us. And I think that's why I love LATAM: people understand this. It sounds like an abstract challenge, right? But if you think about it, it's not. It's [00:48:00] right around the corner. The signals are there. That's one of the things that gives me goosebumps, right? I know people that have been working on this. I know the people from Uber AI, I remember the team, and they were working on this kind of stuff. And then you see the speed of progression, and it's insane. I saw the change in ticket management when we were at Uber, and it just disappeared, like, overnight. And there's something that big going to happen, and people are going to shift their talents towards something, I don't know what it is, and everything is going to work incredibly well, but those are the things that we need to figure out. Yeah, so again, a very deep answer, but from my position, I'm concretely doing stuff, and I need to buy stuff and I need to get ROI, otherwise I'm not funded, like, from that position.
It's still something that I look at and say, this is very concrete. There is something there. hugo: Yeah, you [00:49:00] are absolutely right. Firstly, I totally agree that these are very practical questions, not just theoretical and abstract, and I really appreciate you elucidating all the practical considerations and how we can think about getting all our people working well with all the fantastic new technologies being built. Andreas, I just wanna say thank you for such a wonderful, thoughtful conversation. It's so inspiring and exciting to hear all the amazing things you're up to at LATAM as well. Thank you. andres: No, I'm really honored to be invited. I know that the list of invitees you have is actually astounding, so I feel really honored. hugo: Thank you, Andreas. Thanks so much for listening to High Signal, brought to you by Delphina. If you enjoyed this episode, don't forget to sign up for our newsletter, follow us on YouTube, and share the podcast with your friends and colleagues. Like and subscribe on YouTube, and give us five stars and a review on iTunes and Spotify. This will help us bring you more of the conversations you love. All the links are in the show notes. We'll catch you next time.