The following is a rough transcript which has not been revised by High Signal or the guest. Please check with us before using any quotations from this transcript. Thank you.

eoin: [00:00:00] All of my degrees are in computer science. I was an algorithm designer by trade, but I feel I've almost got a second education in economics after working with some phenomenal economists at Uber. One of the things that I really learned to appreciate in my time at Uber is the difficulty and nuance in measurement. Not so much that you can get your measurement wrong, but that if you're not careful, you can get the sign of your measurement wrong, so that if you're not very careful about how you set things up to get a measure on something, you can end up doing the opposite of what you should be doing. I think many of my teams at Uber used to be quite frustrated with me, but I would block a launch on something, even if the metrics looked positive, if we couldn't explain why. I needed to understand mechanically: why is this doing something that we believe is good?

hugo: That was Eoin O'Mahony talking about how easy it is to get measurement completely backwards, and why positive metrics don't mean much if you can't explain what's really going on. In this episode of High Signal, I speak with Eoin, whose career spans some of the most interesting real-world applications of [00:01:00] algorithms and data science in the last decade. We start with how he helped redesign New York's Citi Bike network through smart incentives and overnight rebalancing. Then we move to Uber, where Eoin spent eight years leading data science teams across rider-driver matching, surge pricing, shared rides, and eventually the entire delivery marketplace. Today he's at Lightspeed, bringing that same systems thinking to venture, using gen AI and structured and unstructured data to understand what makes great companies scale. We talk about measurement, experimentation, modeling messy systems, and why "solve the problem, don't sell the solution" is the through line in all of it. If you enjoy these conversations, please leave us a review, give us five stars, subscribe to the newsletter, and share it with your friends and colleagues. Links are in the show notes. But before we jump in, let's just check in with Duncan from Delphina, who makes High Signal possible. So I'm here with Duncan from Delphina. Hey Duncan.

duncan: Hey Hugo. How are you?

hugo: I'm well. So I thought [00:02:00] maybe we could just start by you telling us a bit about what you're up to at Delphina.

duncan: At Delphina we're building AI agents for data science, and through the nature of our work we get to meet with lots of leaders in the field. And so with the podcast we're sharing the high signal.

hugo: Awesome. And we just shared a clip with Eoin, and I know you think about a lot of the things that I chatted with Eoin about, so I'm wondering what resonated with you.

duncan: Eoin and I have been good friends for nearly a decade. We were partners in crime throughout my five years at Uber. And the measurement issue he raises here cuts deep. Some people say the fundamental question of rationality is: what do you think you know, and how do you think you know it? I think about that a lot as a tech leader, with the flavor: does the tech work, and how do I know? In lots of settings, that's a deceptively hard question. Basic metrics like conversion can mislead you. Sometimes, as Eoin highlights, a clean-looking A/B test can be worse than useless.
It can actually send you in the wrong direction. And getting this stuff right gives you wings. Missing the mark can be a [00:03:00] real anchor. Let's get into it.

hugo: Fantastic. Hey there Eoin, and welcome to the show.

eoin: Great to be here, Hugo. Thank you so much for the invite. It's a real pleasure.

hugo: You've worked across so many different parts of data and ML and AI that I'm excited to get into all the moving parts of what you've done. You've moved from optimizing Citi Bike operations in NYC, through marketplace challenges at Uber, to your current role at Lightspeed, and I'd like to go through this journey with you. So maybe we could just start with how you got to Citi Bike and what you worked on there. It seems like such a wonderful microcosm of a lot of the work you've continued to do.

eoin: Yeah, sure. I guess to go back a little bit further, I grew up in Ireland. I did my undergrad there, a CS degree, did a lot of work in combinatorial optimization and parts of AI, and moved to the US to start my PhD at Cornell, where I was lucky enough to work with my PhD advisor, David Shmoys, on a lot of work around algorithm design. [00:04:00] And in 2013, with the rise of Cornell's tech campus in New York, he had gotten an intro to some of the folks working at Citi Bike in New York, which was at the time just about to launch. And shortly after they launched, we started working with them, really trying to understand how algorithms, machine learning, and analytics could help them optimize the system. At the time, it was wildly successful, more successful than anyone had ever anticipated, and the system was undergoing a huge host of challenges. Things like massive amounts of usage causing parts of the city to completely empty out of bikes and others to completely fill up with bikes. So something very frustrating if you're a Citi Bike user is not being able to get a bike, or running around trying to find a spot to park it before your meeting.

hugo: I must say, I lived in the city at that time, and I do remember. I used Citi Bike extensively and still do. You even offered incentives to go the opposite way: you'd offer me extra credits or a bit off my subscription to cycle a bike essentially upstream.

eoin: Yes. David and I, [00:05:00] along with a friend and colleague, Daniel Freund, who's now faculty at MIT, put together this incentive scheme and worked with the Citi Bike team to get it implemented. And I have to say it's been a little shocking to see how much it has been adopted. There are some really interesting New York Times articles and a short documentary about some people that spend six to eight hours a day just gaming this incentive system, trying to move as many bikes as they can around the city. But on the whole, I think it has been tremendously impactful. It's a good case where offering an incentive to create a small shift in behavior can really generate meaningful change for the system and alleviate some of this issue of system imbalance.

hugo: Without a doubt. So I'm interested in what your overall mandate at Citi Bike was, what the team was like when you came in, and how you got stuff done. A lot of our listeners really want to know the best ways to achieve business impact with data, machine learning, and AI, and this is such a wonderful example of being able to do that.

eoin: Yeah.
So I think when we started working with them, it was a little bit [00:06:00] chaotic. I don't think anyone was prepared for the massive amounts of demand they were seeing. So really it was about coming in, getting plugged into the data, and helping however we could. One of the first things we did was help them understand how the city should look before the morning rush hour. The idea being that overnight traffic is light, so you can actually do things in the city: you can truck bikes around. How should we best prepare the city for the morning? Really it's about breaking it into stations you fill up, stations you empty out, and stations you want about half full, to have some stock in there. I think a couple of things we really learned there. There's a temptation, as someone who really thinks about algorithms and math, to have a very fine-grained algorithm which is going to say: okay, go here, pick up three bikes, drop four off there, pick up seven next. And really we simplified it, because instructions needed to be really easy to follow, along the lines of: drive here, fill the truck with as many bikes as you can find; drive here and dump them all out. So we began to think about modeling the problem in a way that reflected [00:07:00] the difficulty of actually getting these things implemented in the real world. That was one of the first things we did: one, helping them understand how the system should look for the morning rush hour, and two, building some very simple tools and dashboards to help them understand how the system currently looked and what the gap was to the ideal solution, so the operations team could understand where they should be sending the trucks to go pick up and drop off bikes.

hugo: I love it, and there are several things I'm hearing in there that I just want to spin out a bit. The first is, you went in there speaking their language. It wasn't "data this, algorithm that." It was: hey, what problems do you have? And what leverage do you have? You can do things at nighttime, before rush hour. And I love the idea of not coming in with some sophisticated algorithm that you and I would probably love. Something I'm hearing in there that I'd be interested in your thoughts on is almost some sort of, what I'll call an MVA, a minimum viable algorithm, in order to get your system and flywheel going, essentially.

eoin: Yeah, I think that's it. And one of the things I've learned many times [00:08:00] over the years is that your algorithm can be beautiful and complex and elegant, but in reality it has to work. The second you put something into the world, there's going to be a whole bunch of things you didn't model, complexities you didn't factor into your design. We used to have a saying in marketplace at Uber that things should be as simple as possible and no simpler. So really it was about understanding where things were today, asking what is something quick and dirty we could do that would use the information available to make the system better, doing that, and then just keep iterating.
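To make the "quick and dirty beats fine-grained" point concrete, here is a minimal sketch of the overnight-rebalancing idea Eoin describes: classify each station by how it should look before the morning rush, then emit coarse, truck-friendly targets. The station names, capacities, forecasts, and thresholds below are all made up for illustration.

```python
# Coarse rebalancing targets: stations that lose bikes in the morning rush
# should start near full, stations that gain bikes should start near empty,
# and everything else should sit around half full.

def target_fill(morning_net_outflow: int, capacity: int) -> float:
    if morning_net_outflow > 0.2 * capacity:   # commuters ride away from here
        return 0.9
    if morning_net_outflow < -0.2 * capacity:  # commuters ride into here
        return 0.1
    return 0.5

# (station, capacity, forecast net outflow during rush hour) -- made-up numbers
stations = [("W 41 St", 39, 20), ("E 7 St", 31, -15), ("Carmine St", 27, 2)]

for name, cap, outflow in stations:
    target = int(target_fill(outflow, cap) * cap)
    print(f"{name}: aim for ~{target}/{cap} bikes before 7am")
```

The point is the coarseness: "drive here, fill the truck, drive there, dump it out" is an instruction an operations team can actually execute, unlike a per-station plan of threes and sevens.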
hugo: Mm. Are there any particular products you can speak to? I'm actually just interested in what the lifecycle of a data or ML product at somewhere like Citi Bike would look like, and how the people working on it interact with it. We talked about spinning up something quickly and seeing how it works. How do you then think about evaluation, what type of dashboards you build as you go further into it, and how people can iterate on it?

eoin: Yeah, that's a great question. The first thing is [00:09:00] getting visibility into what is going on and what good looks like, getting everyone to agree on that, and then just tracking how you do versus good. And for these very basic systems, where we were scrambling to get the system into shape, scrambling to get the system as balanced as possible before the rush hours, really, we didn't think too carefully about measurement and impact measurement. And again, I think it's a case where, in some sense, there's a crisis: you need to do things, you need to move quickly, and you should be doing things that meaningfully move the needle. I used to have teams at Uber that would work on newer products, and one of the common failure cases there was that for a very new product or a very new market, people would really overthink how much they were going to measure the impact of a decision, setting up A/B tests and holdouts and things. And I used to tell them: okay, we're swinging for the fences with this product. If we don't see a giant spike in the time series, we should drop the idea and move on. We're too early in the journey to begin micro-optimizing; we need to be taking really big, aggressive swings at things. And very much [00:10:00] at Citi Bike, it was just about going from zero or minimal visibility to strong visibility, and then trying to put some things in place that we had confidence were going to make it better for the users.

hugo: Makes perfect sense. And actually, we've had several past conversations on this podcast, which I'll link to, that speak to this. One with Ramesh Johari, who's at Stanford. I don't know if you ever worked with him?

eoin: I worked with Ramesh back at Uber. Huge fan. He's fantastic.

hugo: He's incredible. Across his history at Bumble, at Uber, at Dropbox, and a whole variety of places, he's helped people build out really large-scale experimentation platforms and think about what it means to be an experimental organization more generally. But we talked about how, even before data-driven experimentation, good business leaders were wonderful experimentalists. Henry Ford, I've got other problems with that guy, but if you're telling me he wasn't a great experimentalist, I'll call you a damn fool, in quotation marks, because clearly he knew how to do a lot of these things. And similarly, in what you're describing, what I love is that when doing data-driven, hypothesis-driven [00:11:00] experimentation, you don't necessarily need all the sharpest telemetry or the wildest experimentation techniques at the start. You're swinging a big axe, seeing what lands, and then moving quickly, particularly with a product like this.

eoin: Yeah, I think that's absolutely right. A trend I've seen many times over the years is a tendency across the industry to almost abdicate responsibility to data, in cases where you rely heavily on KPIs and experimentation and they drive all decision making. And I think it removes you from the decision.
When you have information, it's critical that you look at it and at the changes, and that you understand the decisions you're making, but you gel that with the sense of where you want to go, why you're doing it, and the underlying strategy of what you're trying to achieve. It's uncomfortable, it's much more difficult, and it's quite stressful to have this holistic view of decision making where you're factoring in all of these inputs, as opposed to totally abdicating your launch decision to whether some metric went up statistically significantly or not.

hugo: Without a doubt. And of course, [00:12:00] to that point, there's statistical significance and then there's real significance. Get enough data and do an experiment, and you'll find something statistically significant. I worked in biology for many years: you can get statistical significance in genomics, but whether it's meaningful is another thing. And to that point as well, I had a boss once who always said to me: hey Hugo, feel free to bring the three things that will help. I want hypotheses, I want evidence, and I want arguments. And he said: arguments, evidence, and hypotheses can include data, but I don't want data; if it supports one of those three things, bring those. And that's something I'm really hearing in there as well.

eoin: Yeah. To give you a concrete example, many of my teams at Uber used to be quite frustrated with me, but I would block a launch on something, even if the metrics looked positive, if we couldn't explain why. I needed to understand mechanically: why is this doing something that we believe is good?

hugo: Yeah.

eoin: And particularly at the moment, there's so much complexity in many of these algorithms. You can very easily have something that, for reasons of noise or bad [00:13:00] samples or polluted experimentation, looks positive. But if you don't understand mechanically why it's doing what it's doing, and why that gels with where you want to steer the ship, you're only signing up for problems in the long term.
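A quick worked example of Hugo's "statistical versus real significance" point, with made-up numbers and only the standard library: at large enough scale, a practically negligible lift clears any significance threshold.

```python
# A 0.03 percentage-point lift in conversion, measured on 20M users per arm:
# statistically very significant, practically close to meaningless.
import math

n = 20_000_000                        # users per arm (hypothetical)
p_control, p_treat = 0.1000, 0.1003
x_c, x_t = p_control * n, p_treat * n

p_pool = (x_c + x_t) / (2 * n)
se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))   # two-proportion z-test
z = (p_treat - p_control) / se
p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided

print(f"z = {z:.2f}, p = {p_value:.4f}")          # z ~ 3.16, p ~ 0.002
# "Significant" by any conventional threshold -- and on its own, almost
# certainly not a reason to ship anything.
```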
hugo: Totally. And I'm really excited to now move to what you did at Uber and how your trajectory worked there. I do think, to your point, we are operating in complex systems, and so understanding mechanisms is incredibly important, with a lot of the stuff happening in the world and a lot of the AI tools we're using now. I can't remember quite the mathematical differentiation between a complex system and a chaotic system, but part of me feels like we're operating in a set of interleaved chaotic systems now as well.

eoin: Yeah, absolutely. The world is not a simple place. To your question about Uber: I worked with Citi Bike through the end of my PhD and was lucky enough to get to write my PhD dissertation on the work there, which was really rewarding. One of the things I found incredibly exciting was being able to take all these tools and techniques and approaches I learned in academia [00:14:00] and apply them to something in the real world, and since I was spending a lot of time in New York, it was cool to see the impact of our work in the city. Then post-PhD, in 2015, I made the jump from two wheels to four and joined Uber, working on the marketplace team. I started out working on the systems that paired riders and drivers, trying to understand how we could do that more effectively, and extended that to the shared rides teams. In, I believe, 2017, we relaunched UberPOOL as Express Pool, where we had folks wait and walk to enable a more liquid marketplace. I spent some time running the surge pricing team (you're welcome, everybody, for that one), and then a lot of the consumer-facing stuff on the mobility side of the business, which really took me through Covid. Uber was a unique place to be in Covid. It is a little bit terrifying to see that much of your business drop off a cliff over the course of a couple of weeks in early 2020.

hugo: I can't imagine. We talked about crises before, and I do want to get into that. The other thing: I moved back to Sydney during Covid, and I was living in New York before that, where [00:15:00] Seamless is what you use for food delivery, whereas in Sydney, Uber Eats actually has significant market share. So I suppose Uber has several different types of marketplaces, right?

eoin: Yeah, it does. Uber Eats really exploded over that time period, and in, I believe, 2021, I moved over to work on the Uber Eats marketplace. For my last chunk of time at Uber, I ran the science org for the delivery business: the Eats marketplace, search, feed, and ranking, as well as some of the growth verticals like convenience, grocery, and white-label delivery.

hugo: So I'm interested in what you did at Uber. How did you go about making sure that your function effected change? I ask a lot of people that question, in general life as well, to be honest, and for people on the data side of things, the first thing they say is you need executive buy-in, which probably holds for data functions more generally. So I think we can probably skip that, given that Uber is so data native, but perhaps there's something in there as well. [00:16:00]

eoin: Yeah. I think one of the things that was really exciting is that when I joined in 2015, Uber was going through this period of hypergrowth where it had figured out the engineering systems; the lights were kept on on a Friday night when things got busy. And the system was at a scale where everyone realized that the more we could improve the efficiency of the system, the more it would pay massive dividends for us in terms of growth, quality of service, and just a better experience for all participants in the marketplace. There was a big understanding that we needed to get really smart and disciplined about how we run the marketplace and how we set up the systems to make this thing as efficient and effective as possible.

hugo: Awesome. So what were some of the big things you worked on at Uber?

eoin: It's a long list. I spent eight years at Uber, so I've touched many parts of the business. But in the early days, some of the big changes we made were on the matching side. We shifted from real-time or on-demand matching, where a request would come in and we would immediately make a decision and execute a match in the system, to a batch matching [00:17:00] system where we actually waited. We let requests gather for a small time window, maybe 10 or 20 seconds, something on that order, and then executed an optimization across the network as a whole, as opposed to optimizing greedily as requests came in. If you were to start a rideshare business tomorrow that's small, greedy matching will do pretty well for you. But at scale, it just begins to leave a lot of optimality on the table. Having that extra bit of time to gather and make decisions to optimize across the network meaningfully brought down ETAs in the system, which means drivers can spend more time on trip, which means more time earning, and in aggregate riders spend less time waiting, so everyone is happier. You're shifting the balance of the marketplace toward folks in cars more, which is what everyone wants.
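A toy contrast of the two dispatch policies Eoin describes, greedy matching as requests arrive versus waiting a short window and solving one assignment over the whole batch. This is a sketch only, assuming numpy and scipy are available; positions and distances are random stand-ins for pickup ETAs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
riders = rng.uniform(0, 10, size=(5, 2))    # (x, y) positions, arbitrary units
drivers = rng.uniform(0, 10, size=(5, 2))
cost = np.linalg.norm(riders[:, None] - drivers[None, :], axis=2)  # pickup dist

# Greedy: each rider, in arrival order, grabs the nearest still-free driver.
free, greedy_total = set(range(5)), 0.0
for r in range(5):
    d = min(free, key=lambda j: cost[r, j])
    free.remove(d)
    greedy_total += cost[r, d]

# Batched: wait for all five requests, then minimize total pickup distance
# with one global assignment.
rows, cols = linear_sum_assignment(cost)
batch_total = cost[rows, cols].sum()

print(f"greedy total pickup distance:  {greedy_total:.2f}")
print(f"batched total pickup distance: {batch_total:.2f}")  # never worse
```

The batched solution can never be worse than the greedy one on the same requests; at Uber's scale, that gap is the "optimality left on the table."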
hugo: Super cool. I'm also interested in this: a lot of people in data science have strong analytical and scientific skills and techniques, but they may not have all the skills and ways of thinking that people who excel in marketplaces have. I'm thinking of how [00:18:00] Amazon, for example, and Uber have hired a lot of economists. There are certain things, like thinking about a system at equilibrium, where when you throw a neural network or something at it, you are perturbing the system when you do experiments. So I'm wondering, for people in this line of work, are there these types of deeper skills and ideas that you think people need to have?

eoin: Yeah, that's a great question. I think on the surface, yes, and I'll try to pin down a couple of examples. All of my degrees are in computer science. I was an algorithm designer by trade, but I feel I've almost got a second education in economics after working with some phenomenal economists at Uber. As you said: thinking about the system in equilibrium, thinking carefully about measurement. That's one of the things that I really learned to appreciate in my time at Uber, the difficulty and nuance in measurement. Not so much that you can get your measurement wrong, but that if you're not careful, you can get the sign of your measurement wrong, so that if you're not very careful about how you set things up to get a measure on something, you can end up doing the [00:19:00] opposite of what you should be doing. I encountered that many times, and it can manifest itself through network effects in experimentation. Uber's a great example of that. If you just split the population in half and do an A/B test, one part of it can impact the experience of the other, and you can end up literally flipping the sign of the true effect of your change. And also, just over time: both riders and drivers engage with Uber on different time horizons, and you can see things in, say, rider behavior in the short term that maybe don't quite manifest in the long term. So understanding that you need to be thoughtful about the long-term impact of your decisions as well is something we spent a lot of cycles on, and something I really took away from my time there.
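A deliberately extreme toy simulation of the interference Eoin warns about: in a supply-constrained marketplace, a 50/50 test of a "grab drivers faster" feature looks hugely positive even when the global effect is nil, because treated riders win drivers at control riders' expense. (In less extreme versions the same mechanism can flip the measured sign outright.) Everything here is hypothetical.

```python
import random

random.seed(1)
RIDERS, DRIVERS = 1000, 500   # demand exceeds supply

def run_market(treated: set) -> dict:
    # The "feature": treated riders get to request first; drivers are finite.
    order = sorted(range(RIDERS), key=lambda r: r not in treated)
    supply, matched = DRIVERS, {}
    for r in order:
        matched[r] = supply > 0
        if matched[r]:
            supply -= 1
    return matched

# 50/50 A/B test: treatment converts ~100%, control ~0% -> enormous "lift".
treated = set(random.sample(range(RIDERS), RIDERS // 2))
m = run_market(treated)
conv = lambda group: sum(m[r] for r in group) / len(group)
print("measured lift:", conv(treated) - conv(set(range(RIDERS)) - treated))

# Global rollout vs global holdback: identical totals -> true effect is zero.
all_on, all_off = run_market(set(range(RIDERS))), run_market(set())
print("true lift:", sum(all_on.values()) / RIDERS
                    - sum(all_off.values()) / RIDERS)
```

The experiment isn't measuring the feature; it's measuring cannibalization between the two arms.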
hugo: Awesome, wonderful examples. You did mention, of course, that you were at Uber, though not yet in Uber Eats, when the Covid pandemic hit. So tell us anything you can about what that experience was like, and the existential shift you underwent as an individual, but also as an organization.

eoin: Yeah, I distinctly remember sitting at my work-from-home setup at the [00:20:00] time, looking through the marketplace org's roadmap for the next six months, because of course we thought it was going to be over in six months at that time, and then realizing that we couldn't really do any of this, because we had no idea if it would work. The market was in an entirely different state to what we anticipated. And we really thought carefully about: should we reprioritize? Should we pay down tech debt? Should we be building capabilities? What should we be doing in this time? The other thing was this: if you think about managing a marketplace, really it's about small course corrections all of the time. You're trying to keep it around some equilibrium that you're comfortable with, that matches the organization's business goals. This was an event that pushed us into completely unknown territory. Many of the decisions we had made to increase market efficiency no longer made sense in this new regime. So we actually looked through all of the configurations we had in the system and started making some very large changes to try to make the market work in this much sparser, much less reliable world, where far fewer drivers were driving and far fewer [00:21:00] riders were riding. Things like increasing the max dispatch distance and making sure that was sensible were all things we did to try to make sure the service continued operating, and operating as well as it could in this new regime. We were also cognizant that if we put new technologies into the marketplace, it was going to be pretty hard for us to know if they were good or not.

hugo: Yeah, I appreciate that, and once again, it sounds like getting back to basics. You're still using data and algorithms and machine learning and all of these things, but getting back to fundamentals and fundamental questions about the business.

eoin: Look, at the core, you're running these marketplaces, and you have a responsibility to participants to do it as well as you can. Often there's some beautiful math you can do, there's amazing machine learning you can deploy, and the excitement there can sometimes cloud the view that, at the core, you need to make this marketplace as effective and as efficient as possible, and that needs to be at the core of all decision making.

hugo: Totally. So a lot of our listeners are data and machine learning and AI leaders, whether that's [00:22:00] chief AI officers or team leads or CTOs, working across a lot of different industries. The reason I'm framing it like this is that a lot of people want practical advice on what they can do to effect change in their jobs today. And I'm wondering, from your time at Uber, what advice you'd give to people in industries that maybe aren't as tech and data native. There are probably some lessons you learned at Uber which are really only applicable to pretty serious tech companies, so what are some more general lessons that you could share?

eoin: I think at the core, to be maybe overly simplistic: solve the problem, don't sell the solution.

hugo: Mm.

eoin: There's a real tendency among many technical folks to blend those together and think about the problem they're solving through the lens of how they want to solve it. And really, over my time at Uber, the stuff where we had the easiest buy-in and the most clarity was where we had a crystal clear sense of what problem we were solving, why, and what solving it would look like, and then good buy-in on all of those things. I used to spend a lot of time with my leadership team at Uber obsessing over having everyone, recursively down through the organization, answer: [00:23:00] what problem are you solving? Why are you solving it? Why are you convicted it's the right problem to solve? What does solving it look like? If you solve it, how will things be different?
And bifurcating that from the question of how you're going to go about solving it. Really getting clarity and buy-in that this is a thing we want to achieve, and that we're all on the same page that if we achieve it, good things will happen. Then you can go about the second question: okay, and here's how we're going to go do it.

hugo: I love it. And would, for example, UberPOOL be an example of something that was solving a preexisting problem?

eoin: So, you know, UberPOOL was launched in 2014, before my time at Uber.

hugo: Mm-hmm.

eoin: One of the things we realized is that it had undergone massive growth but was not as efficient as it could be. So in 2017, we actually took a step back and really thought about what the core of the product was. And it was about having a product that is sustainably low priced for our riders and that offers a great experience for riders and drivers. We actually redesigned it from the ground up to say: okay, with efficiency and sustainably low [00:24:00] prices as the core pillars here, what does it look like? And that's where we added waiting and walking. Specifically, we had riders wait for up to a couple of minutes to allow us to have more liquidity in the market so we could execute better matches. Better matches mean more time with two riders in the car, which means fewer detours and fewer stops for the driver. And we also had the riders walk maybe a block or two to avoid one-ways and difficult turns for drivers, and things like that. It's one where we articulated this vision, that we needed to get this product to be sustainably low cost and a really good experience, got buy-in around those pillars, and then figured out how to do it.

hugo: I love it, and so many interesting things emerge once you consider this. I was actually living in the Northeast, in New York and a bit in Connecticut, at the time, and walking a couple of blocks in New York in December is very different to doing the same in Los Angeles, right?

eoin: Yeah, absolutely. We had a lot of pushback from some of the local teams on that, and we tried to do some city-specific walking distances. One of my favorite memories is from when we had the walking distance a little bit too high: [00:25:00] we had a case where some guy, I don't know, at two o'clock in the morning in San Francisco, walked about three or four blocks from a bar, got in the Uber, and was promptly told that he needed to get out and walk the rest of the way to his home.

hugo: Wow. Amazing. And in fact, I'll link to this, but I did an episode of High Signal with Chiara Farronato at Harvard Business School, who actually did a case study of UberPOOL. It's a Harvard Business School case study, in which Duncan Gilchrist at Delphina was involved, and in it she talked about the different geographical aspects of what you were working on.

eoin: Yeah, absolutely. I am actually a big fan of the case, and Duncan and I flew to Harvard to be there when they taught it the first time. And I think it's a great example of an interesting situation: you're launching a new product, and it's very difficult to experiment. You can't launch it to half a population, because you won't have the liquidity; you need to launch it in a couple of cities. And it was a very stressful time: we're rolling this out. Are we confident it works?
The last thing you want is a false negative because you built it the wrong [00:26:00] way or had some errors in your system. And I think this was a great example of a case where we had a lot of information flowing in, the signs were pointing where we wanted them to point, but we also had a strong amount of conviction that this was the right thing to do and that we needed to make these changes.

hugo: Super cool. And I love that we've framed a lot of this conversation around marketplaces. But something else you've spoken to is the need to see what the market desires and demands, to solve actual problems in the market, and to be problem focused, not solution focused. Which I think dovetails very nicely into what you're up to now at Lightspeed, particularly with the venture capital work you're doing. Correct me if I'm wrong, but one way I think about venture capital is that you want to find problems in the market and match them with founders who have potential solutions, right?

eoin: Yeah. So, you know, I joined Lightspeed after my time at Uber, and I've been there a little over a year now. I joined Lightspeed with the goal of helping make the firm as quantitative [00:27:00] as possible, and also helping the firm scale. Lightspeed has been growing for the past couple of years, from early stage right across to late stage, with teams all across the world, and we wanted to build a system inside the firm so that for every investor we add, for every portfolio company we work with, and for every dollar we manage, on the whole, we all get more effective. And I think this has been a really interesting time to do this, with the rise of gen AI technology. Venture operates on vast amounts of unstructured data, and we have this new toolkit that allows us to structure it and begin to reason with it. My team and I are really focused on how we can build the systems that leverage our scale and history as a firm, to show up knowledgeable and prepared, and to have the right information in front of folks at the right time, so we make better decisions, are better partners to our companies, and on the whole are more effective.

hugo: So what motivated you personally? Because I'm interested in people's career journeys more generally. You've been working in serious data science and marketplace [00:28:00] dynamics, from two wheels to four wheels, I love how you put it. So what motivated the move to venture capital?

eoin: I think one of the big things was that I wasn't entirely sure what the job would look like. I had been running a large org at Uber for a while, there were many opportunities, and I was talking to companies about going to do that somewhere else. What struck me about joining Lightspeed was, one, the people there are amazing. The folks I work with every day are incredibly high caliber. I spent a lot of time with the leadership team and was really excited about their hunger to be more effective and lean into being data driven, with a caveat they were open about: we'd have to see where that goes and what we can do. And I think what was really appealing was that I would go there to maximize learnings. I thought I would learn the most possible about the venture capital industry and get exposure to a whole host of different types of companies, very different from the consumer marketplace stuff I'd worked on previously.
And it's been fascinating to get exposure to all these different business models and all these different technologies, and to develop a sense for how these companies run, what's [00:29:00] important, and where the broader tech industry is headed.

hugo: Super cool. And I love that you framed it in terms of learnings and what you're able to learn, because I think that's so important for a lot of us in the space, particularly with everything happening now. Of course, Lightspeed before you joined wasn't data unsavvy, but you are taking it to the next level. So I'm wondering, without giving away any secret sauce, what are some of the things you're excited about achieving at Lightspeed, Eoin?

eoin: Yeah, so one of the things I'm really excited about is that Lightspeed has a storied history in the Valley and has seen some incredible companies go from early stage right through to IPOs and acquisitions. One of the big things we've done is really try to understand how we can scrape and structure all of the data we have about these companies' growth paths, to have this amazing data set that allows us to quantitatively say what traction looks like, helps us understand how our companies are operating and where the best companies operate, and then allows us to be very disciplined about understanding what best in class looks like, but also to be excellent partners. We're showing up [00:30:00] to chat with the companies we're working with, armed with data about how others have done and what great looks like, as opposed to a vague sentiment that these numbers should be higher.

hugo: Awesome. And one thing I'm really excited about: I know that you invested in Anthropic, and Claude Code is super fun. I've actually just been having a lot of fun with Claude Code, and also with Cursor, using agent mode with Claude 3.7 in max mode as well, which supercharges things. And I know you're a technical guy, so I'm just interested: I presume you've experimented with a lot of these things?

eoin: All of these, right. One of the fun things about being at Lightspeed is that it's a much smaller team, so I get to be much more hands-on than in my time at Uber, and it's a really fun time to do that. These models have taken away so much of the drudgery of modern-day engineering, so it's a ton of fun. My current stack is a combination of Cursor backed by Claude 3.7, and also Claude Code; I pop over to the desktop and both of them are open. I would say both are just astounding products. With Claude Code, I feel like I'm barely scratching the surface of what it's able to do, but just the ease with which it handles things [00:31:00] like different branches, deployments, commits, error finding, linting: it really just takes away so much of the overhead and allows me to focus on what I want to be caring about, which is what we're building, how we're thinking about things, and looking at the outputs. So I think it's an incredible piece of technology, and I'm really excited to see where Anthropic is going to take it. But on the whole, in this space, there's so much cool stuff popping up, it feels like almost every week. It is exhausting keeping track of it. But I was talking to someone recently who was complaining that some model didn't quite get something right, and I took a step back and said: hey buddy, these things are ludicrously good. Call yourself three years ago and tell him what you're capable of. You won't believe yourself.

hugo: Tell me three months ago!
And to his point, sure, there's lots of things I can criticize. They will err on the side of generating: when I ask for an MVP, they'll generate a thousand lines of code in subdirectories, they'll do sprawling code bases. But we've just unleashed, as Peter Wang would put it, the turning of silicon and melted sand into [00:32:00] something that mimics the history of cognition. So if we're complaining about that type of thing, imagine someone in the seventies complaining about the computers that were being made then, and not actually thinking: wow, how can we leverage this to do all of this fun stuff? You'd be kicking yourself afterwards, right?

eoin: Absolutely. And I think across the industry we're just scratching the surface of all we can do with these things. I think it's going to be a really exciting couple of years: the pace at which they're getting better, the amazing use cases that are coming up. I find myself using them in daily life tens of times a day.

hugo: Without a doubt, if not hundreds. And to your point, things happen every week; multiple things happen every week. Also, I don't even know what these things are capable of. It was last year that somebody told me about Claude artifacts. So I went into Claude and was like: hey, can you make me an artifact? And it was like: oh, sure. And I was like: oh, what can you do? It was like: I can do this, I can do that. And I said: list me 10 things. It literally built me a React front end. I'd never programmed React before; now I know a bunch of React. I'm proficient in Python and a couple of other [00:33:00] things, but knowing a bit of Python supercharged me to build stuff with Claude in a way where I felt far more expressive in what I could do and learn in real time. And Claude, when you use it in the browser, as I'm sure you know, renders artifacts in the browser. If you're watching or listening and don't know this, go and play around with it now. And that's old news, right?

eoin: Yeah, and you have Claude MCP, which feels like it's going viral on X every week, with the cool ways people are using it and tool use. I think, again, we're just getting started with what these systems can do.

hugo: I couldn't agree more. And MCP, for those who don't know, that's Anthropic's Model Context Protocol, is popping off on LinkedIn, on X, on Bluesky, all of these things. It does provide at least a very strong first approximation to APIs for LLM-powered stuff. It's not quite APIs; it's a protocol which allows, for example... I'm building an information-retrieval agent, a Discord bot, at the moment, and it makes it super easy to hook my LLM into Discord. And actually the whole point is that you [00:34:00] can then provide feedback and develop an evaluation framework in Discord: give answers emojis, and it logs them to a SQLite database. That type of stuff is incredible. And of course, OpenAI has jumped on top of MCP, and Sundar Pichai last week tweeted: hey, should we adopt MCP? So this is pretty serious business, right?

eoin: It's an incredibly exciting time. There's just such cool stuff being built, and the only, almost intimidating, thing is that you're limited by your own imagination.
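A minimal sketch of the feedback loop Hugo describes: logging emoji reactions to bot answers into SQLite so they can later be queried as eval data. The table, fields, and IDs are made up; in a real bot, log_feedback would be called from your reaction handler.

```python
import sqlite3

conn = sqlite3.connect("feedback.db")
conn.execute("""CREATE TABLE IF NOT EXISTS feedback (
    message_id TEXT, emoji TEXT, user_id TEXT,
    ts DATETIME DEFAULT CURRENT_TIMESTAMP)""")

def log_feedback(message_id: str, emoji: str, user_id: str) -> None:
    """Record one reaction; thumbs-up/down counts become labels for evals."""
    with conn:  # commits on success
        conn.execute(
            "INSERT INTO feedback (message_id, emoji, user_id) VALUES (?, ?, ?)",
            (message_id, emoji, user_id))

log_feedback("msg-123", "👍", "user-42")
print(conn.execute(
    "SELECT emoji, COUNT(*) FROM feedback GROUP BY emoji").fetchall())
```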
hugo: And time, right? And time. And noise, man. Once again, if you go to LinkedIn or wherever it is, there's so much noise coming in, and it seems like the world is model focused again. And of course, the way you get a defensible moat is through your own data. So it's about figuring out how to leverage all the cool things but not get distracted by the new shiny objects. It's very like there are forces at play that show us diamonds in the sky. So I am interested, particularly in your line of work, and this is a question I get all the time. The way I'll frame it is: I kind of said earlier that in our tooling space, we've moved from a complex environment to a chaotic one. You'll recall, [00:35:00] back when you were at Citi Bike and Uber, I was confused. I was like: oh, the PyData stack's too much. Which visualization tool do I choose? So much so that Jake VanderPlas, at PyCon in 2019 or 2018 in Portland, Oregon, gave a talk called something like "The Python Data Science Visualization Landscape", and he presented a chart and a talk around all the tools. So even at that point there was a plethora of tools one could use, but it was more obvious what the stack was: a complex environment, but manageable. Now, with all the updates and everything happening every week, I don't even know what I should be looking at all the time. Advice that I give everyone, and myself, is: whatever you're looking at, try to be solving your own problems with it, as opposed to doing stuff in the abstract. But I do wonder, for you, how you think about figuring out what's useful to spend half a day playing with, and what isn't, for your work.

eoin: It's a great question, and honestly something I struggle with. One could spend infinite time at the moment learning and trying out [00:36:00] the new tools. At the core, something I've experienced any number of times over the years is that if I want to go learn something, I have to learn it by doing, by using it to do something. So for me, I can test one of these tools and throw some toy problems at it, but at the core, to really understand if I like it and how I think about it, I need to go do something practical with it. For the models, I'll try things as they come out, but it's always a balance between explore versus exploit: do I sit on top of the models and tools and stacks that I'm quick and efficient with, or do I try to branch out? I've been trying to use more of the agentic mode on things to see how that pans out. And as you said, it's interesting: it's very powerful and eager to please, but left unchecked, you can end up with a very large, messy code base. So it's about threading the needle between those. Also, in day-to-day life, I'm trying to use it more and more. I'm a big fan of Deep Research; it's an amazing product, and I try to use it all the time to offload mental bandwidth when researching something or thinking about something, and I've found it phenomenally useful.

hugo: I [00:37:00] couldn't agree more on all of those points. You mentioned that in the world of venture capital you have masses of unstructured data that you can, hopefully, extract signal from. Now, that process of going from unstructured data to signal, particularly with the amount you have in your line of work, is highly non-obvious. But I've found, and I think a lot of us have, that generative AI, when used mindfully, can be leveraged to extract a lot of high-signal data. So I'm wondering whether generative AI is something you're leveraging in this respect at work.

eoin: Yeah, absolutely.
One of the things we think about with how we use it is that for many of these use cases, applying gen AI to unstructured data sets, it's something we could have done with humans in the past, but it would have been too resource intensive: it would have taken too long, needed too many people, and been too difficult. These tools give us the ability to do months of human work in a matter of minutes. That can be trawling through vast amounts of information from startups, market research, [00:38:00] trying to understand trends, commonalities, patterns, and things like that. And we really believe that if we can do this well, we can have some really interesting perspectives on where the broader markets are headed and what emergent patterns we're seeing, and on the whole be more knowledgeable and better informed.

hugo: Super cool. I'm wondering how you think about making this type of stuff reproducible. I'll give you an anecdote, which is actually a collective anecdote, in that I've seen it happen so many times over the past couple of years. If I'm working with a client and they have some customer data and we want to get out summary statistics and some visualizations, we'll ask an LLM to do that, and it'll do an okay job. Now, if instead we say to the LLM: here's the data, tell me some interesting things that are in it, it will legitimately surprise me, on average, and extract things. But the way that works isn't so reproducible, as opposed to the former, templated example. I'm wondering if this is something you've come across, [00:39:00] or how you think about these types of things.

eoin: Again, it's a good question, and I think at the core it's about going back to this pattern: what problem are you trying to solve? Are you trying to solve the problem of going through all of this information and extracting insight from it? And I guess the question is: if you gave it to 10 analysts and had them each spend two weeks on it, would you expect them all to come up with the same answer, or would you expect 10 different answers? I think if it's the latter, then when you run this 10 times and get 10 different answers, you'd probably be comfortable with that; there's probably value in all of those. That lack of determinism is something that we'll have to get a little bit more comfortable with. As for the potential for hallucination: trying to get the models to ground things in examples, to point to where they made certain inferences, to explain their reasoning more, are all tools one can employ. But I feel, again, we're in the very early days of figuring out how we can use these, and there is a lot of expertise and experience to be built in making them effective. I think some of the work on multi-shot prompting is super interesting, using [00:40:00] that as a way to get performance out of these models while avoiding fine-tuning and things like that. Maybe for some use cases, you need to go down the path of fine-tuning or post-processing or something like that. But at the core, I think, one, we have to be thoughtful that this is at its core a stochastic process, and we should be thoughtful about whether that's acceptable to us, and what railings and structures we can put in place when it may not be.
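A minimal sketch of the multi-shot (few-shot) prompting Eoin mentions: instead of fine-tuning, prepend a handful of worked examples so the model infers the format and criteria. The task, examples, and labels here are made up for illustration.

```python
# Few-shot prompt construction: the example block is what nudges the model
# toward a consistent, parseable output instead of free-form prose.
EXAMPLES = [
    ("We grew ARR from $1M to $4M in 12 months.", "growth"),
    ("Our churn doubled after the pricing change.", "risk"),
    ("We hired two ML engineers from DeepMind.",   "team"),
]

def build_prompt(snippet: str) -> str:
    shots = "\n".join(f"Text: {t}\nLabel: {label}" for t, label in EXAMPLES)
    return (
        "Label each startup-update snippet as growth, risk, or team.\n\n"
        f"{shots}\n\nText: {snippet}\nLabel:"
    )

print(build_prompt("Three of our five founding engineers left this quarter."))
# Send the resulting prompt to whichever model/client you use; asking the
# model to also cite the phrase it based its label on is one of the
# grounding "railings" Eoin describes.
```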
hugo: I love it, and I love that you mention multi-shot prompting. The way I think about it: people are like, oh, I can't tell it how to do something once and have it do it immediately. And I'm like, what type of human does that? When you get to know someone, most of the time you have a conversation back and forth to figure out what each of you actually means. So I think that's very important to recognize. The other thing that I think is super important, and that I've found very useful, is getting LLMs to engage in self-criticism; self-adversarial critiques can be super powerful.

eoin: Yeah, absolutely.

hugo: I also love that you keep coming back to the fact that we're so early. This is an analogy other people have used, but one that I quite like: think about when we [00:41:00] discovered as a civilization how to generate electricity. We didn't have a light bulb then, and we sure as hell didn't have an electric grid. I think Edison formed the innovation lab, or whatever became General Electric, to figure out how to harness this stuff. And then they came up with the light bulb and realized maybe they wanted networks of such things. So thinking about this moment as electricity, before figuring out all the ways to harness it, is a helpful mental model in some ways.

eoin: Yeah, I think that makes complete sense. And it also just hits on what an exciting time it is, because things are going to undergo so much change, and there are going to be so many cool new ways to use the technology showing up in all aspects of our lives.

hugo: Yeah, absolutely. I do wonder how you think about this: you are as native as one can be with these technologies. Actually, none of us are gen AI native, because it is such a strange thing to interact with, and it isn't software in the way you'd think of software. They're horrible calculators, for example. You wouldn't expect something stochastic and flip-floppy [00:42:00] to be a good calculator, but we have that expectation of software. Your less technical colleagues: how do you encourage them to leverage generative AI in their work and lives?

eoin: It's a good question. Really, it's about putting good tools in people's hands. Across the firm we use an enterprise version of Claude, which is phenomenally powerful, and on the whole we're huge fans of the product. You can create projects, share them, collaborate through them, have shared knowledge bases and projects and reviews. We've given this to the firm, encouraged people to use it, and really encouraged them to share where they found it useful. We've found that the most powerful driver of folks being excited about it is not me lecturing them on how some new technology works, but just seeing examples of colleagues and peers getting value out of it and realizing: oh, I should do that too.

hugo: Totally. And also starting to understand how they operate, because Simon Willison originally referred to them as a weird intern, in that they'll do all this stuff, and some of it may suck, and then you just say "do [00:43:00] better" and somehow it miraculously does better.

eoin: Yes. Or my current favorite: "think harder, please."

hugo: Exactly. But then you run into wackiness, because they are, as I think you used the term, people pleasers. They're wonderful people pleasers. They're also going to, in their own way, weirdly gaslight you.
So I was actually building out a slide deck and a Google doc earlier today. And working with chat GPT on it. And it was like, Hey, let me prepare the Google Doc for you. And I knew it couldn't, but I was like, okay, let's see. And it gave me a link to a Google Doc, which didn't exist, and I was like, Hey, what's up? And it was like, oh, I'm sorry, I just thought you wanted a helpful assistance. So I was mimicking what that journey would be like. eoin: Yeah. Yeah, I've definitely caught some of the models. If we can't get a database call to work, just dumping fake data in there so something comes back hugo: Totally. And so that's one. One advice. A piece of advice I usually give to people adopting these technologies are if you need something reproducible that adheres to business logic, I. Perhaps embedding your LLMs in your business logic is, and having tests and guardrails around it is useful. That isn't to say, don't try an MVP like [00:44:00] and proof of concept, which does a whole bunch of wacky stuff, but don't expect it to behave like enterprise software. eoin: Yep, absolutely. Again, I think we're very early on the journey. Totally. hugo: I'm wondering. Just given your experience across a variety of industries, how do you manage generally organizational expectations around adopting new technologies without falling victim to hype driven decisions? eoin: I think that's a good question, and I think my time at Uber, there were were many times where. There was some amazing leaps in progress being made in machine learning, and I think some folks fell into the trap of thinking, I should just, you should get a whole bunch of stuff for free in our system. And I think at the core, it was about having this crystal clear view of the problem you were trying to solve, all the questions around it, but then really understanding why those problems were difficult and what was important to be able to have something that you could put in market, not just once, but time and time again. To give you an example of something that, that we encountered a lot, [00:45:00] indu, is there would be a real temptation to have a very complicated machine learning system be in control of some there. And you could build it, you could implement it, you could test it. It could look quite promising. But one of the questions we would ask is, okay, cool. How will we know if it's making bad decisions? And that is a bit of a speed bump for many approaches because once you put in these entirely black box models, it can be difficult to understand if it's making bad decisions. If it's making bad calls. Again, if the ground is shifted underneath and it's no longer operating in a sort of subspace that it's experienced or it's trained on, or it's well calibrated on, and we would. Really try to have a clear sense of why problems were hard and what we would need to solve them, and then mash that against the technologies to just have transparency about what we could expect, how much headroom we believe there was, and what the attributes the solution would need to have for it to be effective as we just understood the problem.[00:46:00] hugo: I love it and this once again speaks to an example you gave earlier of, Hey, we can do certain things in batch. We don't need to be streaming stuff all the time. And doing real time inference if we can batch it. Batch isn't the sexiest thing, but it's so effective. 
hugo: I love it, and this once again speaks to an example you gave earlier: hey, we can do certain things in batch. We don't need to be streaming stuff all the time and doing real-time inference if we can batch it. Batch isn't the sexiest thing, but it's so effective. And in fact, when I worked on Metaflow, some of my colleagues who build the open source at Netflix told me that even the Netflix recommender system does a lot of batch stuff, because it ends up being more effective and more efficient.

eoin: Yeah, absolutely. Again, care about the problem, and index on the problem, not the solution. I think in the industry, folks get excited about technical things. They love technical ideas. I do too. But I think you've got to be ambivalent about the hammer you're using, and just be really thoughtful about whether the approach you're taking is going to get you the most effective solution.

hugo: Totally. But in your [00:47:00] previous work at Citi Bike and Uber, in terms of marketplace matching, and thinking of Netflix in terms of recommendation systems, for a lot of these problems it's obvious how to apply machine learning in order to scale businesses and impact, right? Venture capital seems more resistant to scaling and automation, and we've hinted at the role generative AI can play there. I'll paraphrase, but last time we spoke, you mentioned venture capital has historically resisted scaling due to its reliance on individual expertise. So I'm wondering how you see the potential of gen AI to change this dynamic, essentially.

eoin: Yeah. So the way I think of it is that gen AI alone is not going to be sufficient. I don't believe there's going to be a single agent that is meeting founders and funding companies and deploying capital. But the combination of bringing on incredibly strong people, having focus on working together as a team, and really strong process around how everyone operates, that part is almost orthogonal to gen AI, and I think these two things can have multiplicative [00:48:00] effects on how effective folks can be in venture capital. The idea is that you have amazing tools and systems looking across all the information flowing into your firm, surfacing it to the right people at the right time, to help them make better and more informed decisions and have better awareness of where the market is and what their colleagues are doing. That, combined with amazing investors and amazing staff inside the firm, really complement each other, and we're really hoping it can make us much more effective.

hugo: Totally. And I think this still speaks to the point that you require serious domain expertise in whatever you're using generative AI for. I mentioned that I learned some React using generative AI, but it then gave me a code base that I couldn't really introspect half the time. Whereas if I'm working in Python, building gen AI systems using API calls, or even using Ollama locally, and Claude Code tells me to do something, I know what's up there, essentially.

eoin: Yeah, I've definitely generated some React. Some of the engineers I work with, when I show up, they're like: Claude wrote this for you. [00:49:00] Again, to that point, I think we're early on the journey with these technologies. We're all trying to learn how to use them most effectively, and they're changing so quickly. It's really exciting to see where they're going to end up, and how people find ways to be as effective as possible with them.

hugo: For sure.
So I am interested — you've already given great advice and several examples of always being problem-focused. I'm wondering, for mid-career data practitioners and data leaders, if you have any other practical advice on where they should focus their learning or development efforts, just to remain knowledgeable and relevant in this rapidly evolving landscape. eoin: I think, you know, you want to learn and play with these gen AI tools and figure out how you can be more effective with them. They're an amazing partner day-to-day, be it writing docs, visualizing things, brainstorming ideas, writing code. So that's one; I think that's just key for everybody. Interviewing people at the moment, I'll ask them, how are you using these technologies to be more effective? And you get really interesting answers about [00:50:00] people using them as personal tutors, coding assistants, all the different ways people factor them into their workflows. So that's one piece of advice. The second very strong piece of advice is that this is a great time to go get really technical. Everyone has a world-class tutor in their pocket. So really go pick a topic and go deeper on it, be it a branch of machine learning, software engineering, system design, measurement, econometrics, you name it. Now is the time to go learn new stuff. It is, one, always a good idea to bolster your core technical skill sets, and two, I think deeper technical expertise, plus the ability to leverage that expertise to have impact at scale through the use of gen AI, is going to be where the next couple of years are. hugo: I couldn't agree more, and I love the idea of using it to dive deeper into something you may not be an expert in. Actually, something I've been playing around with recently is getting Claude 3.7 Sonnet in max mode in Cursor's agent to generate tutorials for me. I literally say, I want to learn this thing, [00:51:00] develop a plan, and get it to build out small MVP tutorials to help me learn things. And even if you don't know the command line, you can get Claude to generate a tutorial on the command line for you. Elastic education for everyone. eoin: Yeah. Or my favorite one is, I'll try to teach it the subject and it will ask me questions, and then tell me where I'm wrong. hugo: Amazing. I haven't done that; now I know what I'm doing later today. eoin: Yeah, I do recommend using one of the voice models with your AirPods in: go for a walk and have a conversation with it about a technical topic. hugo: Without a doubt. We're gonna have to wrap up in a second, but there's so much happening in the space, as we've discussed. I'd love to know, not even on a professional note, although I think that will be involved: what's the most exciting thing for you at the moment, Owen? eoin: That's a hard question. I've got a couple of answers to this. One, our kids are almost four and almost two, and they occupy a tremendous mind share. It's really exciting to see them both grow up, become people, engage with the world more, and be excited about things. That brings me joy every day. hugo: Beautiful. eoin: And also, on the [00:52:00] professional front, we are building out a really strong team. I think we've really laid the groundwork at Lightspeed to do some really exciting things with the data we're sitting on top of.
And I'm really excited about the next year, what we can bring to the firm, and how we can make everyone more effective with it. hugo: Awesome. And I do love that you mentioned your children as well. I'm interested if we can converge these threads in some way, because we've both worked in machine learning: I'm gonna ask you for an active prediction, and there'll be wide error bars on this. With all the work you're doing, I'm wondering, from your own perspective, what do you think will be possible in one, two, and/or five years? Or is it too tough to say? eoin: I think back to how we thought about self-driving when I was at Uber in the early days, and I think we're likely in a similar boat, which is that at the moment everyone is drastically overestimating what will change in the next one to three years, and drastically underestimating what will change on the five-to-ten-year time horizon. At the time at Uber, we were like, oh, by 2018, 2019 there's [00:53:00] gonna be tons of self-driving cars on the road. And there were not. But now I take a Waymo multiple times a week across the city, and I don't even think about it anymore. So, to weasel out of giving you a concrete prediction, I think we will be underwhelmed by the change on maybe the one-to-two-year time horizon versus what our imaginations feel is possible, but we'll also drastically underestimate how different things will be in five to ten years. hugo: I love it. So to paraphrase: short-term bearish, long-term bullish. eoin: Sorry — super bullish on both in general. But I think kids will still be going to school, and I think we'll be using LLMs in day-to-day life in many places, but I'm not sure the world will look massively different; in five to ten years, though, it could look incredibly different. hugo: Totally. Well, Owen, I just wanna thank you once again, not only for your time and generosity, but for all of your expertise, and for bringing your wisdom to a new audience. So appreciate you, man. eoin: It was an absolute pleasure. Thank you so much for the time. hugo: Thanks so much for listening to High Signal, brought to you by Delphina. If you [00:54:00] enjoyed this episode, don't forget to sign up for our newsletter, follow us on YouTube, and share the podcast with your friends and colleagues. Like and subscribe on YouTube, and give us five stars and a review on iTunes and Spotify; this will help us bring you more of the conversations you love. All the links are in the show notes. We'll catch you next time.