VG-Eric-SciPy-final - 12:8:2025, 11.52 am ===

eric: [00:00:00] Data scientists have been operating in notebook land for the longest time. They're not doing production-scale stuff, necessarily. And so for them, the skills gap is in taking their exploratory work, let's say now they're building with gen AI, they can't necessarily take it out to production. So there's a skills gap with respect to not just the simplest things, like CI/CD and deployment and secrets and that sort of stuff, but also monitoring stuff in production, and the mindset of exactly how to babysit software that's running. That is the thing that a software developer, which is the next persona, really knows how to do. But they're not used to dealing with stochastic systems.

hugo: That was Eric, who leads research data science in the Data Science and AI group at Moderna Therapeutics. Eric and I were having an early-morning breakfast about a month ago at SciPy, the scientific Python conference, talking about what data [00:01:00] scientists, machine learning engineers, and software engineers each need to know to build and maintain AI-powered systems, and the different skills they bring to the table. Eric was like, dude, we've gotta start recording this. So we did. You may hear us greeting friends as they walk by, and even talking about the frittata we were eating, but in between bites we got into ephemeral software, why curiosity about data matters, how big orgs can make space for experimentation, and how non-technical teams are prototyping their own tools. I'm Hugo Bowne-Anderson, and welcome to Vanishing Gradients.[00:02:00]

eric: I'm here with Dr., well, I'm here with Dr. Hugo Bowne-Anderson. Right, so just back to what we were saying: is that knowledge less prevalent, the knowledge of how to deal with LLMs, how to program? 'Cause you're mentioning people from Netflix, from Amazon, the big tech companies. This is where the innovation is happening. But yeah, I would imagine, for example, that people at Meta shouldn't need to take this course. Or people at Amazon, they're building their own LLMs as well. They may not necessarily need to take the course, but they're taking it. So then what's the motivation that you see for them in wanting to come and take the course? Where's the knowledge gap that you're providing added value on? And I mean this in a good way, not in a bad way, because I love the [00:03:00] course.

hugo: It's a great question. I think there's a skills gap and a knowledge gap in different dimensions. For a bit more context, Eric and I were just talking about the type of community we're building and how excited I am. People from all walks of life can come and build immediately now. And if you have curiosity and stubbornness, you know, those two things get you a huge amount of the way. And then I mentioned that we have people from Netflix and Amazon and DoorDash and Instacart and Groupon, like, massive tech companies. And so the question is: do people at these companies have the skills and knowledge to build and maintain and iterate on LLM-powered software? Right. Yeah, I think we decided on the smoked salmon frittatas. Right. That's it. That's it. Cool. Pardon me? Uh, split, if that's okay. Yeah. I'm still waiting on an espresso. Yeah, yeah. Cool. No rush, just wanted to make sure. Okay. So there is the skills gap. Yeah. And so, I mean, your question is a very good one.
Let's think about who we're talking about. So there's a huge influx of software engineers, yes, who wanna [00:04:00] build LLM-powered software, right? And they don't necessarily know this new software development lifecycle, which involves essentially incorporating the disorder, entropy, chaos, and organicness of data from the real world into software, right? Yeah. As opposed to shipping something more static. Yeah. So that's the first type of person. But then think about the data scientist on the other side, who knows how to think about data, and perhaps iterate on some product, but doesn't necessarily have all the production and software skill sets. Yeah. Yeah. So I think for both those people, it provides a lot of value. Ah, okay. The question then is: the machine learning engineer, the person who's built ML-powered software, right? They're uniquely suited to this. So once again, we've discovered that the machine learning people are the ones who have the skill set, or let's say the mindset. The skill set changes slightly now, for several reasons. I mean, we've got quote-unquote prompt engineering, prompt alchemy. We've got different versioning concerns. We've got significantly different monitoring challenges to classical [00:05:00] ML software monitoring as well, right? 'Cause of not only the natural language stuff, but tool calling, process calling, all these types of things, right? So you're right, the people at Netflix and Amazon, the ones who've been shipping hardcore ML stuff for years, are best suited. But even now, the ability to build real AI-powered systems, as opposed to merely owning components, I think has changed the game completely. So there's a kind of systems thinking that we can now really dive into and add value for a lot of people in the course and community. The other thing, of course, is that we don't even have the right bloody abstraction layers tool-wise, right? The mindset of thinking I need to hand-roll my own custom JSON viewers with annotation tools, and that type of stuff. So I hope that helped in a few different dimensions.

eric: So if I can maybe summarize where the skills gaps are: the data scientists have been operating in notebook land for the longest time. They're not doing production-scale stuff, necessarily. And so for them, the skills gap is in taking their exploratory work to production. Let's say now they're building with [00:06:00] gen AI: they can build the prototype in a notebook really quickly, but they can't necessarily take it out to production. So there's a skills gap with respect to not just the simplest things, like CI/CD and deployment and secrets and that sort of stuff, but also monitoring stuff in production, and the mindset of exactly how to babysit software that's running. That is the thing that a software developer, which is the next persona, really knows how to do. But they're not used to dealing with stochastic systems. Right. And not only stochastic systems, they're used to formal requirements, basically. Most software developers nowadays build form apps, CRUD apps, where you have forms of some kind that need to be filled. And now suddenly LLMs can fill those forms for you, but they are stochastic, so they're not necessarily deterministic. And so verifying the outputs of those systems is [00:07:00] a more important skill for a software developer than it has ever been, right? But most software developers don't come in thinking too hard about that problem. You've got form validators and the like, but it even brings up correctness: what is the nature of tests?

hugo: We've had software developers come in and say, oh, your tests aren't passing a hundred percent of the time, what's wrong? And I say, no, it's fine. If they were passing a hundred percent of the time, you're not writing the right tests. Right, right. Exactly.
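A minimal sketch of the kind of test Hugo is describing here, assuming pytest for discovery and a hypothetical ask_llm helper that wraps whatever model call your app makes: the point is to assert a pass rate over repeated trials rather than demanding every run succeed.

```python
# Sketch: testing a stochastic system with a pass-rate threshold
# instead of all-or-nothing assertions. Run with pytest.
from my_app import ask_llm  # hypothetical helper returning a string

N_TRIALS = 20
MIN_PASS_RATE = 0.9  # tune per use case; 100% usually means weak tests


def is_correct(answer: str) -> bool:
    # Cheap check on the output; swap in schema checks or an
    # LLM judge as appropriate for your system.
    return "paris" in answer.lower()


def test_capital_question_pass_rate():
    passes = sum(
        is_correct(ask_llm("What is the capital of France?"))
        for _ in range(N_TRIALS)
    )
    assert passes / N_TRIALS >= MIN_PASS_RATE
```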
hugo: And also, I loved your point about monitoring challenges for both personas. I mean, there are really two iterative observability loops: one in development and one in production. Right. That's actually why I love working with Stefan Krawczyk on the course, who was at Stitch Fix and built Burr and Hamilton. Now he's working on agent infrastructure at Salesforce. Yeah, he just brings all of that hardcore MLOps software stack [00:08:00] stuff, so ops and infrastructure and mindset for data-powered applications. Okay. Right. Gotcha. And of course, like you, I bring a lot of the data and ML stuff. Right. But once again, the point is that all we're talking about are data-powered products. That's all we're talking about. Right? Sorry, we were talking about data what again? Data-powered products. Data-powered products. Yeah. Whether it's ML or AI, all that type of stuff, and it's really messy data now, but that's what we're talking about. Right, right, right. Exactly. Exactly. Yeah. I do think there's another thing people really love about the course. And of course there are other personas, like product managers and UX people who've started coming in, who've done a Python bootcamp or something like that.

eric: Yeah, and they're getting a huge amount out of it.

hugo: But I think the other thing is, I mean, all the guest speakers, such as yourself, and the guest workshops: we have people who are just on the bleeding edge. That type of stuff, I think, people could find online. And of course all the credits we give so people can build immediately. Yep. All of that stuff people could find online. But it's also about packaging, in terms of bringing stuff back through the adoption curve. Oh, right. Having myself and Stefan and the community as curators of these-are-the-things-you-can-learn. People are saving a lot of time, right? That's why they're paying [00:09:00] for this, if that makes sense. Yeah, exactly. Yeah.

eric: Okay, so basically you're playing the role of what professors do: they curate the syllabus. The internet is the university of today.

hugo: I like that analogy, and I really like that. Let's think it through. I love colleges, I love campuses. Like, I've spent most of my professional life in academia. In academia, right? I dunno if you know this about me: my dad was an English professor, he's retired now, specializing in American literature. I spent my childhood in an English department in a university, right? So I've literally spent a lot of my life in universities. One of the reasons I went to DataCamp originally, though, and I think I'm being a bit provocative here, is that universities are in some ways a historical quirk of what we needed to build to educate people: bringing people from around the country, from the provinces, wherever we were from, to one central location. Yeah. 'Cause we didn't have something which could send information around the globe. Yeah. Right. And we sat there in broadcast mode, because that was the most efficient means of distributing information at the time.
And don't get me wrong, I think in-person stuff is [00:10:00] incredibly important. I think group work is incredibly important. But my ability to impact, like these DataCamp courses: the first ones had something like 4 million students worldwide. Yeah. And of course we still come to SciPy, 'cause we love teaching here. Right. But my only point is that these things are the modern equivalents of colleges. Yeah. I also think, having spent, as I kind of half joke, too much time in academia, I love that we get to reach and impact and teach people who are working at the moment as well. Yes. And who have carved out time to learn while working. Yeah. Honestly, I think nearly every workplace should have a day a week actually dedicated to learning. I actually think now, with AI, it's probably one of the most important things for an organization.

eric: That's actually a cool point, right? Because I've interviewed colleagues who are autodidactic, right? And the way they're using artificial intelligence tooling to learn new topics is fascinating. They're able to, [00:11:00] how do I put it? It is way more seamless now to incorporate learning into your day than it was maybe a few years back. Yep. Just because the ability to have what David Vena calls a galaxy brain at your fingertips is, number one, so accessible. Of course, those who are the best at autodidactic learning have also learned the skill, and put into practice the skill, of verification on the fly, right? So they're not blindly trusting an LLM output. They're taking what's given back from an LLM and actually applying prior scientific judgment. They're all scientists, so they're applying judgment, logical thinking, and that sort of stuff to whatever is being output. I can give an example from my own experience, right? I'm not trained in analytical chemistry, so there's some basic analytical chemistry terminology that I'm not very familiar with. Way back, this is 2023, when Moderna had first access to [00:12:00] the OpenAI API and we were all on GPT-3.5, not even the most powerful model, I was starting to ask questions about undergrad-level analytical chemistry, which I knew would be part of the training set. I was learning what percent A and percent B are in a chromatography column, right? It's not obvious to someone who is microbiology- and biochemistry-trained what percent A and percent B should refer to. Totally. But I was able to get a first-pass definition from GPT-3.5, and then, following that, I would ask my colleagues sitting just a few rows down on the same floor: hey, Ted, is this right? Is that what percent A and percent B are? That was my way of verifying the outputs. Or going on Google and cross-referencing an online textbook from a reputable university. Right. Autodidactic learning can be done on the fly.

hugo: Right. There's a great history of autodidacts. A lot ended up as [00:13:00] scientists; a lot ended up as dictators as well. Part of the difference there? The autodidacts who would then go to the community and really talk about it: that's one of the ways you end up being a researcher and scientist. Right. The other thing I find really lovely in that story is you cared about what the columns of data actually were.
eric: Oh, yeah.

hugo: And this is actually a serious issue, right? The amount of people who came into ML because they thought it was sexy, and they'd go to Kaggle competitions or whatever it is, and focus on the hot models, and not actually think through or work through what their data is, because you can normalize all of these things. Oh yeah. And of course feature engineering plays into that. But that's something I've found in my consulting work and education work: when I'm helping people figure out if their LLM-powered software is working properly, and you know this, I say to them, have you looked at your data? And they're like, what do you mean? I'm like, okay, what are the failure modes? And they're like, I don't know. And I'm like, let's look at the traces in a spreadsheet. Give each one a plus one or a minus one. Fifty percent of people say to me: can I just get an agent to do that? And [00:14:00] there are several problems with this. The most serious, for me, is the lack of curiosity about what your system is doing, the lack of awareness that this is the way to actually figure it out, the lack of intimacy with the organic product you are building. And I use the word organic in quotation marks: these things shift constantly, right? So there is a sense of the organic to them, and that's, once again, why the people with data mindsets, when paired up with people who are very good at infrastructure and security and all of these things, are, I think, the perfect people to build these products. Mm-hmm. As it happens, machine learning engineers generally have enough of both, even though titles don't mean that much, as opposed to your classic software engineer on one side and data scientists and scientists on the other. Right. But I do think you can teach all the software stuff. You can't teach an interest in looking at data; you can't teach that curiosity. That's something deep within us, isn't it? That's why we became scientists.
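A minimal sketch of the spreadsheet-style trace review Hugo describes, assuming your traces are exported to a CSV; the file and column names here are hypothetical.

```python
# Sketch: pull LLM traces into a table and hand-label each one
# +1 (good) or -1 (bad), then read the failures yourself.
# Assumes a traces.csv export; column names are hypothetical.
import pandas as pd

traces = pd.read_csv("traces.csv")  # e.g. columns: input, output

labels = []
for _, row in traces.iterrows():
    print("INPUT: ", row["input"])
    print("OUTPUT:", row["output"])
    labels.append(1 if input("good? [y/n] ").strip() == "y" else -1)

traces["label"] = labels
traces.to_csv("traces_labeled.csv", index=False)

# The failure modes live here; this is the part worth staring at.
print(traces.loc[traces["label"] == -1, ["input", "output"]])
```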
hugo: Now for a [00:15:00] word from our sponsor, which is, well, me. I teach a course called Building LLM-Powered Software for Data Scientists and Software Engineers with my friend and colleague Stefan Krawczyk, who works on Agentforce and AI agent infrastructure at Salesforce. It's cohort-based, we run it four times a year, and it's designed for people who want to go beyond prototypes and actually ship AI-powered systems. The link's in the show notes.

eric: Yeah. I sometimes wonder about the fundamental mindset shift that needs to take place for someone to become more of a data person. Is it that they have to care enough about the thing that they're building, and be invested enough, and then that just unlocks it? Or is it genuinely an innate trait, so intrinsic to a person that no amount of extrinsic factors can flip that switch?

hugo: Amazing question. I think there's a particular persona that it's fine for, and that's data nerds. That's us, dude. Our people. We [00:16:00] just look at that stuff, right? Our tribe. We get excited about it. Mm-hmm. And once again, that's one of the reasons you and I worked in research: we found things that we were very interested in researching. But to your other point, for people who aren't data geeks, I think, exactly, they should be building things that they're deeply interested in. Mm-hmm. Because at its core, to be deeply motivated to build something, you also have to be sufficiently deeply dissatisfied with the state of the world in some way.

eric: It could be the state of your knowledge: you're curiosity-driven, so the state of your knowledge isn't good enough right now, and you're deeply dissatisfied with that, so you build as a way of learning. Or it could be that there's actually a gap, like software that I genuinely hate. I hear complaints about, say, finance reporting software and the like, and so people just go and build their own clone, 'cause they're deeply dissatisfied with it. Right? So how do we wake up, how do we awaken that? Simultaneously, and [00:17:00] sorry, just to preface it, you can't be so deeply dissatisfied with the world that you're not grateful for what we have, right? So how do we awaken that sense of dissatisfaction?

hugo: I think dissatisfaction is one way. But I think you can also be deeply satisfied and still find things that you are very excited about for a potential future, right? And so I do think, I'm gonna call it a matching problem. Now, I don't think a technological solution is what we're looking for, although it might play a role. I mean, David Graeber has this book called Bullshit Jobs, and it's essentially an argument that increasing numbers of people worldwide are working what they would consider bullshit jobs. Hmm. The definition is one in which they think they're making zero or negative impact. Yeah. Nice to see you. Nice to see you. Yeah. You know Eric? Hey, Eric. Sorry. No worries. It's all good. Hey, man. Oh, you're busy? We're just casually talking, decided to mic up. Yeah, Eric was like, we should do a podcast now. So [00:18:00] see you soon. pixi, they're doing the pixi tutorial today. Oh, okay. I dunno if Jacob's doing it. He might be. I think Matthew might be teaching it. And so, yeah, dissatisfaction: that people are working jobs that they think are bullshit. And there's a taxonomy of these jobs as well. He does single out middle management a bit too much, because I do think middle managers can have a function.

eric: Dude, I'm a middle manager. I'm trying to minimize the amount of BS that I do.

hugo: Well, I think it's great. Where I got the concept: middle management as glorified bouncers for the executives. Like the dudes at a nightclub, they just don't let you in. It's like, hey, stay away. The resource doesn't wanna see you, bro. But I do think what we wanna do is ideally help people find work that they can be deeply passionate about. So I was chatting with some people recently who are working in clean energy, and they've got these maps of where the solar panels and electricity grids and that type of stuff are. I was chatting with them about building a map you can talk to, right? Yeah. So, speech to map, right? Yep. Where's the region that [00:19:00] has this type of rainfall and these grids and these panels and that type of stuff? The people I was talking to love looking at their data, because they really care about clean energy. Mm-hmm. Right. So they've found that.
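Purely illustrative of the "map you can talk to" idea: a structured query object that an LLM (fed by speech-to-text) could populate from a question like the one Hugo quotes, then apply as a filter over region data. Every name here is hypothetical.

```python
# Sketch: natural-language map question -> structured filter.
# Hypothetical throughout: the schema, regions.csv, and its columns.
from dataclasses import dataclass

import pandas as pd


@dataclass
class MapQuery:
    min_rainfall_mm: float
    must_have_solar: bool
    must_have_grid: bool


def run_query(q: MapQuery, regions: pd.DataFrame) -> pd.DataFrame:
    mask = regions["rainfall_mm"] >= q.min_rainfall_mm
    if q.must_have_solar:
        mask &= regions["has_solar"]
    if q.must_have_grid:
        mask &= regions["has_grid"]
    return regions[mask]


# In the full system, speech-to-text plus an LLM would populate
# MapQuery from "where has this rainfall, these grids, these panels?"
regions = pd.read_csv("regions.csv")
print(run_query(MapQuery(800.0, True, True), regions))
```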
hugo: I think you want that. And the other thing you want is for incentives to be better aligned. I think, sadly, a lot of people who build are incentivized so that their direct career path is measured by how much they ship, not how much they maintain. Yeah. And the idea of even decoupling shipping from maintaining doesn't make sense in AI-powered software the way it does in traditional software, because the maintenance side of things is just so much more important now.

eric: Yeah, the maintenance is the shipping, because, as we know, a lot of the time you don't even know what to measure before you ship, right? Right. You've got an idea.

hugo: The only other thing I wanna say, and I don't wanna get too big-picture, is that this is a big problem with education in general. I understand what happened in the 20th century, but schools are set [00:20:00] up in broadcast mode. We rarely go to students and children and say, hey, what interests you? I think this happens in the home, not in the school. But figuring out what is in a young human that can help them manifest their love and authenticity and contribute their creative spirit to the world, and finding what the needs of the world are: it's a matching issue there too, as opposed to having everyone be production-line workers, which I understand the 20th century needed. That's actually the other point with respect to curiosity. In startup land, we've always said that in an early-stage startup you want a lot of generalists, explorers. We use the analogy of camping and exploring, where they'll set up a quick tent, jump across the river, chop down some wood, make a fire, move on. Then you get to a stage where you have some settlers. Mm-hmm. Who can help build a medical tent and a hut and a kitchen, and maybe other things you want. Different types of personas. And I think, once again, the 20th century was very good at creating settlers, people who work for large organizations. Now, with AI, we need a lot more people to be continuous explorers, [00:21:00] actually. And to my point about people being given a day a week to learn: I think we need more time to explore as well. Otherwise, we won't adapt to this rapidly changing landscape. And that's also why I think individual entrepreneurs and small teams have been wonderful at adopting AI and changing quickly, and large organizations haven't, right? Not at all, in terms of their own internal processes. So I'm actually very excited for the future of small teams. Yeah. To really, quote-unquote, disrupt. I'm sorry to use that word, but we are on the West Coast.

eric: Yeah, you don't have to be sorry about that, man. Because, like, disruptive innovation: I've been listening a lot to Clayton Christensen's books, The Innovator's Dilemma, The Innovator's DNA, then there's Competing Against Luck, and then of course he has one more philosophical one, How Will You Measure Your Life, I think is the title. That one I read on my first paternity leave. But disruptive innovation is all about the processes that are set up, right? Every process is tuned perfectly [00:22:00] to get the results it was designed for. So the results that you see today are perfectly produced by the business processes that are in place. So if you're not getting experimentation and innovation, it's because you don't have the business processes in place to have experimentation and innovation. If you set up a process where only one team can do innovation and experimentation, that's also going to generate a result where all your best ideas are coming from perhaps one team, and they're not necessarily matched to the jobs that need to be done. Jobs theory is another of Clayton Christensen's big ideas.
And they're not matched to the jobs that need to be done because those folks, the ones doing the innovation work, are not the ones dogfooding the tools or dogfooding the product, right? They're not intimately involved in the development of whatever solution needs to be developed for the problem that needs to be solved. And that's, I think, the really cool thing you're seeing, and therefore also stating: [00:23:00] small teams' business processes, or maybe the lack of any, are set up for innovation, whereas large teams have a whirlwind of stuff that needs to be maintained and continued, and that's gonna stop them from that kind of innovation. So then the way for a large firm to out-innovate is to actually set up an explicit business process that says you don't have to work five days a week on the whirlwind; you work and learn for one. And actually, not even five or four days a week. I hate thinking about this in terms of a time breakdown. It's really a mental-capacity breakdown. Right. Because now we're in creative mode, so who cares about the time? Time will fluctuate. Mm-hmm. But it's really: what's the breakdown in your head? That space to explore needs to be given by the processes that are present within an organization.

hugo: I totally agree, and I think one place I've seen these types of processes done well: a few of my [00:24:00] friends who work at Meta are in a large organization, with all the challenges and also the positive things that you get in large organizations. A few of them are in small parts of Meta that have essentially been given the mandate of a startup, and they've got all the resources they need internally. Mm-hmm. So they're shielded from the large organization, and they can make decisions around most of the tools they adopt and that type of stuff. They're given the freedom of a startup inside a large organization. So that's an example of that.

eric: The other thing, actually, Hugo, I wonder: you know about the Bezos API mandate? Yes. I actually thought about it several times while you were just talking, because that seems to be a way to help foster this in a large organization: that concept of standardizing the interface by which people exchange information, and allowing them to just decide how they want to implement it. Yep. That seems to me one of the more obvious ways. It's a bit CPS as well, isn't it?

hugo: So firstly, [00:25:00] I agree, and I'm of two minds about the Bezos mandate. 'Cause I do think, in all the ways we've been discussing, it's amazing. It's also part of the cybernetic nightmare: it can be liberating, and then there's a vision where we're all just plugged into different parts of the API and lose our humanity as well. Yeah. But if there are things that help humans and don't divorce us from our human nature, I'm all for them. Yeah. I think the Bezos dream, especially what we're seeing in, they call them Amazon fulfillment centers, Eric. They fulfill the customer. But what happens in those places? Right. That's not due to the API mandate; that is something else. But it's the same philosophy, I think, one that prizes capital efficiency and computation, which are very important things, don't get me wrong.
But while developers, of course, are deeply protected in ways that factory workers aren't, I do think it's the same philosophy that would dispose of workers if APIs could do the job better. Having said all that, I also think the other thing about creating this space to learn, to explore, another word for it is to experiment, is that we should look at organizations like Netflix, like Atlassian, [00:26:00] like certain parts of Google, that have developed a deep culture of experimentation. 'Cause remember, experiments, not to you and me, but experiments seem like a bad idea in some ways. People say, if 5% of the things that you do are impactful, focus on that 5%. And I call bullshit on that, because how do you find out what that 5% is without doing experiments? Right. So experiments don't necessarily deliver in the short term. That's why I used to say we needed journals of negative results in science. Yes. But negative results are incredibly useful for the long-term health and success of an organization. And people also say, 5% of models make it to prod or whatever, and we need to get more. And I'm like, no: if you've run your experiments quickly and five percent of them end up in prod, you are winning. Right? So the idea of having a culture of experimentation, and allowing people that sense of freedom and liberation while still delivering: I think we can learn from those types of cultures what kinds of processes can then give us room to explore AI. And we need to, man, [00:27:00] because the developments are happening so quickly. Like, I do some executive coaching for AI strategy, and one thing I'm trying to help people with is a mindset shift: I can show you what you can do with all the tools today, but tomorrow it may be a totally different story. So you actually need to reorient everything, right? And not be tools-first, but ways-of-thinking-first. Right. Mindset. That's it, mindset first. Yeah.

eric: For me, mindset is a bit of a propaganda term. Moderna has the Moderna Mindsets, so I try to rephrase it for myself just so I don't sound like a Moderna robot. Moderna-brand ways of thinking. It's actually, it was funny, just completely tangential: it reminded me of a talk that I did at Data-Driven Pharma recently. We hosted it at Moderna, and I needed to go onto the guest wifi. And the guest wifi, as always, has a landing page, and the landing page is the Moderna [00:28:00] homepage. And I was like, okay, that's propaganda in front of everyone. Amazing. But I think what was really cool about that one: it was a talk in which I was trying to encourage folks in the life sciences space, if you're a data scientist, software developer, software engineer, et cetera, if you're some kind of technical computational biologist, some kind of technical person within the life sciences, it's time to start building your own tools. The barrier to building your own tools is much lower. And the way that I wanted to communicate it, I was going back and forth on how to do it. I could have made slides, right? But that would just be a boring monologue at that point. Right. Do you wanna show people? You wanna build with people, right? Yeah. So what I did, quoting Andrej Karpathy, who said making slides feels so difficult now that we know Cursor exists and that a Cursor for slides should exist: I went and built an LLM slide generator, a LlamaBot that generates markdown slides using structured [00:29:00] generation, and did it live in front of everyone. The slides were generated live from the blog post, which is exactly the message I wanted to give. I generated the slides and then, finally, presented off the LLM-generated slides.
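A minimal sketch of the structured-generation idea, not LlamaBot's actual API: ask a model for a typed deck of slides via a Pydantic schema (here using the OpenAI client's parse helper, available in recent versions of the openai package), then render reveal.js-style markdown.

```python
# Sketch: structured generation of markdown slides from a blog post.
# Not LlamaBot's actual API; the model name is a placeholder.
from openai import OpenAI
from pydantic import BaseModel


class Slide(BaseModel):
    title: str
    bullets: list[str]


class SlideDeck(BaseModel):
    slides: list[Slide]


def deck_from_post(post: str) -> str:
    client = OpenAI()
    resp = client.beta.chat.completions.parse(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize the post as slides."},
            {"role": "user", "content": post},
        ],
        response_format=SlideDeck,  # schema-constrained output
    )
    deck = resp.choices[0].message.parsed
    # Render reveal.js-style markdown, one "---" per slide break.
    return "\n---\n".join(
        "## " + s.title + "\n" + "\n".join("- " + b for b in s.bullets)
        for s in deck.slides
    )
```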
hugo: It's super cool, man. Aw, so cool. Yeah, just remind me, what was that tool we used to use? Thank you. Can we get the check as well, please? What was that tool we used to use? Ah, this is back in the day, man, nearly a decade ago, where you could build slides that had executable Python in them. Reveal? Yeah, reveal. Was that Jupyter notebooks? Yeah. Who was the guy, cool guy, I think he was South American, but worked for Continuum, Anaconda maybe. Oh, is it him? Oh, no, that would've been PyScript. No, you're talking about, oh shit, what's his name? I love him. Hakim? No, that's reveal. I can't remember either. But yeah, reveal.js with, what is it, the [00:30:00] slides mode for Jupyter notebooks, right? Yeah. Yeah. So I am actually interested in how you use AI-assisted coding all over the place. Gimme your top three, or bottom three, use cases.

eric: Ah. Menial work. Like, for example: Claude, go and reformat all of my examples, which are in this .py file, and let that cook in the background. A second big one, I recently blogged about it, is one-hour builds. Basically, one-hour builds. I saw on SAP Concur you can upload a receipt and it'll just fill out the expense details for you. And I'm like, I can do that. So I got Claude Code to do that for me, and with the weirdest tech stack ever: Notion for the database, FastAPI for the backend, HTMX for the frontend. Amazing, right? A Next.js app with Postgres is boring, right? 'Cause everyone does it that way. [00:31:00] Mm. So I'm like, what if I put these weird corners of the tech stack together, could it do it? And it could, right? That was fascinating to see. So that's the second one. And then, more broadly, that second one illustrates just building tools for myself, to up my own productivity. Right. And then there's some lightweight analyses that I can, well, no, those are the big two. 'Cause even for the lightweight analyses, I end up just doing tool building for myself first, and then using the tools. So I think those are the big two, right? Menial work and small tool builds.
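A toy sketch in the spirit of that one-hour build, with the LLM extraction and the Notion write stubbed out; the endpoint and helper names are hypothetical (needs fastapi, python-multipart, and uvicorn to run).

```python
# Sketch: receipt upload -> expense fields, FastAPI + HTMX style.
# extract_expense and the Notion write are stubs, not Eric's code.
from fastapi import FastAPI, UploadFile
from fastapi.responses import HTMLResponse

app = FastAPI()

PAGE = """
<form hx-post="/expense" hx-encoding="multipart/form-data" hx-target="#out">
  <input type="file" name="receipt"><button>Submit</button>
</form>
<div id="out"></div>
<script src="https://unpkg.com/htmx.org"></script>
"""


@app.get("/", response_class=HTMLResponse)
def index() -> str:
    return PAGE


def extract_expense(image_bytes: bytes) -> dict:
    # Stub: in the real build, a vision-capable LLM would pull
    # vendor, date, and total off the receipt image here.
    return {"vendor": "?", "date": "?", "total": "?"}


@app.post("/expense", response_class=HTMLResponse)
async def expense(receipt: UploadFile) -> str:
    fields = extract_expense(await receipt.read())
    # Stub: Eric's version wrote the row to a Notion database here.
    return "<ul>" + "".join(
        f"<li>{k}: {v}</li>" for k, v in fields.items()
    ) + "</ul>"
```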
hugo: These examples are so nice for several reasons. One is that they tear apart the coding conversation, in terms of the people saying, I'm gonna displace Salesforce with something I vibe coded in a weekend, and other people saying no way. These two examples make that conversation almost irrelevant, I think, because you actually showed two fundamental, very important use cases. And the tool building, both the building of [00:32:00] single-use and multi-use tools for yourself, speaks to the point that what we're doing with vibe coding, or software composition, or AI-assisted coding, is increasing the surface area of what's possible with software. Yes. And the definition of software. And once again, I don't wanna get too big-picture, and I have talked about this before, but building software has historically required a lot of really good knowledge workers. It's super expensive to build. Yes. That means you need to get a customer base of a certain size in order to fund the building and make it profitable. And what you need to do then is cover a lot of edge cases, 'cause you've got such a customer base. So we think of software, consistent, reliable software, as something which needs to solve a lot of edge cases, purely for that historical reason: we've had a lot of customers, who will undeniably have a lot of edge cases, right? So now, the fact that we can create just-in-time software, or ephemeral software, for ourselves and our teams, where we [00:33:00] don't need to do those things, doesn't mean it's any less useful. And I actually see a sprouting. I went to the Amazon Rainforest recently, and I don't wanna get too tech-organic, and we've used terms like Cambrian explosion and crap like that before, but it does feel like a space where software is gonna take on all these wonderful forms. We're in a beautiful garden where we're just sowing the seeds, and we're about to see so many amazing things sprout.

eric: Yeah, yeah. I like the point you made about ephemeral software. That's a good one. And also, SaaS is hard. All the SaaS doomsayers, they've got a point: SaaS was supposed to solve common, repeatable problems. Now, what's gonna happen when we can do bespoke builds? Right. I think that is something to really think about. Lemme just check here, where do I put this? And of course I'm half joking, but some of the ways SaaS companies keep their ARR going isn't necessarily through product, it's through vendor lock-in, so, [00:34:00] right, that's what I don't like. I definitely don't like the rent-seeking practice. Yeah. But if it's value-giving, that's awesome.

hugo: And what have you seen at Moderna for, let's say, non-technical or less technical people using AI-assisted coding to build prototypes, whatever it may be? Does that happen much?

eric: That happens quite a bit. Sorry, hold on, let me just get this receipt done correctly. So, total, put it there, put my signature there, and then quickly take a photo so that Concur can do its thing. I built a tool to do part of what Concur does, but it only works with my own Notion database. Okay, there we go. In terms of prototypes, stuff that we built, let's keep talking as we walk. Yeah, let's keep walking and talking. I think this will absolutely work. Okay, here we go. What's fascinating about this format is that all of our other micro-interactions with the external world are still captured too, so it's very organic. Okay. So, stuff that we've worked [00:35:00] on: I actually have colleagues who built a bot for Q&A about finance, and it wasn't even built by the data scientists, it was built by the finance people, 'cause we gave them the tools to do that. They basically did RAG on the finance documentation and put it in ChatGPT Enterprise. And now, and I recently just tested this, I have the ability to go into that custom GPT, and instead of needing to bother other finance colleagues to field Q&A, I can get my answer in minutes. And they're verified answers, with quotations from the original documentation, which I can double-check and cross-verify.
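A minimal sketch of the pattern the finance team used, retrieval plus quote-grounded answers. The naive keyword scorer here stands in for real retrieval, the file names are made up, and in their case a custom GPT handles all of this plumbing for you.

```python
# Sketch: RAG over internal docs with quoted, named sources.
# Naive keyword scoring stands in for embedding search; the doc
# files and model name are placeholders.
from openai import OpenAI

DOCS = {
    "travel-policy.md": open("travel-policy.md").read(),
    "expense-policy.md": open("expense-policy.md").read(),
}


def retrieve(question: str, k: int = 2) -> list[str]:
    words = set(question.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda kv: -sum(w in kv[1].lower() for w in words),
    )
    return [f"[{name}]\n{text}" for name, text in scored[:k]]


def answer(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content":
             "Answer ONLY from the context. Quote the passage and "
             "name the source file so the reader can verify."},
            {"role": "user", "content": f"{context}\n\nQ: {question}"},
        ],
    )
    return resp.choices[0].message.content
```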
eric: And instead of having that question weigh on my head for an entire day while I wait for a very busy colleague to respond, I'm done with my question in five minutes, right? So it's such a productivity gain, such an efficiency gain. Incredible. And then there's other, [00:36:00]

hugo: Can I just ask about that example? Yeah. Down the hall. Okay, thank you. How does she know? I walked by once. So, I am interested: when your colleagues built that, and you used it, I love that you get sources and you can verify. Was there a basic evaluation process, or?

eric: There had to be. Actually, I don't know how they built it, or what the process was. But I would imagine that if they did do tests, they would've checked that the most common questions asked of the finance team were part of the testing process.

hugo: And once again, this speaks to the earlier point about people who care about the product and the data. Of course, people working in finance will make sure. Yes. And of course it helps that it's regulated, right? Yeah, that's right. Exactly.

eric: Okay. Alright, Hugo, I think we should maybe pause. Yeah. But just to note: we're here at SciPy. That's right. And you're about to teach Building with LLMs Made Simple, using LlamaBot. Yep. And I'm your [00:37:00] TA. Yeah. Awesome. This is gonna be fun, man. Great chat, man.

hugo: Thanks for tuning in, everybody, and thanks for sticking around to the end of the episode. I would honestly love to hear from you about what resonates with you in the show, what doesn't, and anybody you'd like to hear me speak with, along with topics you'd like to hear more about. The best way to let me know currently is on Twitter: @vanishingdata is the podcast handle, and I'm @hugobowne. See you in the next [00:38:00] episode.