The following is a rough transcript which has not been revised by High Signal or the guest. Please check with us before using any quotations from this transcript. Thank you. === tim: [00:00:00] So now fast forward to AI, and in some sense everybody goes, oh, this is this new thing, it's making programming go away. And I go, I don't think it's making programming go away. It's making programming much easier so more people can do it, just like a compiler or an interpreter made programming a lot easier. Because, hey, you didn't have to write this huge assembler program to tell the computer exactly what to do. You expressed some higher-level wishes in the form of a structured program, and this compiler magically turned it into machine code, right? And I go, why is this different? If it's Claude that's magically turning your English into Python, which is turning your Python into machine code, which in turn is turning into a bunch of electrical signals, come on. It's transformations all the way down. We've just added another layer to the stack. Each time that has happened, more people can access the technology, more people can do cool things. And it's true even [00:01:00] in the application space. I think of a young friend of mine who was showing me his film that he created, and he did the film, he did the music, with all these tools that just made it possible for one person to create this thing that used to require an orchestra and a film crew. And suddenly, what happened? More people are doing it. And it also didn't mean that we didn't have people making big-budget productions with big film crews, but it meant that you now have millions of people who can make a living entertaining other people with video and music that they create and share on YouTube. I guess I'll just say the future giveth and the future taketh away, but in general, so far in computing, it's giveth more than it's taken away.
hugo: In this episode, I'm speaking with Tim O'Reilly, founder of O'Reilly Media, and one of the most influential voices in the history of technology. We're now at [00:02:00] another inflection point. AI is changing what it means to program, to build, and even to learn. Tim argues we're not witnessing the end of programming; we're witnessing the beginning of something far bigger. And just like in past revolutions, the real breakthroughs won't come from today's dominant players. They'll come from the edge. Tim shares a sweeping perspective on how programming is evolving in the age of AI, drawing parallels to previous computing revolutions, from Unix and the early days of the internet to today's LLMs. We explore how AI is reshaping software development, why decentralization matters more now than ever, and what it really means to build the future rather than buy it. We also get practical: what this all means for developers, for organizations trying to adapt, and for anyone thinking about how to learn, teach, or build AI systems responsibly. If you enjoy these conversations, please leave us a review, give us five stars, [00:03:00] subscribe to our newsletter, and share with your friends and colleagues. Links are in the show notes. But before we jump in, let's just check in with Duncan from Delphina, who makes High Signal possible. What's up, Duncan? duncan: Hey, Hugo, how are you? hugo: I'm fantastic, and I'm so excited for everyone to check out this episode with Tim. I'd love if you could tell us just a bit about what Delphina does and why we do High Signal. duncan: At Delphina, we're building AI agents for data science, and through the nature of our work, we talk to lots of experts in the field. And so with the podcast we're trying to share the high signal. hugo: And as you know, I'm excited about every episode we put out, but particularly this one with Tim O'Reilly. With everything you are doing with AI agents for data science, I presume a lot of it resonated with you.
duncan: Tim highlights how enabling these breakthroughs can be, which is obviously exactly right and super exciting. And the version of this where it takes away all of the boring, formulaic work. [00:04:00] I don't think anyone ever really liked working in assembly language, but there's gonna be interesting retroactive rationalization there, I think, which is that if you look back three years ago, I wouldn't have said that Python coding was formulaic. And so it makes you wonder, what work are we doing today that will be considered formulaic in the future? hugo: Without a doubt. And it's funny you should mention that, because something I chat a bit about with Tim is that I do a lot of work in education, and a lot of the time I had to teach people how to use APIs, and now I'm liberated to teach people how to build AI systems. So it is really a new zone and a higher level of abstraction where we get to do a different type of work. duncan: Love it. hugo: Awesome. Let's jump in. Hey there, Tim, and welcome to the show. tim: Oh, glad to be here. It's fun to talk to you again. hugo: It's always a pleasure, and this is the first time we're chatting with something we're about to put out publicly, so it's super exciting to do that as well. It's such an exciting time for computation, for programming, [00:05:00] and AI. So I'd love to start by just finding out what's most exciting for you at the moment. tim: Part of it for me is, if you look at my career for the past 40 years, I've been in the business of figuring out what's new and exciting, talking to the people who are making it happen, and then figuring out how to help other people learn what they're doing. And there are exciting periods in that career and there are semi-boring periods, and this is one of the incredibly exciting times. That's just the basic report from the front. You know, it reminds me of when we started; it was just the beginning of the spread of industry-standard operating systems. Yeah.
Literally, on the one hand over there, there was the commodity DOS operating system. We weren't part of that, but there was this thing that we fell into, which was Unix, which of course became Linux. The [00:06:00] PC had one operating system for commodity hardware, and Unix, this is the 1980s, was at the same time a sort of commodity software layer for many different kinds of hardware, so they were almost like mirror images of each other. Mm-hmm. Anyway, we happened to fall into this world of Unix, and it came out of a research environment. So maybe there was a research paper or two about some particular program or the theory of why they did what they did, but there wasn't very good documentation. And so we could just throw a stone in any direction and hit something and go, let's cover that. And then the same thing happened with the internet when it came out, where there was this whole new world that was just exploding onto the scene, and anywhere you looked, there were things to talk about. And then of course, as these things mature, you're putting finer and finer points on things, you're updating the information about things that have changed a little bit, [00:07:00] but it's not like new every day. And now we're back to this new-every-day kind of world. But there's a bigger thing that makes it exciting, and this is a pattern that I've observed every time in my career: something makes it easier for more people to use computers in a more powerful way. The way I talk about it is, in some ways the entire history of computing can be seen as an effort to bring computers closer and closer to the way that humans express themselves. So if you think back to the very beginning, we were actually encoding a program into the very first computers that we built in the forties, ENIAC and so on.
You were literally making circuits to express a computation. It was a one-time computation. Say it's a ballistics computer: you're trying to do a [00:08:00] calculation, and you're setting up a bunch of circuits. And then we come up with the idea of the generalizable computer, but it's still programmed at a very low level. Again, when I came into the industry, the very first manual I ever wrote was an assembly language programming manual. And it was like, move data from this register to that register, perform this arithmetic calculation on it. It was pretty low-level stuff. And then you get to this next wave of higher-level languages, and particularly interpreted languages, which I think brought programming much closer to ordinary people. Back before the microcomputer era, there was literally a machine room, air-conditioned, with a big machine sitting in there. There was a priesthood; it was batch; nobody else really could use it. And suddenly it was, to use Bill Gates's phrase, a computer on every desk and in [00:09:00] every home. And powering that was a set of interfaces. It was partly simpler programming languages like BASIC, but it was also simpler command-line interfaces. In the case of Unix, the shell, the actual command line you used to talk to the computer, was also a programming language, a very simple one. We used to talk about this in the days when Perl was the king of the hill in scripting languages: it was just this progression that you had in Unix, this beautiful progression.
It starts from just asking the computer to do one thing: move this file, rename this file. And then you go, hey, if you just add these little scripting constructs, like a for loop or a while condition, you could actually do that same action again and again, so anybody could go, oh wow, I could actually put these words into sentences, so to speak, and paragraphs. And that's programming. And then of course you had text-processing tools like sed and awk, and then Perl tried to put all that [00:10:00] together, all the power of programming with the shell, plus regular expressions. It ended up effectively building a language that, certainly for me, since I came at this from text processing, was a perfect fit. But it's that ease of use. And then along comes the web. Another 10 years later, I wrote a paper in 1997 called "Hardware, Software, and Infoware," which was about this idea that we had had an industry dominated by hardware, in the case of IBM, then an industry dominated by software, with Microsoft's dominance with Windows. And now the web was bringing something that I first called infoware. That was a term that didn't stick, but the idea was that we were now building interfaces out of human-readable documents. And the inversion, from, say, Microsoft Word, where you had a little bit of human language embedded in menus, to the web, where basically you were calling programs by embedding them into human-readable documents, was this [00:11:00] huge step forward in bringing computers closer to humans. And of course there was this huge explosion, because anybody could now create an interface. It was just a document. And there were some simple things: a link could just point to another document, and that was the original function of links.
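(Editor's note: the progression Tim describes, from a one-off command, to a scripting construct that repeats it, to text-processing pipelines, can be sketched in a few lines of shell. The filenames and data here are illustrative, not from the conversation.)

```shell
# Work in a scratch directory so this sketch is self-contained
cd "$(mktemp -d)"
printf 'hello\n' > draft.txt

# Step 1: ask the computer to do one thing -- rename a file
mv draft.txt chapter1.txt

# Step 2: add a scripting construct and the same action repeats:
# back up every .txt file in the directory
for f in *.txt; do
  cp "$f" "${f%.txt}.bak"
done

# Step 3: text-processing tools compose into pipelines -- awk picks a
# field out of each line, sed edits the stream; these are the pieces
# Perl later folded into a single language with regular expressions
printf 'alice 42\nbob 7\n' | awk '{ print $1 }' | sed 's/^/user: /'
# -> user: alice
#    user: bob
```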
But once Rob McCool invented CGI, the Common Gateway Interface, which let you connect a link to a backend database, we had dynamic websites and all that kind of stuff. Suddenly it was a programmable surface. So now fast forward to AI, and you go, oh my gosh, we've just brought it so much further. We can just talk to an AI and it can figure out how to build it. It can talk down to a program. And in some sense everybody goes, oh, this is this new thing, it's making programming go away. And I go, I don't think it's making programming go away. It's making programming much easier so more people can do it, [00:12:00] just like a compiler or an interpreter made programming a lot easier. Because, hey, you didn't have to write this huge assembler program to tell the computer exactly what to do. You expressed some higher-level wishes in the form of a structured program, and this compiler magically turned it into machine code, right? And I go, why is this different? If it's Claude that's magically turning your English into Python, which is turning your Python into machine code, which in turn is turning into a bunch of electrical signals, come on. It's transformations all the way down. We've just added another layer to the stack. Each time that has happened, more people can access the technology, more people can do cool things. And it's true even in the application space. I think of a young friend of mine who was showing me his film that he created, and he did the film, he did the music, with all these tools that just made it possible for one person to create [00:13:00] this thing that used to require an orchestra and a film crew. And suddenly, what happened? More people are doing it.
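(Editor's note: the CGI mechanism Tim credits to Rob McCool is simple enough to sketch. Under any CGI-capable web server, a script like the one below receives the request's query string in an environment variable and prints a header, a blank line, and a document; a real script of the era would have queried a database where this one just echoes a name. This is a generic illustration, not any particular server's code.)

```shell
#!/bin/sh
# Minimal sketch of the CGI contract: the web server puts the request's
# query string in the QUERY_STRING environment variable, runs this
# script, and relays whatever it prints back to the browser.
name="${QUERY_STRING:-world}"

# A Content-type header, a blank line, then the body: that is the whole protocol
printf 'Content-type: text/html\n\n'
printf '<html><body><h1>Hello, %s!</h1></body></html>\n' "$name"
```

Pointing a link at this script's URL, instead of at a static page, is exactly the step that turned the web from linked documents into a programmable surface.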
And it also didn't mean that we didn't have people making big-budget productions with big film crews, but it meant that you now have millions of people who can make a living entertaining other people with video and music that they create and share on YouTube. I guess I'll just say the future giveth and the future taketh away, but in general, so far in computing, it's giveth more than it's taken away. hugo: Without a doubt, and I couldn't agree more. For those of us working in machine learning and data and natural language processing, having a natural language interface to software is the holy grail in a lot of ways. Mm-hmm. And so it's incredibly exciting in that respect, but also for the possibilities that it opens up. As you've stated explicitly, everyone will have more access to create software, [00:14:00] and what will that bring about? I'm really excited about product managers and UX people, these types of somewhat technical, but maybe not super software-engineering-capable, people really having inroads now. I'm also really interested in what it means for those of us who are super technical already. I mentioned this to you before we started the show, but I've been working with Cursor's agent recently. Based on an LLM course I'm teaching, I've got a whole bunch of content, and I've been building an information retrieval system. It has a bit of RAG, but it's not entirely built around it. And the ability to start with an MVP and then work with the Cursor agent and see what it builds, accept, reject, quote unquote vibe coding in some ways. But it feels like, yeah, surfing in a different way. I actually had feelings I hadn't had before, I think, which is really important to consider. I'm wondering, from your perspective, what type of new things could emerge now? And I know that's a very broad [00:15:00] question, but I know it's something you think about deeply.
tim: First off, I think that in the beginning of every tech revolution, people first start trying to make the old thing. And it takes a while before somebody makes the new thing, and then they go, oh, we get it. So think about, and maybe this is a little bit of a side trip, but in a lot of the narrative, at least in the beginning, it was like, oh my gosh, OpenAI is the next Google. And my response was, no, actually OpenAI looks like, I'm not sure which, but they're either the next AOL or the next Netscape. Hmm. Yeah. And why is that? I have two reasons. One of them is this mistake that I've seen again and again through my career in these big industry transitions. And this is very much the narrative that I developed when I was talking about what I call Web 2.0, [00:16:00] after the dot-com bust: why did some companies survive, and what was it about them? They really figured out what it meant to be internet native, in a way, and the first generation of companies were not internet native. They were knockoffs of what went before. So go back to that AOL versus Netscape comparison. AOL was the big content aggregator of the first age of the internet, and they had this idea that they would be the giant central place where you found content. And the web came along, and it was internet native. It went, why does it all have to be in the same place? Same thing with MP3.com versus Napster some period later. The first generation of people are like, wow, we can put all the songs in one place. And Shawn Fanning, who's a 19-year-old kid at the time, comes along and says, why do they have to be in the same place? We can just find them wherever they are. So we got these internet-native, decentralized [00:17:00] systems.
So the whole idea of big centralized content repositories, in the way the first generation thought about it, was just wrong. They came back, but they came back in a different way, and I'll come back to that. Meanwhile, along comes Netscape, and they're the software play. And they go, okay, we own the commercial web browser and we're gonna own the commercial web server, and once we own both ends of this, we're gonna be able to make the internet operating system and we'll replace Windows. And guess what? They didn't win on that, because Microsoft came in to compete with them, and Apache was the open source alternative on the web server side. So they didn't get that. But it was also just the wrong metaphor, because the software layer was not actually the control point anymore. In the age of the internet, of interoperable software programs, with open source in particular, it wasn't really possible to get control at that pure [00:18:00] software layer, because we had open standards. So basically Netscape made the mistake. And going back even to the previous revolution, the PC revolution, you can look and see how IBM controlled the industry by controlling the hardware architecture. They had the dominant hardware; software was an afterthought. And so all the initial companies in the PC era were competing to be the dominant hardware player, and Microsoft emerges as the winner because they played the new game. So I guess the point is, in each case somebody comes along and plays the new game. So here are these people: the first generation of internet companies fail.
And then what emerges is, you know, companies like Google and Amazon that are internet native, and they realize that, oh, it's actually about network effects, about big data, about algorithmic management of massive amounts of information, and solving what Herbert Simon called the attention problem: an abundance of information creates this scarcity [00:19:00] of attention. He literally said in 1970, in the future we will need... hugo: It's absolutely wild how Herb Simon said that. tim: Yeah, we will need machines to help us allocate our attention effectively. And so every company in that generation was an attention allocation company in one way or another: Amazon for product search, Google for search in general, Facebook and others for social interactions. And along comes BERT and the transformer paper, and I go, oh my God, Google just repeated the mistake IBM made when they published the specs for the PC, because they thought it was not gonna change the game. Google effectively open-sourced their secret sauce. And what that means is that this idea that you get to be the big central player that owns everything is not going to be the end game. And of course we've seen that now with DeepSeek. So I think we're coming [00:20:00] back around, in some ways. The next iteration of what's happening is really how do we discover a decentralized AI future, as opposed to the wet dreams of VCs, which is that we're gonna have a centralized AI future in which my company is the winner. And that kind of relates to my feelings about what's wrong with Silicon Valley right now, which is they're trying to buy the future rather than compete their way to the future. hugo: Yeah. And that's a direction I'd like to go in.
I'm glad you mentioned Amazon as well, 'cause I do think Amazon provides a really interesting case study, in that Jeff Bezos et al. realized there was something about brick-and-mortar stores which they could translate, but they also realized that there were certain things about the online environment where your mental models could change. And correct me if I'm wrong, but I think in your book, WTF?: What's the Future and Why It's Up to Us, which I'll link to in the show notes, you actually give an example of how people thought, [00:21:00] when building e-commerce flows, that you'd need to put stuff in a cart and then take it to a checkout, because that was our mental model of how shops work. tim: That's right. hugo: Of course, Bezos had the brilliant recognition that you can just have a click-and-buy button, and you just need to change a few lines of code. I think he may have even patented it. tim: He did. Yeah, he did. hugo: Right. But that example of recognizing what works in the new domain that didn't in the previous one, those are the types of things we're talking about, right? tim: That's right. And the reason why you want a decentralized economy that has more people playing is that it gives you the opportunity to invent the future. Because, again, Bill Joy, who was the, you know, chief technology officer and co-founder of Sun Microsystems back in the day, said the one thing that you really have to remember is, no matter how good your company is, all the smart people don't work for you. Yeah. And you know, Google did a pretty good job of getting all the smart [00:22:00] people, but not forever. And OpenAI attracted a lot of the smartest people in AI, but they couldn't keep them.
So my general theory of tech progress is that innovation happens most in periods of decentralization, and that decentralization leads to this experimentation in which somebody figures out crucial pieces of the future, and they accumulate market power, and that market power leads to a new cycle of monopoly. It's what happened with the rise of Microsoft. It happened with the rise of Google and Amazon, and I think it's gonna happen again in AI, but it's a process that I don't think you can short-circuit. And this goes back to my arguments with Reid Hoffman about his idea of blitzscaling, which is: you buy the market share, you buy the [00:23:00] attention, you buy the whatever, and you become dominant really fast, and then you can hold onto it. And I go, that might work sometimes, but I think in the end it's not a winning strategy. hugo: Winning for the business owners or entrepreneurs, that is, but it isn't winning necessarily for all of us collectively. tim: That's right. And so my classic example of that in the past was with Uber and Lyft. So basically the VCs come in. First of all, it always struck me as this interesting thing that Google raised about $35 million. That's all it took. And Sunil Paul, who really invented the GPS-enabled on-demand ride-hailing model with Sidecar, raised about $35 million. hugo: And just for context, is Sidecar the one that was before Uber, but it was probably too close to when the iPhone came out, so people weren't comfortable using commerce? tim: No, not only that. Sunil actually [00:24:00] patented all these ideas years before Uber and all that. And then he finally went, oh, I'm gonna do this. But both Uber and Lyft originally had different models. Like, Uber was originally, we're gonna use SMS to call black cars for rich people who wanna get picked up somewhere.
Lyft was originally a ride-sharing app for going intercity. And along come the VCs, and they double down on these companies. They adopt all this technology and they just pile on, and they basically drive everybody else out of the market. So we don't get the period of experimentation. They buy share; the prices are super low, unsustainably low. And so taxi companies don't adopt the technology; they're just driven out of business. There's no local competition. And so finally they go public, the VCs get their money out, and at some point Uber and Lyft have to raise their prices, and then we find out what the market level really [00:25:00] is, and then the competition starts to come in. I just saw recently that Bolt is coming in from Europe. There's starting to be much more in the way of local innovation, with taxi cab companies adopting on-demand technology. And all of that would've happened, in an ideal world, without the blitzscaling model. Anyway, so fast forward to AI. I do think that there's certainly been a need for massive amounts of capital in one sense, but it's pretty clear that you can't buy your way to success. And we don't really know how it's gonna shake out, who's gonna be the winners. But I do think that the architecture of, if you're a gamer, you'll recognize "all your base are belong to us," and if you're into Lord of the Rings, you'll recognize the one ring to rule them all. That has become the Silicon Valley religion, and it's just wrong. It's way better and way more interesting when there's a lot of competition. [00:26:00] And yes, eventually people win, but they don't get to buy their way to the top in the beginning, because that just suppresses the innovation that we need to figure out the new thing. hugo: Yeah. And you're not responding necessarily to a broader market dynamic. So there's a lot of... tim: Mm-hmm. hugo: a deep lack of information symmetry.
And on top of that, I think part of my beef with it is, I've got issues with different aspects of capitalism, and this seems like bad capitalism: it doesn't help us have a thriving middle class. On top of that, the proponents of these ways of working talk about it in terms of it being the free market, and it's anything but. At least be honest about what you're doing. It's anything but a free market when... tim: When people set out with the first goal being, I gotta achieve a monopoly. hugo: Yeah. A race to the bottom. Exactly. tim: Yeah. But let's go back to the earlier discussion of programming, 'cause this is triggered by this piece I wrote, "The End of Programming as We Know It," and this idea that I'm putting together an online conference to get [00:27:00] people starting to share their stories about how programming is changing. And I wanted to just highlight the notion, first of all, that I don't know the answers. I don't think anybody knows the answers, which is why I am trying to bring together a bunch of people to tell their stories. We wanna know: how are you using this technology? What have you been able to figure out? It's very fast moving. We're trying to build a community. It's a parade, for me, of all the people who are inventing the future, and let's tell their story, much as back when I told the story of open source. It was like, oh wow, here's a set of people that the regular world doesn't necessarily know about who are actually on the front lines. They're not necessarily VC-backed; they're just cool people who are doing cool things with this stuff and actually really figuring out what's going on. And so I'm trying to figure out how to collect that group and bring them to the world. But I'm also trying to counter the narrative. And very deeply, my real trigger for this [00:28:00] is very similar to my trigger for my activism around Web 2.0, and actually around open source.
Let's go back to the first one. What was my activism about open source? It was like, dudes, you bought this idea from Richard Stallman that free software is somehow hostile to business, that it's a revolutionary movement that's anti-capitalist, and so you're against it. And I go, I've just been around Berkeley Unix, I've been around the X Window System, and they were like, hey, we're just sharing this stuff and we want everybody to build on it. And I go, that's a different vision. And Tim Berners-Lee put the web into the public domain. Oh my God, this is a different story here. You gotta get the story right. And of course, when everybody went, oh, it's not just about Linux, it's about the internet, suddenly there was a news story. Then Web 2.0: it was like, oh my god, the dot-com bust, it's all over, end of the internet boom. And I'm like, wait, look around. There are these companies that really thrived. What's different about them? [00:29:00] So now I'm going, I'm sick and tired of hearing this is the end of software development and all programmers are gonna be out of work and there'll be no more junior devs. I go, no, no. The true narrative is, yeah, all those jobs are changing and some jobs are gonna go away. Actually, I just read something from Harvard where their analysis was that 12% of tech jobs are threatened by AI, and 19% of jobs that formerly didn't use tech will now need to use tech. So you go, that looks like a growing market to me, not a shrinking one. It's just a reallocation. And so I want to tell that story. There's this fabulous story I heard from one person. I won't give his name, but he's a prominent tech person, and he told me the story of his high-school-age daughter who got an internship. She's a bio nerd. She got an internship at Stanford as a sophomore in high school.
And this professor says, I gotta give her a [00:30:00] project. He says, I think the pulse oximeter is a pretty crappy technology. I bet we could do better if we looked at the capillaries in the retina. Can you look into that? It's a throwaway idea for him. He doesn't have time. He didn't file a grant. He gave it to a high school intern, and she's never programmed before, but she works with ChatGPT. She figures out how to get it to write a program to isolate the capillaries. She gathers a bunch of images of retinas, looks at the capillaries, and figures out how to get it to do the image processing to estimate the oxygen saturation. And it works. And there are two lessons you can take from that. One is, oh my God, she didn't need a programmer; it's the end of programming. And I go the other way. I'm going, oh my God, this is the beginning of programming. Suddenly the surface area of all the questions that we could just ask [00:31:00] is so much bigger. Before, the cost to look into that would've been: write a grant, hire a programmer, right? And now it's: give it to a kid. And I go, what does that tell you? The surface area of exploration is so much bigger as you make these tools more available to more people. And then suppose you take that and you go, okay, now I wanna turn that into a product. Suddenly you need real software engineers again, because she doesn't know how to productize it, how to figure out all the data storage. But she was able to prove the idea. And so I guess I just feel like stories like that bring home the opportunity in the democratization of programming that comes with AI. And so our job as an industry is to figure out, okay, it opens up these doors, [00:32:00] it creates new possibilities; now how do we actually evaluate whether that's, you know, good work?
In other words, did this sort of AI experiment by somebody who's unexposed to programming actually do the job? How do we evaluate that program? How do we figure out, if we deploy it, whether it will scale? How do we figure out how to deliver it? What's the right way to turn this into a product? All of those activities haven't gone away, and so I think it changes the way we have to think about the jobs. But guess what, that happened with the web too. You had user interface designers in the world of making GUI software, and I would guarantee you that there are a lot more user interface designers in the world of the web than there ever were in that world, because a lot more web applications were made. hugo: Exactly. And this is, as you point out in your essay "The End of Programming as We Know It," which I'll link to once again in the show notes, an example of many things, [00:33:00] including the elasticity of demand. But you give several historical examples. I think one is the arrival of WordPress and Squarespace and all of these things, and that heralding the end of front-end web development. And what do we see? Of course, we don't see the end of front-end web development. We see more specialization, and more people wanting these services provided, and more people creating them, just at very different levels, and a democratization of the process. tim: And just an exploration of the possibility space. Again, we could use so many historical analogies. It used to be that writing was reserved for a priesthood; there was a set of scribes, and most people were illiterate. And what happened with the printing press? More people learned to write, more people learned to read. The market expanded. And fast forward: I used the video example before. You democratize the tools of making video.
More people are creating, more people are consuming. And [00:34:00] yes, we democratized the tools of music making. You used to have a symphony orchestra, and it was reserved for kings. Then comes the rock era, and a couple of men and women can get together with guitars and drums and whatever and make a band, and what happens? More music. And so I just feel like people are coming to me in a panic when they don't need to be. They need to be out there having fun, hanging out with the people who are inventing the future and who are excited about it. It'll cure you pretty quickly of your pessimism about the future of software development if you get out there and hang out with people who are having fun. hugo: And hang out with an LLM. Literally, if you haven't used Windsurf or Cursor agent mode, or Continue, which is an open source version of these types of technologies, jump in and play around and see what the possibilities are. It is mind-blowing. I love that you mentioned writing as well, 'cause I [00:35:00] recall in your essay you mentioned an email correspondence with Chip Huyen, who wrote AI Engineering, which was recently published by O'Reilly. I've been emailing with her recently; she's gonna come on the podcast at some point. tim: Oh, excellent. That'll be super fun. She's great, isn't she? hugo: And in pure Chip, very deep, thoughtful thinker style, you mentioned that she said, and I paraphrase: current developments in AI don't really take away the important parts of software engineering and system building. They actually reveal the important parts, such as system design and these types of things. She compares it to writing: originally people thought writing was just writing something down.
With the advent of all the computers we have and word processing, what it reveals is that writing is about the logical, or otherwise, ordering of words to create semantic meaning that is communicable to other humans. And that's a high-level task. I suppose my question for you is, and I know we [00:36:00] don't know yet, but how can we think about what software engineering, or building software systems, evolves into? tim: Here's the thing. I actually got a wonderful image from Sam Schacht, who used the image of metallurgy in the Industrial Revolution. You have this steam revolution, and they're trying to figure out how to build railroads, and they need rails, they need boilers that don't explode, and they have to get better at making steel, and it's really the beginning of the real science of metallurgy. And his analogy is that we're at the stage of cognitive metallurgy: we're trying to figure out cognition. I think that was an interesting observation, and I don't know whether it's just cognition, but there are going to be whole new disciplines that emerge because of this new paradigm. And again, if you just roll [00:37:00] back slightly, I think that if you look at the innovations of the web, there was a much bigger surface area for innovation than there was in the design of PC applications. You had a limited palette, and suddenly you have an infinite palette, and so new things got invented. Somewhere along the line, and I've never quite figured out whether the prioritized newsfeed came first at Twitter or at Facebook, that became a thing. It was a new user interface paradigm. That was pretty amazing.
If you even think about it, there had been search engines before Google, but Google figured some things out. Pay-per-click advertising, even though it started with GoTo and Overture, Google really perfected it with its auction technology and figured out how to make ads that actually worked, although they later seem to have forgotten that. So there were all these [00:38:00] things that had to be invented because of this bigger surface area. And I think in a similar way, as we start to apply AI plus programming to new kinds of problems, we're going to figure out that we need new kinds of supporting infrastructure. We're gonna need new kinds of monitoring. We're seeing this right now with AI agents. How do they all talk to each other? What are the rules? How do you know that the agent really represents the person it says it represents? All this stuff is work to be done. Who's gonna do that? It's creation that has to happen. I actually just wrote a short piece that I think is gonna come out in the next couple of days. I go, you know, when we know somebody who can basically remember anything they've ever read, or can calculate large numbers in their head, but can't really invent anything new, we don't call them geniuses. We call them idiot savants. [00:39:00] And in some sense, everybody's saying our AIs are intelligent. No, they're idiot savants. They can do these magical things when we ask them to. And even if you call it artificial intelligence, rather than artificial expertise or artificial whatever, there's no artificial volition there. They don't get up in the morning and say, I think I'm gonna try to figure out a new way to detect blood oxygen saturation by looking at retinas. Somebody has to tell them that. And so there's that creative spark that says, let's do this thing rather than that thing.
The choices that we make are so critical. And when something becomes cheap and accessible, we use more of it. And so we have this new power, which has been effectively commoditized, and it just drives value to this new thing, which we are in the process of inventing. hugo: Yeah, absolutely. And I do [00:40:00] love David Donoho's term "recycled intelligence" that he uses to describe machine intelligence. tim: Oh, I like that. Recycled intelligence. That's good. hugo: Yeah. I am interested, particularly since you and I are both very interested in education, and a lot of the work you've done over the past 40 to 50 years has been bringing cutting-edge practices to the masses. And actually on that note, wait, is O'Reilly nearly 45 years old? tim: It depends how you count. The predecessor company was started in 1978. It was a tech writing consulting firm. I had a partner, and we broke up in '83, and that's when O'Reilly was formally founded. So depending whether you count from 1978 or 1983, we're either 42 or 47 years in the business. hugo: Amazing. I hope you'll have a [00:41:00] 50th birthday party at some point. But I am interested, as a teacher and an educator as well: it's not obvious to me at the moment what to teach people. Let me correct myself slightly. I think teaching principles of generative AI and software development, these types of things, is useful. But in terms of tools, frameworks, larger processes, methodologies, I'm wondering what your advice would be for people who want to learn at the moment, because we're in such a chaotic space that it's not clear. tim: Yes, it's gonna be a moving target, but I guess I would say, as the base layer: what is it that somebody needs to know to get over the hump of asking Claude, or the LLM of your choice, to write a simple program for you?
Because I do think that, in some sense, there's [00:42:00] an activation energy, like in a chemical reaction. There's a resistance to getting over the hump of that first reaction, and then it can become self-sustaining. So what's the activation energy required? How do you get somebody over that hump? I would roll back to my first experience with Unix, which was transformative for me. Normally, you're using a computer and it lets you type, but it doesn't let you program unless you go off into this separate space called programming. But Unix did, and that's why it was magical for me. It was this idea that, oh wait, the same commands that you give one at a time on the command line can also be assembled into a script. And oh, the same commands that you use in your editor [00:43:00] to search for something or to make a global change can be turned into a script. And those two things can be put together into a complex program, and suddenly I'm there writing, effectively, a program that reads a thousand-page document, looks for certain things, and generates an index. It's the same way that when you're a kid, you learn to speak in simple sentences, then you get to speak in more complex sentences, and eventually you're able to write and speak more formally. I think the question is: how do you get somebody to the point where they get pulled into their problem? And that could happen in a bunch of different areas. Like lately, I've been using Claude as my legal assistant. I have a family property owned with my family that's always been just a handshake deal, and the accountants are saying, you really need to have some kind of operating agreement. I go, okay, Claude, let me explain the situation to you, and bang, [00:44:00] we have a.
You know, a legal operating agreement, fleshed out in legal language. And I go, great. So that was an example in a field that I'm not comfortable in, but suddenly going, oh, I could actually use Claude as a paralegal, opens up a possibility space for me. And in the same way, what would be the things you would do to show somebody who's not a programmer that there's something they want to do that will be enhanced if they just learn a little bit of programming? And what I mean by a little bit of programming: you're still at a stage where you go, okay, I wanna do this thing, and it writes a Python program, and if you don't even know what the heck that is, that's the end of things. I think we'll get to a point where people will simply be able to do the vibe coding thing and get all the way there without even being able to look under the hood, but I suspect [00:45:00] it'll be a while before people won't get frustrated and go, it didn't really work for me. It's a little bit like when automobiles first came out: you had to be a mechanic. You read the early accounts of people who had those first Fords or whatever; they had to know a lot of stuff, 'cause the cars were breaking all the time. And we're writing code, even experienced software engineers, we're writing code that breaks all the time. I still remember back in the days of Usenet, when people had quotes in their signatures on their posts, one that stuck in my head: if carpenters built houses the way programmers write programs, a single woodpecker could come along and destroy all of civilization. And there is this level at which we really do, in general, have a pretty fragile set of [00:46:00] software artifacts, even at the biggest and best companies.
And they're only kept going because there are people to maintain them. There's the whole issue of technical debt. And this was why I was really interested to see this post on X by Steve Yegge, where he was talking about using Claude to just crank through issues and technical debt, and he was very impressed. You should look for that post, because I do think there are some areas where AI can really help at that level. But from the point of view of your question, it's the same question you would have for teaching anybody anything: find something that you love, where it helps you to learn this skill, and the love will teach you what you need to know to go further. I think about my daughter, who's now a composer. The first time, [00:47:00] she basically taught herself to play the piano. We went to see the movie The Piano, she loved the music, so we bought her the CD, and we had a piano at home, and she basically started figuring out how to play it by ear. And I looked at her and I go, I could no more do that than I could walk on water. Eventually, I guess she took some music theory and stuff, but she basically had enough talent that she got drawn into making music by listening to music and figuring out how to play it. And I think in a similar way, there are a lot of people who are gonna become self-taught vibe coders. They'll start with some passion project, and then they'll go, well, I need to learn something in order to take it further. And guess what, they can talk to an LLM tutor that will help bring them further along. And if they build something really great that has to be engineered, as opposed to built in such a way that any woodpecker could knock it down, then they'll have to get some real software engineering. hugo: Right. Yeah, [00:48:00] exactly.
And of course the stacks we have currently, you're right, they're incredibly fragile. They're also incredibly bloated in a lot of ways. I'm freelance at the moment, consulting and a variety of other things, but there's so much happening right now that I actively set aside at least one day a week, ideally two, though it usually gravitates to one, to research and learn and experiment. And I think when we end up in these types of complex and chaotic environments, that's actually very important; it's not business as usual, I don't think. And I'm wondering: we have a lot of listeners who are data scientists, machine learning engineers, AI engineers, from ICs up to executives. How do you think they should start to think about organizing the way they use their time and the people they work with? What should we be telling our teams about how much time they should dedicate to actually [00:49:00] learning, as opposed to creating the robust stuff they would in a different type of time? tim: I guess for me, I have a sort of different approach to learning, which is: learn by doing. When I think about my customers in the current world of O'Reilly, in its incarnation as a learning platform, we do have a lot of traditional learning, but the core learner who drives our business is a self-motivated learner who's trying to get something done. And yeah, maybe they need to be certified in something for their job, but I really believe that you learn best when you're trying to do something, and the skills follow the project. You don't go out and learn skills in the abstract; you learn skills in the concrete. [00:50:00] So I guess my advice is: don't treat learning as this separate thing that you do.
I mean, in one sense, yes, you should always be learning, because you have to even know what the possibilities are. If you've never heard of Cursor, then you're not gonna try it for your project, right? If all you've heard about is ChatGPT, and so that's the only thing you use, then maybe you won't get as far. So there's that kind of learning, but that's ambient learning. You have to be reading, you have to be talking to people, you have to be hanging out with people. But the thing that should drive you is some passionate project that you want to accomplish. hugo: That makes sense. And I think part of my question was that ambient learning is best if it occurs all the time, but in a period like this, there's an argument that ambient learning needs to occur even more. So how should we think about incentivizing this in organizations, and making sure that, [00:51:00] while of course working on particular projects, we all essentially become more like R&D labs and experimentalists than the previous type of builders? tim: Right, I think that's right. Ethan Mollick makes this point: why do most individuals say AI makes them more productive, but you don't see the organizations becoming more productive? One of the reasons is that people aren't encouraged to experiment and share in their jobs. Again, there's all this fear: maybe I'll get laid off if they discover that AI can do part of my job. Instead, we really need to all be experimenting, sharing, building, figuring out new things. And I do think the culture of learning should probably be a culture of sharing, in a certain way. And that's something that we haven't quite figured out.
You know, obviously there are a lot of ways you do that online, and [00:52:00] we need to be doing a lot more of it. But I still feel like hanging out with people who are interested in the kinds of things you're interested in, the "oh, how did you do that?" conversations, really matters. Just to give you an example: I have a particular idea, which I don't want to lay out here, but I heard that Lew Cirne, the founder of New Relic, had done something similar to my idea, just as a fun project. So I had a call with him today, and we just had the best time, and I learned so much. How did you approach this problem? Oh, here's how I'm approaching it. Back and forth. It's super high-bandwidth learning, where I go, oh, I never even thought about that approach. And if you can find a community of people who are interested in the same problems that you're interested in, and share what you learn by trying to solve them together, that's a really fantastic way to learn. Which is why I'm trying to recreate, with this [00:53:00] upcoming programming event, a little bit of that feeling: we're trying to build a community of people where you go, oh yeah, I know the people who are interested in the things I'm interested in. hugo: That's something I really loved about your book that we discussed earlier, WTF?. You've written and talked about this before: this concept of thinking in vectors. And I'm wondering if you could first explain for our listeners what that is, and then how it can help us think about, or hypothesize about, the possible futures with respect to AI. tim: Yeah. All right, sure.
So this thinking-in-vectors idea actually comes from scenario planning, which is a discipline that was really invented by Peter Schwartz and Lawrence Wilkinson and a few other people. It's a pretty cool methodology where, rather than trying to imagine the future, you actually imagine a variety of very [00:54:00] divergent futures. And the way they do it is to identify a couple of crossing vectors. We did a scenario planning training at O'Reilly back 25 years ago with Lawrence, looking at the future of learning, at online learning, and one vector was: the internet is making it possible for learning to be decentralized in new ways. I forget what the other vector was. But first of all, the idea of a vector is that it's not a scalar: it has a direction as well as a magnitude. In WTF? I used climate change as my example. But regardless, what they teach you is that once you've identified these vectors, you look for news from the future that tells you you're on the right track along one of them. When I think about [00:55:00] the present moment, there are things that I believe, things that I want to base my future strategy on, but I need to know if they're coming true. Like, I have a profound belief in the power of decentralization to encourage innovation. I also have a belief in the role of decentralization in creating more opportunity for more people. So I go, okay, let's say I want to understand the decentralization vector around AI. That means: is it happening? How fast is it happening? That's the magnitude component. And then there's the centralization vector.
How fast is that happening? And if you look at that, then along comes DeepSeek, and you go, wow, that's an incredible piece of news from the future along the decentralization vector. It says decentralization is what is happening, and we can [00:56:00] count on it actually being a disruptive force in the industry, just like it was back with the PC or the web. So I'm increasingly confident that we're gonna have a decentralized AI future. And then I go, what does that require? For me, it requires what I call an architecture of participation. And that idea was shaped very much by a lot of what I observed in the early days of open source, because while everybody was focused on the idea that open source and free software were about licenses, I said, no, they're actually about the architecture of the system that you're building. And that was shaped very much by my early experience in the Unix community. When I first came to Unix, it was BSD 4.1 and Unix System III, and then System V from AT&T. [00:57:00] And I could see this collaborative community that was building this thing, and it was building it because of the architecture of Unix, which was small pieces designed to work well together. Everybody knew that you wrote a program, it read standard in, it wrote standard out, and the format of what was in both of those was ASCII. And then there were a bunch of simple parameters for manipulating that. So that was an architecture of participation. And I remember Linus Torvalds saying to me, I couldn't have built a new kernel for Windows even if I'd had all the source code, because the architecture didn't support it. That's what crystallized the idea for me.
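A minimal sketch of the composability Tim is describing: each tool reads standard in and writes plain ASCII to standard out, so commands you type one at a time compose into pipelines, and pipelines become scripts (the file names and the toy "index" here are illustrative, not the program Tim actually wrote).

```shell
# Each stage reads stdin and writes plain ASCII to stdout, so one-off
# commands compose into pipelines, and pipelines become scripts.
printf 'small pieces loosely joined\nsmall tools compose\n' > doc.txt

# Split into one word per line, sort to group duplicates, count them,
# rank by count: a tiny "index" of the document's vocabulary.
tr ' ' '\n' < doc.txt | sort | uniq -c | sort -rn > index.txt
cat index.txt
```

The point is that none of the four tools knows about the others; the shared convention (text streams on stdin/stdout) is what lets anyone add a new stage.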
So we ended up with an architecture of participation in open source, [00:58:00] and big monolithic open-source-by-license programs like GIMP or OpenOffice, which didn't have that architecture of participation, struggled. Meanwhile the programming languages flourished on exactly that: I remember CPAN, the Comprehensive Perl Archive Network, was part of how Perl flourished, and Python, JavaScript, all of these had this vast ability to support extensions and libraries and frameworks on top. Apache did the same thing; it had an architecture of participation. And I'm really looking for that in AI. There's one low-level piece of this, which is brought to us by open source models, or semi-open-source ones: Llama, DeepSeek, whatever. But there's a higher level that is also required, and that is: how do these AIs communicate and cooperate? And right now it's a shit show. [00:59:00] We don't yet have, for AI, even the concepts that would lead to such a world. And this relates a lot to my thinking about AI and copyright. Right now, everybody's trying to govern AI so it doesn't violate copyright. So you say to it, write me a Paul McCartney song, and it'll say, I can't do that, I can do something similar. Or write me a Stephen King novel, and it goes, I can't do that for you, Dave, but I can do something similar. What it should be saying is: I can't do that for you, but check with Stephen King; his AI will do it for a fee. Because guess what, you might want the Stephen King novel that has all your friends in it, and it'd be pretty cool if Stephen King had trained an AI on his particular style to let you do that. That's a crazy kind of opportunity that should be enabled by the system: I don't know this thing, but somebody [01:00:00] else does.
And I think the idea that "all your base are belong to us" somehow became the religion of people building large language models is a fatal flaw, because all it means is that, now that they've consumed all the low-hanging fruit, anybody with really valuable additional information says, dude, you can't have it. Whereas if they were thinking about the question of how we're going to build something that looks like a network of cooperating AIs, that becomes much more interesting. And I'm not quite sure at what level that cooperation needs to happen, because I think there are gonna be many layers to it. Think about it in the sense that when we talk about AI agents, that term really means two separate things. When Marc Benioff or Bret Taylor think about an AI agent, they're like, okay, here is an AI that has learned all your internal [01:01:00] business data and your internal business processes, and it can represent your business as a front end to users. And then there's this other version, which kind of comes from the copilots and computer use, where the agent will come and talk to your APIs. And does that look at all familiar to you? Here's an AI that's just gonna call old-school APIs. That's a little bit like the old days when we did screen scraping. Because really, what ought to be happening is your AI talks to my AI, and they figure out how to negotiate what they're gonna exchange. And right now, that computer-use, tool-use, API level is the equivalent of screen scraping in the early days of the web, before we actually thought to have web services APIs. What would it look like for [01:02:00] AIs to have conversations about things like: what do you know that I don't know? What do you know that I have to pay you for?
How are we going to work together on this complex project that draws on both of our strengths? And that's back to what Sam Schacht was talking about: that's the sort of cognitive infrastructure that we have not built, but that we have as humans, where we negotiate and figure things out. And that emphasis really goes to this other idea I'm playing around with. I have this AI governance project at the Social Science Research Council, and I've been focused on what we learn from disclosures. It started with the notion that financial disclosures, like Generally Accepted Accounting Principles, enable a "regulatory market," as Jack Clark and Gillian [01:03:00] Hadfield call it, and mean that every accountant knows that if you use GAAP, they know how to read your financials. They were originally mandated in the thirties, only for public companies, by the SEC. The SEC said to the accountants out there in the world: come up with a standard format, because there's a wild west out there, people are making all kinds of false claims, and nobody knows what anything means. So we got a standard language for accounting. It was a set of disclosures, but if you think about it, that's actually a networking protocol. So I've been thinking about that, and then I started thinking about auto safety. There's a bunch of signals: okay, a double line in the middle of the road means don't cross it to pass. Why? Because there's a hill and you don't have visibility, and if somebody's coming down the other way, you're dead, or you're gonna be in a car accident. Or: don't go fast in the school zone. There's a set of disclosures [01:04:00] which are a networking protocol between a community and the people using the roads.
And then you start thinking: okay, so now we have something like the robots exclusion protocol, which is trying to say to an AI, don't come take my stuff, and everybody's routing around it. That's crossing the double yellow line heading toward the crest of the hill: we're gonna have a lawsuit, but at least you're not dead yet. And we're figuring out, bit by bit, what that language of disclosures needs to be. It evolves; if you look at the evolution of markets, you end up effectively figuring out the protocols. And I'm just really fascinated with this notion that figuring out the communication protocols between AIs is one of the big challenges of the future. And this even goes to: what's the monitoring [01:05:00] infrastructure? One of the things that's fascinating to me in the DeepSeek story is that it was not OpenAI that noticed that all their data was being exfiltrated by this Chinese company; it was Microsoft that noticed it. And my colleague Ilan Strauss noticed: oh, actually the cloud providers are perhaps the regulatory layer, the equivalent of the roads. All the AI safety stuff that we've been talking about is a little bit like crash-testing cars, while looking at what happens on the cloud infrastructure is auto safety as applied to watching whether people are speeding or driving dangerously on the roads. So the whole focus of AI safety probably ought to shift, a lot, from model safety to the cloud infrastructure layer, because that's where you'll see bad behavior. So it's things like that.
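The robots exclusion protocol Tim mentions is itself a tiny disclosure language: a plain-text file of crawler directives served at a site's root. A minimal sketch (the crawler name here is illustrative, not a real bot):

```
# robots.txt, served at the site root
# Ask one named AI crawler to stay out entirely...
User-agent: ExampleAIBot
Disallow: /

# ...while leaving the site open to everyone else
# (an empty Disallow means nothing is disallowed).
User-agent: *
Disallow:
```

As Tim notes, nothing enforces this; it is purely a disclosure, and a crawler that ignores it is the one crossing the double yellow line.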
I think there's just so much here. This is getting away from the notion of programming, but there are so many interesting things to be invented for this [01:06:00] world that we're excited about, to actually turn it into a workable system. Think about all the energy use, everything. So much to be invented, so much work to be done. How could anybody possibly think that programmers are gonna be out of work as a result? hugo: Without a doubt. There are so many exciting things that we haven't even been able to comprehend yet, so I appreciate you giving some first steps of clarity on the things we can at least comprehend. I also love the idea of a Stephen King LLM, fine-tuning an LLM to be able to provide Stephen King things, or having some sort of information retrieval system. tim: What I like is that Stephen King could do that. Actually, in one sense, if Stephen King wants to license it to some model provider, more power to him. But we've already decided that it's a bridge too far for ChatGPT to say, oh yeah, we have Stephen King mode. Yeah, pay him. That's [01:07:00] a step in the right direction, 'cause they were pushing that, 'cause implicitly you can do that now, and you shouldn't really be able to. hugo: Absolutely. People who've listened to this podcast before have probably heard me say this, but I definitely would've preferred a present, if not a future, where the New York Times, maybe they use the same base model as a lot of people, but then they use some information retrieval system with all of their stuff, so that they're not having to fight these extremely bizarre lawsuits with OpenAI, who clearly have gone and taken copyrighted material from behind a paywall. Yeah. Or like, whatever, right.
Similarly, the Stack Overflow story is, I think, the canary in the coal mine with respect to a lot of things. I would love Stack Overflow to have had their own LLM, which we could all interact with, and we've seen the absolute decrease in traffic to the platform in the time since ChatGPT came out. I suppose I'm wondering: is there an argument, [01:08:00] to achieve more decentralized generation, development, and use of LLMs, that we could have foundation or base models which are like public utilities of sorts, and then we can all build our own things on top of that? Is that a possible future? tim: You know, it is, and maybe it's still a possible future. I was at a number of workshops about what the public sector should be doing about AI, and that was certainly one of my suggestions. I think it's probably somewhat less necessary because of the way the industry is shaking out. There was this sort of notion that some company was gonna get to AGI, and whoever got there first was going to have enormous power. That was the big narrative by which companies raised lots and lots of money. And then we've seen the scaling laws start to slow down. We've seen the emergence of much cheaper, [01:09:00] more power-efficient, decentralized models. And you go, oh, it was pretty clear from the beginning that was the wrong model, and there is no moat. Somebody at Google wrote the "there is no moat" memo several years ago. Right? And I think the industry is waking up to that fact. And again, it goes back to that piece I wrote about how AI has an Uber problem. I think as we get past that Uber problem, we are going to start inventing new business models, new kinds of services, and new ways to monetize.
Because the business opportunities always come from solving the problems that people have, not from just trying to capture all the value. One of my company mottos at O'Reilly is: create more value than you capture. And [01:10:00] I think if you look at it, at least at a financial level, the model of the OpenAIs of the world is: we're gonna make all the money, and we'll leave you a little bit, we'll give you some value, but you're gonna have to figure out how you live in the real world. There's something Jeff Bezos said that I've been thinking about. He said too many people, when they think about the future, ask what will change. It's really important to ask what will not change. And I think one of the things that will not change is that there are billions of people on this planet, and they need to have a way to make a living. If you don't find a way to give it to them, there will be a revolution eventually. So anybody who's not thinking about creating value as well as capturing it, and I don't mean just "hey, I gave you a bunch of stuff for free, [01:11:00] that's all the value you get." Saying "I took away your livelihood, and in exchange I let you ask questions of an AI" is not a fair trade, because people actually have to be able to put food on the table. These kinds of naive and careless notions that somehow the market will sort it out just don't really hold up very well. I do think the market will sort it out, but hugo: there's just a lot of damage in the meantime. And to your point, and a point you've made time and time again, all we need to do is look at history, and not be history-blind. So we are gonna have to wrap up in a minute. I love the way that you've framed where we are now, particularly relative to where we've been, and the possible futures. I also love how you've given practical advice in terms of learning by doing, exploring, and experimenting.
I am interested, as a final takeaway for listeners who are technical: is there anything you'd encourage them to do more of, or anything you'd like to see them do? tim: Something that [01:12:00] Lisa Rahel said in the early days of blogging: blogging is narrating your work in public. And I think that idea, in a period of experimentation like this, of being extremely public about what we're doing and thinking, so that we have a community of minds that is exploring this. And again, this is why I think the race for monopoly is such a bad idea at the start of a new technology revolution. I look back and I go: Tim Berners-Lee put the web into the public domain. When Rob McCool invented CGI, everybody was able to do it right away. Literally anybody could copy anything; there was a view source on every webpage, and everybody learned HTML. When Brian Pinkerton did the first web crawler, everybody went, oh wow, that's an idea, everybody can do that. And there is, I think, a lot of really great information [01:13:00] sharing in the AI world. So I don't think it's terrible, but I think there could be much more of it at the level of businesses sharing what's working and not working for them as they try to apply AI in their business. We try to encourage that in our online events and in our live trainings. How do we get people to say: here's what we tried to do, here's what worked, here's what didn't work? It shouldn't just be that we expect that from the cutting-edge model developers; it should really come from this application learning. And in this regard, I really urge people to read a book by an economist named James Bessen. He wrote a book many years ago called Learning by Doing, which is really about the Industrial Revolution as it played out in the textile mills of Lowell, Massachusetts.
It's a really fascinating analysis of how and why technology takes [01:14:00] time to diffuse: it's because of this diffusion of knowledge, as people figure out how to actually apply the technology. It's this sort of practical feedback loop, and narrating your work in public accelerates that feedback loop. Anyway, Ethan Mollick is a huge fan of that book as well. hugo: I haven't read the book, but you do mention it in your essay "The End of Programming as We Know It." And one thing that stood out to me was your description of Bessen's account of how, once machinery was introduced into the factories, the people using it weren't unskilled workers. They were skilled workers, just with a very different set of skills. And I think that's really important to consider now as we think about what software engineering is. Maybe it isn't writing code anymore; maybe it's a totally different set of skills that we have. tim: Right. And that's already the case in so many areas. Think about the rise of, say, DevOps, which didn't really exist before. I still remember I wrote [01:15:00] a piece, it must have been 2007 or so, I can't remember exactly, looking at the future of the cloud. A Microsoft cloud VP had said in a conversation with me that in the future, being a developer on our platform will mean being hosted on our infrastructure. And I heard that and wrote this piece saying: that means the people who are running that infrastructure are gonna be really important. And I remember Jesse Robbins, who later was one of the people who helped me start our Velocity conference, which was about web performance and operations, and whose title at Amazon was Master of Disaster, said: we were like the computer janitors, and everybody in our department put your post up all around our cubicles,
'cause you were the first person who said that we were gonna be important. And I think it's a little bit like that now: there are gonna be people who are part of the woodwork here, and suddenly it's gonna [01:16:00] be, oh wait, no, everything depends on you. We need you. I don't know exactly what all those new job roles are, but I am quite confident that they will exist. And again, this goes back to thinking in vectors. You go: here's this thing that is likely to happen. We don't know exactly how it's going to happen, but we can see it. In that particular case, it was that more and more applications were going to be hosted on big cloud infrastructure. To go back to the very early days of Google: we were a little too early, but we said, we have to write about everything that Google does, because eventually everybody's gonna do it. It was like, Google's a one-off, and then after four or five years, everybody needs to learn what Google knows, right? Big data, Kubernetes, all the things. And I think in a similar way, there are gonna be people who deploy large-scale AI applications. Somebody first [01:17:00] figures out how to do the really good AI agent front ends, and then everybody goes, oh wait, we don't just need a web front end and a mobile front end; we have to have an AI agent front end. And all of a sudden, this is the piece where I quote Bret Taylor, in "The End of Programming as We Know It": suddenly agent engineer is gonna be a new job title, the new version of front-end engineer, and Chip Huyen is all over what the AI backend engineer looks like. hugo: Exactly. And partly to Chip's point, an undercurrent of what you've been saying is: how are skills rearranged? How are tasks rearranged?
And to be clear, technology rarely automates jobs; it automates tasks, and we collectively, in some fashion, figure out what jobs are and how we reorganize things so that jobs can create value. That doesn't necessarily happen consciously, per se. But part of Chip's point, and something I love that you talk about as well, is that if [01:18:00] we look at the type of skill sets emerging, which is perhaps to manage software that's doing certain things at different points in time and interacting with the real world, if those are the skills, then people who've been working in machine learning engineering, machine learning software, data product building, and these types of things have already been doing that for the best part of a decade, if not longer, right? tim: Yeah. I think one of the ways to think about it, and in a certain way, yes. I gave a talk, I'm trying to remember how far back it was, but it was basically about this idea that at companies like Google, the job is actually managing a bunch of software workers. This was already the case 10 years ago. Google has this vast infrastructure. Or think about Amazon: who actually takes your order? It's a program, right? If you think about it that way, what jobs do all the people at Amazon have? There are people [01:19:00] who manage that infrastructure, people who design the interactions, people who monitor whether it's working and how to optimize it. There's this massive optimization at Amazon. So there are all these software workers, and your job is to manage those software workers. Now if you roll that vector forward, you go: oh wait, this current generation of AI is just a general-purpose software worker, as opposed to a custom-built software worker, and all the tools that were required to manage a software worker kind of already exist.
They just have to be updated for general-purpose software workers, who now are gonna be doing more things. If you think about a Google or an Amazon, they have a bunch of special-purpose software workers, just for their business. And now every business will have a bunch of software [01:20:00] workers across every aspect of their business, and they will now have to have the ability to evaluate how well those workers are doing their job. What does it do to business performance when they change something, or when they interact in a different way? How do we figure out what resources they need so they don't get bottlenecked? All kinds of things like that have now basically moved from the domain of vast companies like Amazon and Meta and Google, where they do this at scale with specialized software workers, to general-purpose software workers in every business. hugo: I love it. I haven't quite thought of it in these terms before, but it puts Jeff Bezos's big mandate in a different light for me. To remind listeners what that is: it's something like, all teams will expose their data and functionality through APIs, or service interfaces, or something like that. And that was incredibly [01:21:00] prescient: to recognize that if we're gonna have modularity, and be able to switch things in and out, and have software communicate with itself and people manage that, you do need APIs to achieve that. tim: Even internally, yeah. And so I guess we're developing this new science of how that works when you have a bunch of general-purpose tools that can be used. We have to figure out how they communicate within the company, for example; what are the internal mechanisms? I do think maybe it will be APIs, but as I mentioned earlier, I do think that in a certain way, an API may be an anachronistic concept when you can just have the two AIs, in some sense, communicate.
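Tim's point that every business will need to evaluate its "software workers" can be sketched concretely. Here is a minimal, hypothetical eval harness in Python: `run_worker` stands in for a real call to an LLM-backed agent, and each case pairs a task with a rubric check. All names, tasks, and canned outputs are invented for illustration; a real harness would call a model and use far richer scoring.

```python
# Minimal sketch of an eval harness for a "general-purpose software
# worker". run_worker is a stand-in for a real LLM-backed agent call.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # rubric: does the output pass?

def run_worker(prompt: str) -> str:
    # Stand-in for the agent: returns canned outputs so the sketch runs.
    canned = {
        "Summarize the Q3 report": "Revenue grew 12%; costs were flat.",
        "Draft a refund reply": "We're sorry, your refund is on its way.",
    }
    return canned.get(prompt, "")

def evaluate(cases: list[EvalCase]) -> float:
    """Run every case through the worker; return the fraction that pass."""
    passed = sum(case.check(run_worker(case.prompt)) for case in cases)
    return passed / len(cases)

cases = [
    EvalCase("Summarize the Q3 report", lambda out: "12%" in out),
    EvalCase("Draft a refund reply", lambda out: "refund" in out.lower()),
]
print(evaluate(cases))  # 1.0 for these canned outputs
```

The interesting work is in the `check` functions: that is where "is this worker doing its job?" gets encoded, and it is exactly the kind of thing Tim suggests every business, not just the Googles and Amazons, will now have to build.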
You know, I guess it's the difference between a railroad and a road. On a railroad, [01:22:00] you go along the tracks, with one powered vehicle hauling a bunch of passive cars. On a road, every vehicle is independent. APIs are the equivalent of railroads, and what we need is the equivalent of roads. hugo: Without a doubt. And we're starting to see early cases of that. So if people haven't checked out the Model Context Protocol from Anthropic, definitely check that out. It's an open standard that enables all of us to build secure two-way connections for LLMs. It was originally released last November, and now people are really jumping on it. It's a first approximation, but I suppose once again, we're seeing that we're in such early days, where we're still figuring out how to, as you point out, figure out whether summarizations are good. Thank you so much, Tim. That was such a great conversation. tim: All right. Thank you for hosting me. hugo: Thanks so much for listening to High Signal, brought to you by Delphina. If you enjoyed this episode, don't forget to sign up for our newsletter, follow us on YouTube, and share the podcast with your friends and colleagues. Like and subscribe on YouTube, and give us [01:23:00] five stars and a review on iTunes and Spotify. This will help us bring you more of the conversations you love. All the links are in the show notes. We'll catch you next time.