Generative UI and React components with Malte Ubl ===

Tejas: Hi, and welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at logrocket.com. I'm Tejas Kumar, and today we have Malte Ubl, the CTO of Vercel, here to talk about generative UI and React components. Malte, welcome.

Malte: Hey, thanks for having me.

Tejas: I'm so excited to dive into generative UI and React components, following the recent talk you gave at dotJS in Paris. Maybe we can start by laying a foundation. Generative AI, or GenAI, is a term that's in the zeitgeist, but generative UI may not be for some people. So why don't we unpack: what is generative UI?

Malte: Absolutely. I can give a bit of history of where we came from at Vercel. About a year ago, AI was obviously blowing up, and there was this relatively obvious idea: why can't I type a prompt into some web interface and get out a perfectly running UI? We and others were hacking on this, and at some point we got it working really well. We called the product we made from that v0, at v0.dev. What that thing does is take a prompt and generate React code for you. That seeded the idea in our heads that there's something here, you can actually generate UI, but that's the most advanced version. We took a step back and thought: wouldn't it be nice if I could talk to an AI over text, but also use UI components, which I might already have written in React? So we set out to build a system that lets us say: hey, AI, here are the UI components that we have, and they are now part of your vocabulary. So if I ask you to, I don't know, change my password, you don't have to ask me for my password in text. You can give me the change-password UI dialog, because it's part of your vocabulary. When we talk about generative UI, we really talk about this: we teach the AI about your business UI, have the AI respond with UI instead of text, and also have it take input from the UI as the user makes changes and bring that back into the awareness of the AI.

Tejas: I'd love to dive into that in a little more detail. You mentioned teaching the LLM, or teaching the AI, to respond with these components. When we hear words like "teaching," we think about machine learning, reinforcement learning, fine-tuning. Is that how, or is it some other mechanism by which we explain to the AI how to respond with components?

Malte: Yeah, this is a really good question, because it's actually none of those things. I would classify how you use an AI into maybe three big buckets. The first one is the important one, which is that you prompt it: you give it instructions and then you get a response. The other tiers are tiers of training, but we're not in that land. We are in the land of prompting, but prompting has evolved from the original text inputs as well.
The most relevant evolution here is function calls. It's an abstraction, and we don't even have to get deep into it, but the big insight is: React components are functions, and AIs can make function calls, so you can basically give your React components to the AI. From the AI's perspective, it calls the component, and then we render it to the screen, if that makes sense.

Tejas: Yeah, that makes total sense. It's a very fitting pair, because as you mentioned, React components are just functions. If an LLM can call a function, then it can invoke a React component. React components return a tree of VDOM, or JSX. So how then do you go from taking this value returned from the LLM's function call to rendering it on the screen? I guess that's the question.

Malte: It's really worth thinking about function calling as more of a metaphor. The key part is that when an LLM calls a function, it really calls a function that you as a developer wrote, and you can do whatever you want in it. It seems a bit abstract, but really, you can just render from that function. It's in the end up to you, and the AI SDK does the plumbing, but it's just plumbing; there's no particular magic. There's also not a very prescribed lifecycle. Obviously the React component returns some JSX, and you render that to the screen. On the other hand, when it comes to what the AI then learns from that UI, we think of that in terms of AI state, not necessarily in terms of a function return value. And sometimes there's also nothing to return, right? If you ask "what's the weather in Paris," the AI calls your weather component, but then you're also done. There's no step two, if that makes sense, because it's pure display.
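[Editor's note: a minimal sketch of the pattern Malte describes, using the AI SDK's `streamUI` from `ai/rsc`. The `WeatherCard` component and `getWeather` fetcher are hypothetical stand-ins; only the general shape of the API is shown.]

```tsx
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Hypothetical: your existing React component and data fetcher.
import { WeatherCard } from '@/components/weather-card';
import { getWeather } from '@/lib/weather';

export async function answer(prompt: string) {
  const result = await streamUI({
    model: openai('gpt-4o'),
    prompt,
    // Plain-text responses still render as text.
    text: ({ content }) => <p>{content}</p>,
    tools: {
      // This "function" is now part of the model's vocabulary.
      showWeather: {
        description: 'Show the current weather for a city',
        parameters: z.object({ city: z.string() }),
        generate: async function* ({ city }) {
          yield <p>Checking the weather…</p>; // streamed loading state
          const weather = await getWeather(city);
          // The return value is JSX, rendered straight to the screen.
          return <WeatherCard city={city} weather={weather} />;
        },
      },
    },
  });
  return result.value; // a renderable React node
}
```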
Tejas: This is really interesting. One gotcha when working with LLMs is determinism, right? Meaning for every prompt, you get a different output. Can we call these functions reliably? For example, if you ask for the weather in Paris, what are the odds that you actually call the weather-widget function and return generative UI, as opposed to just getting a text-based output?

Malte: I would actually answer a slightly different question from the one you asked, because that's a good question we can talk about specifically, but the more interesting part is that the pure existence of that AI component actually takes a lot of non-determinism out of the application. Think of your possible response space: the AI can always choose, okay, I'm just going to respond with some text, and riff off on some tangent you never planned for. But at least if it hits your UI component, you know exactly what's going to happen. And especially once you go beyond examples like weather, that's super powerful, because nowadays people put these AIs into their business processes. So there's this notion that, yes, you can chat with the thing and it's super flexible, but eventually you put people into a funnel of some use case. That funnel is one you designed, and it takes a lot of the anxiety away from how you deal with AI, because it removes entropy from the things that could possibly happen.

Tejas: You constrain the possible paths a user can take. Absolutely.

Malte: Exactly. To answer your specific question: it can indeed happen that the AI decides not to call your function, and that itself is really an exercise in prompt engineering. The way the AI SDK tries to avoid this problem is that it uses Zod to describe the interface of the function. Zod is mostly used for typing, but one of its really useful features, which the team, in their infinite wisdom, added long before people were talking about using it for AI, is that literally every type has a describe function you can call to give extra information, beyond what you care about from a pure TypeScript typing perspective. When you use the describe function with the AI SDK, this information becomes part of the prompt. So you can be very precise: you don't just have a getWeather function with a location parameter, which the AI is probably smart enough to understand is a city. That's an easy example, but you can provide more contextual information, for example which regions are supported, whatever you can imagine. You just write more text to get more precision. In the end, it's a classic ranking problem with certain probabilities involved, and I'm not going to say things can't go wrong, because they absolutely do. But as I was saying earlier, on net, this does reduce the risk of very unexpected behavior.

Tejas: Right. So an exercise for listeners to make the invocation of their React components, the function calls, more deterministic is to consider the describe function of their Zod schemas. That's where you can fine-tune this; you can make sure it's called, say, nine or ten times out of ten, based on how you describe each parameter.

Malte: Exactly.
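[Editor's note: a hypothetical weather-tool schema showing the Zod `describe` mechanism Malte refers to; the AI SDK forwards these annotations into the prompt.]

```ts
import { z } from 'zod';

// The TypeScript type alone only says "string"; .describe() adds the
// contextual detail that ends up in the model's prompt.
const weatherParameters = z.object({
  city: z
    .string()
    .describe('City to fetch weather for. Only cities in supported regions; ask the user to clarify if ambiguous.'),
  unit: z
    .enum(['celsius', 'fahrenheit'])
    .describe('Temperature unit. Default to celsius unless the user asks otherwise.'),
});
```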
Tejas: Great. You mentioned AI state; let's talk a little more about AI state and contrast it with UI state. For UI state in React, you have useState or useContext, which is pretty declarative: you get a dispatch function to set the state, and the actual state value. Do we understand it correctly that AI state, in contrast, is really just a history of messages, messages from the assistant, the system, and the user? Or is there more to it?

Malte: There's a little bit more to it, but not that much more. Maybe we move on from our weather example, because it's kind of a one-way street. Although even in the weather example, you would have as AI state: "the weather in Paris is 21 degrees and cloudy." It's nice for the AI to be aware of that, even if its only job this time was to display it, because now you can ask a follow-up question like "should I wear shorts?" and the AI is aware of what it displayed, even though it didn't do the display job itself, and the function was the one that called the weather service. That's the simplest version of AI state. Another good example we've been using is one where you go through a flight-booking process, and part of that is that you have a seat map and you select a seat on it. Again, no magic involved: the way you do the AI state for the seat map is that you write a little function that takes the state of your React component and serializes it to a string. When the user selects a seat, you write a little function that says "user selected seat 17B."

Tejas: Interesting. So the function returns that string, and that string is persisted as AI state.

Malte: And that string is basically fed to the LLM, because it loves natural language, right? You basically translate the selection the user made on this possibly very sophisticated UI into a string the AI can understand.

Tejas: How does one set that? Is there a setAIState or useAIState hook in the AI SDK that's just responsible for translating? Okay. Wow. I've never used that before. That sounds so straightforward. Incredible.
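[Editor's note: a sketch of that seat-map serialization using the AI SDK's `useAIState` hook from `ai/rsc`, assuming the AI state is an array of chat messages managed via `createAI`. The `SeatMap` component and the message shape are hypothetical.]

```tsx
'use client';
import { useAIState } from 'ai/rsc';

// Hypothetical message shape for the AI state.
type Message = { role: 'user' | 'assistant' | 'system'; content: string };

export function SeatMap({ seats }: { seats: string[] }) {
  const [aiState, setAIState] = useAIState();

  function selectSeat(seat: string) {
    // Serialize the UI interaction into natural language the model understands.
    const message: Message = { role: 'system', content: `User selected seat ${seat}.` };
    setAIState([...aiState, message]);
  }

  return (
    <div>
      {seats.map((seat) => (
        <button key={seat} onClick={() => selectSeat(seat)}>
          {seat}
        </button>
      ))}
    </div>
  );
}
```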
Tejas: What are some places the AI SDK is being used in the wild? I have to ask because, and I'm not unique here, I think a lot of listeners use a lot of AI tools these days. The one I use the most is Perplexity, and I think that's a great example of extremely stateful UI. They don't do UI in their textual responses yet, but I can, for example, go back to a conversation I had two weeks ago with something like Perplexity or even ChatGPT and just ask, "hey, clarify that," and it's aware of the state even though it's a very old conversation. I have to assume there's some persisting of state there. Anyway, that's just my long-winded way of asking: where might we see the AI SDK in the wild today, doing some of these things?

Malte: Since you mentioned Perplexity, there is a really cool open source project called Morphic, at morphic.sh, which is a Perplexity-like UI implemented on top of the AI SDK. It's a great example to look at for more advanced ways of doing the same thing. One really good story of our own is that we built our v0.dev tool before the AI SDK existed, took the lessons from it, made the AI SDK from them, and then later ported v0 back on top of it. So that's a very concrete use. We actually have a very exciting new version of it coming out; a bunch of people are testing it already, so it's imminently launching. That one is much more traditionally chat-forward, with a lot of AI/UI integration happening in there. In everything AI, it's early days, but just judging from the npm install numbers, there's very strong adoption. The most straightforward use case is that people start writing these chatbots for their business. It might be customer support, it might be the app that the customer-support agent is using, or it might be something like a ChatGPT bot for my megacorp, with RAG access to all the documents in my system. For these types of apps, the AI SDK first of all makes it easier to build the chat itself, because there's lots of streaming involved, and that's actually pretty difficult code to write. Having something that does it for you is in itself nice. And then it has the cherry on top of the React component integration, which I think is useful for all these use cases. As I was mentioning earlier: if you ask a support agent a question, isn't it so much nicer if, instead of responding with a list of the steps you need to do, it just says, okay, you want to change your password, type your...

Tejas: Here's the widget. Yeah.

Malte: Exactly. That's also why I keep coming back to this example, because fulfillment of user intent is what the AI SDK is for, basically.

Tejas: I want to get your thoughts on something maybe somewhat radical, because right now, even with generative UI, the main form factor of interaction is still a chat interface, right? The only difference is the LLM will respond with React components.

Malte: Yep.

Tejas: What about a scenario where there's, say, an e-commerce website like Amazon, or some competitor, and the recommendation section? There's no user prompt here, so it's not generated by chat per se. But some developer, or the development team behind this hypothetical e-commerce application, instead of doing, I don't know, a GraphQL query to get products by user preference, generates that section with an LLM. The prompt is: I'm a user, and by RAG, these are my last few purchases; generate some UI. And then this section, these product recommendations in, say, a table layout, is just generated by an LLM. Do you think that's a realistic use case, maybe an upcoming trend in the industry, to have portions of UI come from an LLM? Is there a benefit to that over the traditional way?

Malte: Yeah, absolutely. Very concretely, for that use case, the feature of the AI SDK you would use is basically JSON coercion. There's no function calling involved; you just specify a Zod schema for the output you want. Super top of mind: just yesterday, OpenAI launched support for this, to make it more reliable. But with the SDK, you have the benefit that it just works on every model you might choose, and especially here, you might want one that's not as expensive. In that world, you assemble your prompt, you say, please output this JSON, ideally in the same format that your product-recommendation React component likes to be fed, and voila. This is actually a relatively straightforward application to build.
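[Editor's note: a minimal sketch of that recommendations flow with the AI SDK's `generateObject`, which coerces model output into a Zod schema. The schema and the `lastPurchases` input are hypothetical; ideally the schema mirrors the props your existing recommendations component already takes.]

```ts
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Hypothetical: shaped to match what a <ProductRecommendations /> component expects.
const recommendationsSchema = z.object({
  products: z
    .array(
      z.object({
        name: z.string(),
        reason: z.string().describe('One sentence on why this fits the user.'),
      }),
    )
    .max(4),
});

const lastPurchases = ['trail-running shoes', 'insulated water bottle'];

const { object } = await generateObject({
  model: openai('gpt-4o-mini'), // a cheaper tier may be good enough here
  schema: recommendationsSchema,
  prompt: `Recommend products for a user whose recent purchases were: ${lastPurchases.join(', ')}.`,
});

// object.products is validated against the schema; feed it straight into the component.
```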
Tejas: In terms of cost-benefit, this is an interesting game of economics, right? It may be more expensive to run, because of the LLM generation cost, but it's also cheaper to build, because you don't have engineers creating strong recommendation algorithms and then implementing the UI for them. Maybe I'm asking a question that's too early to answer, but would this be cheaper or more expensive? What's our best speculation here?

Malte: I definitely agree that there's a disruption here. This used to be super hard: you'd need to hire specialists and probably had a six-month project ahead of you, and now you can ship a prototype in a day. There's literally nothing stopping you from doing this; it's that straightforward. But that doesn't mean the quality is amazing, and maybe we can have a different conversation about the quality aspect, because it is super interesting. The really disruptive thing with these AIs is that they do something reasonable really quickly. And cost and performance are tightly correlated, because you basically pay for reserving a bit of a GPU: the quicker the answer comes, the less money you pay in the end. So the process that definitely works is: you build a prototype with a frontier model, and then you drive costs down by going from the frontier model to something quicker. Now, I know you've looked at this in the past, and there was this notion of: do we use fine-tuning or not? I would have given a different answer maybe six months ago than I do now, because what's really remarkable is that, starting probably with Google, then quickly followed by Anthropic and now OpenAI, all these providers have these suites of models. You have the frontier model, a mid tier, and a low tier. Certainly for Google and Anthropic, it's really nice because the tiers have kind of vibed together. You use the frontier one because it can reason, but you don't really need that for your product recommendations. You can write a prompt that's very specific, that tells the model how to do this and what you actually care about, so you can prompt-engineer the much cheaper model to do something very reasonable, just by being more specific about your expectations and about what, for example, drives recommendations. And that's, I think, really interesting. Because they form a suite, things like JSON coercion or function calling all work in this vertically integrated fashion. Versus in previous land, where you said: okay, I tried this on GPT-4, it's great, but it costs me 50 cents a generation, and I'm only making maybe 5 cents on average per item, so that's a bad deal, and now I need to go down to GPT-3.5. But that was a very different model, much dumber, and it just vibed different from GPT-4. Because that's now different, this tiering down on the model side has become substantially easier, and it doesn't require fine-tuning.
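[Editor's note: because the AI SDK abstracts the provider behind a single model handle, tiering down is often a one-line change. A sketch assuming `generateText` and the OpenAI provider package; the model ids are just examples.]

```ts
import { generateText, type LanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';

// The call site is identical for every tier; only the model handle changes.
async function recommend(model: LanguageModel, purchases: string[]) {
  const { text } = await generateText({
    model,
    prompt: `In one sentence, recommend a product for someone who bought: ${purchases.join(', ')}.`,
  });
  return text;
}

// Prototype on the frontier tier...
await recommend(openai('gpt-4o'), ['trail-running shoes']);
// ...then drive cost down a tier, paired with a more specific prompt.
await recommend(openai('gpt-4o-mini'), ['trail-running shoes']);
```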
Tejas: That's a good answer. I'll say this: I have seen quite a bit of success with fine-tuning, especially using something like OpenPipe, which is a fine-tuning-as-a-service company.

Malte: Yeah. I'm not saying you should never use it, but it's really cool to have the ability to not quite go there. Fine-tuning has certain strengths and weaknesses, and it might not always be applicable.

Tejas: Yeah, I think it's just really, really expensive. That's the biggest weakness fine-tuning has, because a lot of people think the training process itself is expensive, which it is, but there's a big misconception that it stops there. Once you have a trained model, you've got to do evals, evaluate the outputs, and that is also expensive and very time-consuming. Oftentimes the model outputs nonsense, because it's garbage in, garbage out with the data, and then you've got to go clean the data. That's making a case for probably RAG here, or multi-LLM orchestrations, as you've mentioned. I'd love to dive a little more into your talk from dotJS. You mentioned a transition from Software 1.0 to Software 2.0, much like Web 1 to Web 2 to, I guess, now Web3 and Web5. Can we elaborate a little on the tenets of Software 1.0 and 2.0, and maybe the differences between them?

Malte: Yeah, totally. There is clearly a disruption with modern AI models, and on that AI angle, the main difference I'm seeing is this. For context, I've been with Vercel for over two years, but I was at Google before, so I'd been building AI applications for a decade, right? Everything we've talked about is what a vast number of employees did the whole time, but it was a big-tech thing and very research-heavy, very knowledge-heavy. The iteration cycles were super slow; developing a new model often involved actually building bespoke infrastructure for that particular thing.

Tejas: Right.

Malte: Fast-forward to now, and where there's a real disruption is that nothing is stopping anyone listening to this from becoming an AI engineer. The barrier to entry is low because the models are really smart, and we even have Copilot and so on helping you do it. So there's this double speed-up making it easy to learn new things. I think there's a real step function that moves us out of this researchy, infrastructure-heavy world to one where you think about user interfaces, and you have these models that are so smart, especially if you start at the frontier like we discussed earlier, that you can rely on the model being smart first and maybe optimize later. You basically start with a product rather than a research paper, right? You start with something that's really nice, say, okay, I want to launch this, and then you think about how. Versus asking: in my two-year development cycle, what could I possibly achieve here?
Tejas: So it's shifting in the way that it's being democratized, if I understand correctly. This invites a really profound question, which we'll probably wrap up with, just to be honoring of everyone's time. There's this, I'd say, foundational article on Latent Space by Sean Wang, swyx, I believe a mutual friend of ours, where he formally defines AI engineering as calling a model as a service. There's this beautiful diagram, which I'm sure you've seen: you've got machine learning research, machine learning engineering, a big line down the middle, which is the API, and AI engineering on the other side of the API. It effectively says that true AI engineering, in terms of an accurate definition, is interfacing via an API to some LLM. You don't train the LLM, you don't start with a paper like you said, you don't do research; in very crude, simplistic terms, you use fetch in JavaScript, fetch from Anthropic or OpenAI, get a response, and use that to solve a problem. Is that statement accurate in terms of representing the role of AI engineering?

Malte: I would say yes. And I'm a fan of the word, but it might not even be inclusive enough, in the sense that it has its time. It's a very useful word, but in five years it will seem archaic, because that's just every software engineer, right? You don't need extra skills. I know this can be daunting, because maybe you even had a college course on neural networks and didn't quite get it, and now you're supposed to do this AI stuff. But yeah, it's just an API. And not only is it just an API, it's really meant to infer from what you tell it what you want; it's really helpful. I've actually rarely seen a lower barrier to entry. Obviously there's specialization and you can get better at it, but the learning curve is, I think, very smooth. Certainly if you're building any form of application today, including a front-end application, you can give it a try and be very successful, without having to become part of a research elite like you would have had to five years ago.
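[Editor's note: in the "it's just an API" spirit, here's roughly what that fetch looks like against OpenAI's chat completions endpoint. The model id and prompt are illustrative; Anthropic's API has the same flavor with different routes and headers.]

```ts
// One HTTP call to a hosted model: AI engineering at its most minimal.
const res = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Summarize this support ticket: …' }],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content);
```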
Tejas: You mentioned "daunting," and I think that's a great way to describe it to some. Another way some would describe it is "devaluing." I'll say this: I caught some flak once for saying exactly this at a conference, and a lot of the high-elite academics in the room were like, oh, you're totally devaluing our very difficult and intense work. Which, of course, is not the intention; I think it's highly valuable work. But when you say the barrier to entry is low, or you make something accessible, what I've noticed is that some people tend to get protective of their field. What can we do? I guess it's a very abstract question.

Malte: The reality is this is actually not unusual. There was a time, actually much more recently than you'd think, where if you wanted to store some data in a database, you had to build the database yourself, and that required very specific skills. Now you don't anymore. Just to be clear, I got my start in software engineering in the nineties, and MySQL was really young; you had to get Oracle, and you couldn't afford it, so you had to build your database yourself. But then, around 2010, not all that long ago, Google and Amazon would build new databases because the ones on the market didn't do what they needed, right? That needed these very specific engineers, and being able to hire them was a barrier to entry to becoming a large-scale cloud company. A few years later, Amazon puts it on AWS and you can have DynamoDB, and you can build a planet-scale application without any particular distributed-systems engineering background. So we are going through a very similar disruption, and it's normal; it's the most normal thing in the world. And certainly, in practice, the people on the expert side of machine learning are clearly benefiting from this, because the market for the application of their skills has grown from relatively small to infinite. I think they're all doing really well. They're fine.

Tejas: Yeah, good. I like that take. Listen, Malte, it's been a real pleasure having you on the PodRocket podcast, with all the things we talked about, between generative UI, Software 1.0 and 2.0, and the future. You mentioned the upcoming launch to drastically improve v0; we'll keep our eyes out for that. On behalf of myself and all the listeners, thanks so much for coming and chatting with PodRocket today.

Malte: Thanks for having me. This was super fun.