The future of serverless is WASM with David Flanagan === Noel: [00:00:00] Hello, and welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket provides session replay and analytics which surface the UX and technical issues impacting user experiences. You can start understanding where your users are struggling by trying it for free at logrocket.com. I'm Noel, and today we're joined by David Flanagan, founder of Rawkode Academy, and he's here to talk to us about WebAssembly being the future of serverless. How's it going, David? David: Very well, thank you. Excited to be here and talk about future-facing technology, which always puts a smile on my face. Noel: I think this is maybe one where a lot of devs will have questions right off the gate: why WebAssembly is something we care about or need server side. Portability was always one of the big selling points of WebAssembly, and I think there are benefits of that server side. But can you paint that picture a little bit and refresh us on what WebAssembly is and why we might care about it outside of the [00:01:00] context of browsers? David: Yeah, let's start with the browser context first. WebAssembly started off as a way to increase the performance of mission-critical code within the browser. People use WebAssembly applications every single day and probably don't realize it. I'm talking about Figma, the Adobe creative suite in the web browser, and a whole bunch of other places. The reason we needed that is because JavaScript is a great language in a browser, but it has some performance characteristics that aren't ideal for heavy calculations, AI-based workloads, or even canvas-based gaming and drawing, all that kind of stuff.
So people have seen the success of WebAssembly in that fashion and thought, can we do more with it? We get a sandbox that is secure, extremely performant, and you can use it from almost any language, as long as it has some sort of target to compile or transpile to WebAssembly. But to do it on the server, there were a few things missing, namely POSIX, or the things that POSIX brings: file [00:02:00] access, network access, sockets, et cetera. And there was a project that spun out of the WebAssembly working group at Mozilla back in the day called WASI. That is the WebAssembly System Interface, and it aims to provide an interface that runtimes can implement to provide that extra functionality, which means we get all the benefits of a truly portable runtime and sandbox that runs on the server and the browser much the same, except we enrich it with the features we need for server-side applications. So there are a lot of benefits to that. I'm sure we're going to get into them in due course with some more questions, but I think it's an exciting time that we can use WebAssembly from a wide variety of languages, with a sandbox, with performance, and a whole lot of fun to go as well. Noel: I'm thinking a lot of devs are probably comparing this to something like: I could just write some code in the serverless world, either for a container or just in a language, and deploy it without even needing to consider the container layer. If I'm [00:03:00] deploying to a hosted functions provider somewhere, what benefit does WebAssembly have over traditional container deployment? David: Right, the first one that everyone has come across at some point in time is called the cold start problem.
The cold start problem is: if we don't have a hot cache, some container ready and willing to accept the request, then there is a bootstrapping cold start issue, in that in order to run a Linux process, you need to set up the sandbox, which involves a bunch of Linux primitives. That is the user namespace, the mount namespace, all these other namespaces that containers have made easier. But just because the interface to it is easier doesn't mean it's extremely performant. And we know from now over 10 years of container-based production applications that cold start time can be anywhere from an absolute best case of 50 milliseconds up to a worst case of 500 [00:04:00] milliseconds. A lot of people might be listening and going, is that really a bad problem? Half a second? Yes, I know. For some applications, it's fine. But these are normally services that our users are hitting, and we want that performance to be as good as possible for their use case. And of course, we don't know where they're coming from or what their connection is like. There's a whole bunch of different things in the background here. Now, the way we've solved that to date, with containers at least, is to keep them running and have them deal with more than one request. But that leads us to security issues. Are our applications and our services truly idempotent? Do they leave anything behind that could be picked up by subsequent requests and abused for lateral movement? Who knows? Hopefully not, but there is a concern there. So containers don't scale as well as we would like, and WebAssembly's characteristics and performance are very interesting, in that when we measure the invocation time of a WebAssembly, let's call it binary, we're measuring [00:05:00] in nanoseconds. Not milliseconds, not microseconds, but nanoseconds.
If you collate that time across a hundred invocations for a hundred users, we get into some pretty gnarly, interesting numbers that show you why it's so important, not just from a performance standpoint, but from a green energy perspective and from a financial perspective. There's a reason that so many edge runtimes, networks, and CDNs are flocking so heavily to WebAssembly: when you add it all up, that spare compute comes to some huge numbers for sure. Noel: Yeah. I think the question that then occurs to one is: why is WebAssembly, this kind of bytecode layer, the proper abstraction, instead of some other JavaScript intermediary compile step that puts it into bytecode and helps solve this problem? Because I feel like you could conceptualize one that's written by some company, targeting one language, that kind of does this as well. What's neat about WebAssembly versus [00:06:00] some bespoke language intermediary step to solve this problem? David: We do have that intermediary step, right? We have V8 isolates, so you can take any Node.js compatible language and run it under a V8 isolate, and you actually get some pretty good performance characteristics. Not as good as WebAssembly, but decent. But the problem is that for polyglot teams that want to work in multiple languages and deploy to different edge locations, V8 isolates may not be enough. So it comes down to weighing up the pros and the cons. V8 isolates are great, containers are great, and WebAssembly is great, but only one of them is truly portable, and that is the WebAssembly runtime. When we compile to WebAssembly, we can run things on a Linux machine, a Windows machine, a Mac, Android; we can run on ARM processors, x86 processors.
We can run it on Raspberry Pis that have next to no resources, or even single-chip computers at the [00:07:00] edge, in a farm in the middle of nowhere with nothing but connectivity every 30 minutes. WebAssembly does not discriminate against any of this, and it truly is one single binary to run anywhere. If we look at containers as a contrast here, multi-manifest, multi-architecture containers are an absolute pain in the ass. You have to compile for x86 and for ARM. You have to know how to bundle and ship them as OCI artifacts. WebAssembly just provides a much better developer experience. And let's go back to this whole Mac and Linux thing. I won't bring Windows in just now; let's focus on the two main developer platforms, which are Mac and Linux. If you work on a Mac, you've got something that is Linux-like, but not really Linux, which means to run Docker containers, or any other type of container, we need a virtual machine with an endpoint that can run that for us. Virtual machines are great, but they're not exactly highly performant, and they have some seriously annoying characteristics, in that if you're working in an interpreted language like JavaScript or TypeScript, you have to sync that entire file [00:08:00] system into the virtual machine before you can run any commands. That has been a notoriously difficult problem that Docker has been trying to solve for the best part of a decade. On Linux, that goes away. And with WebAssembly, it doesn't matter where we are, because it runs in a sandbox, kind of like a virtual machine, so we get a great developer experience across Mac and Windows without having to juggle and handle multiple operating systems. Which is another key selling point for me. The fact that I can just give a code base to anyone and they can run a single command and be up and running is pretty powerful.
Noel: I feel like that was a lot of what led to the success of containerization originally over the virtual machines of old: a slightly simplified dev experience. But as you just stated, that doesn't always end up happening in large production applications. Things can get tricky and a little tougher when the complexity of the app is growing and the dependency tree is getting more complicated. So that makes a lot of sense to [00:09:00] me. I think probably a lot of devs dove in a little bit, got their feet wet with some hello world stuff in WebAssembly, and then bounced off it and haven't gone back. How has the ecosystem been maturing? Is it easier to get a production app built and up and running with WebAssembly on your server side? David: Yeah. So let me bridge the last question to this question first. Let's talk about someone called Solomon Hykes. He's the founder, or at least the co-founder, of Docker. He literally invented an entire ecosystem of container orchestration, runtimes, et cetera. He has a tweet from 2018, I think it was, and I'm sure he hates it every time I bring this up, but he said that Docker wouldn't have had to exist if WASI and WebAssembly had existed in 2008 when he started Docker. And I think that tells you how important a truly portable runtime is. Now, if we fast forward to where WebAssembly and WASI are today, it's still incredibly early. Very, very early. In fact, the WASI specification that actually enables [00:10:00] you to truly build a server-side WebAssembly application is only on 0.2.0 of the spec. But just because it's early doesn't mean that you can't ship something in production.
There are a lot of really great companies working in this space to provide the developer experience that people need to build these WebAssembly applications, serverless or not. The one that I use most frequently is called Spin, by the team at Fermyon. These are a bunch of people that were very active in the Kubernetes and container space who saw WebAssembly, saw the power of it, and thought: let's bring the same container DX to something that truly is portable. With Spin, you can run spin new. It supports Rust, JavaScript, TypeScript, Python, C#, Java, whatever, any language that you want. They've got templates that allow you to get everything running in one command: spin new, then spin build, then spin up. So three commands in all, but in those three commands, you've got a toolchain that can compile that application to a WebAssembly [00:11:00] binary and run it up with a WebAssembly runtime, providing a whole bunch of great developer-experience-focused SDK methods on top. So let's go back to WASI. WASI is an interface, not an implementation. It defines how to get sockets and file systems and networks, but it doesn't provide them for you. What the team at Fermyon, who work on Spin, have been championing is what made it into WASI 0.2.0, which is the component model. Meaning we have these interfaces provided by Spin, like key-value storage, SQL databases, fetch APIs. They provide their default implementation, and that backend can be swapped out by any component. So while by default the key values might be provided by Redis and the SQL database by SQLite,
anyone who wants to could, in theory, satisfy those same interfaces, swap out SQLite for Postgres, swap out Redis for etcd or Mongo or whatever, and all of that code just works, which gives you [00:12:00] this really cool environment where developers work to common interfaces. Then behind the scenes, operations teams and platform teams can swap that out and migrate things to improve cost, performance, scalability, horizontal scale, whatever those concerns are, and the developers don't need to change a single line of code. To me, that's a really powerful model. When we, as operators and platform teams, can give our developers an environment where they can increase their velocity, ship their code, and not worry about all this stuff, that's kind of what containers wanted to be and didn't quite deliver. Noel: I've spent a lot of time talking to guests recently about this problem, especially in the serverless space. We all like a lot of the ideas of serverless, and even some of the primitives serverless gives us, but there is this problem where a lot of them are a little bit bespoke, or tuned to that specific environment. Do you think this will be the interface that helps fill that void or bridge that gap, to make it easier to deploy a given [00:13:00] application somewhere without having to worry about the nuance of, say, deploying to Cloudflare versus AWS? David: I'll give you a one-word answer and then expand on it. The answer is yes. I definitely think so. That's why I'm talking about this and why I'm shipping more WebAssembly these days. I'll touch on this from two aspects. Let's go back to performance for just a second.
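The interface-swapping idea David just described can be sketched in a few lines of TypeScript. The interface and class names here are illustrative only, not the actual WASI or Spin APIs: handler code depends on a common interface, and the backing implementation can be replaced without touching the handler.

```typescript
// The common interface the platform team promises to satisfy
// (a stand-in for Spin's key-value interface, not the real API).
interface KeyValueStore {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

// One possible backend: a plain in-memory map. An ops team could
// replace this with a Redis-, etcd-, or Postgres-backed class
// implementing the same interface.
class InMemoryStore implements KeyValueStore {
  private data = new Map<string, string>();
  get(key: string) { return this.data.get(key); }
  set(key: string, value: string) { this.data.set(key, value); }
}

// Handler code only ever sees the interface, so swapping the backend
// requires zero changes here.
function handleRequest(store: KeyValueStore, user: string): string {
  store.set("last-user", user);
  return `hello, ${store.get("last-user")}`;
}

const store: KeyValueStore = new InMemoryStore();
console.log(handleRequest(store, "noel")); // hello, noel
```

The component model pushes this pattern below the language level: the interface lives in the runtime, so the swap can happen at deploy time rather than at compile time.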
I mentioned milliseconds, microseconds, and nanoseconds earlier. I gave a talk recently at Talk.js about how important this performance benefit is. Now, it's hard to work with nanoseconds, because people don't really understand the difference. The people I have spoken to do not understand how different a millisecond and a nanosecond are. So one of the things I did in this talk is I said, let's assume a nanosecond is one second, just because it gives us a really good mapping to show what the difference is. If we assume a nanosecond is a second, then in the time it takes a container to do the same workload as a WebAssembly application, or at least the invocation, the bootstrap, the cold start segment, you could listen to Master of Puppets by Metallica something like 8,000 times. If we take the 200 milliseconds [00:14:00] for a container start time versus the WebAssembly binary, you could watch The Matrix 180,000 times before that container caught up with the WebAssembly workload. So hopefully that helps with the performance side. Then let's go back to the developer experience of serverless. Serverless is great from the standpoint that we can keep the developer's mindset simple: you write a single serverless workload, a small Lambda, a single function that does one particular thing. However, our applications are much more complex than that. So while it's great to write individual functions like this, we have to look at the composition of those functions to deliver a piece of software where the sum of the parts is greater than the whole. Now, if we look at AWS, because they're probably the ones that have been pushing us the most with Lambda and their stack, [00:15:00] we then need to go into API gateways, where we define how all these things work together.
There's not really any decent concept of service chaining outside of Step Functions, and everything just gets a lot more difficult to understand, because you can't really show a developer: here, this is what this application does, this is how these services compose, and this is how a request comes through the entire stack. What I love about Spin is that they have built it in a way where they give you the API gateway by default. It's actually generated for you, to the point where every time you add a new service, or component as they call it, to your application, you configure the routing and you get internal service chaining by just referencing the name of the other component. Now, what I love about this is that we can then look at the TOML definition of our application and understand all the endpoints and the services they provide. And we get this added benefit that although we're using the fetch API to service-chain from service A to service B, with what looks like a fetch request, it never goes over the network. [00:16:00] Their runtime actually intercepts that, and you get something that looks more like IPC, inter-process communication, which again adds to the performance benefits. Now, I'm sure this is probably a little too far into the weeds for the audience, or maybe some people love it. But all of these things add up, and you get a developer experience where you run spin build and spin up with multiple services, hundreds of services. And this is where things get even cooler, in my view: we can have one service for every endpoint, or we can handle the routing ourselves. So we get to decide how that composition works as well. And I think this is something I'm probably failing to articulate well, because I think it's a very visual thing.
When you see a Spin application, how it's composed and how everything links together, there's this click that happens and you go, oh, that's really cool. When you're given the choice, but also sensible defaults, you're going to be successful working with this. Now, one of the things I did in the talk I gave at Talk.js, which is so cringy, right, is that I quoted myself. I had this tweet [00:17:00] from five years ago where I said the best developer experience is one where developers can be successful with intuition, or intuitive decisions, rather than informed decisions. We have not had that with serverless yet. You have to be informed. You have to know how things work and how they connect together, and then put it together. With Spin, we can rely on our intuition. We can come back to coding and having fun again. And I think that's an important step forward with WebAssembly, Spin, and everything else in between at the moment. Noel: Just to help illustrate or color this a little for devs listening: if one is writing a Spin application, how are these things specified? How does one dictate how this routing works, or complex internal triggers for calls, versus just deploying to a traditional Lambda? David: When you're working on a Spin application, pretty much everything's in a single directory, if you want it to be. Again, choice, right? You can just do spin new, you get a new application, and that [00:18:00] can be a wildcard entry point: anything goes through as one thing. Now you have the choice. I can just say, okay, send everything here, I'll have my own API gateway and I'll handle the routing. It could be a switch-case statement.
It could be a router, whatever you want, whatever you're comfortable with. It could even be a framework, something like Express or HonoJS. Once that request comes in, you can then add new components. We can say spin add, and we can pick the language. It could be Rust, it could be JavaScript, it could be Java, it could be C#, whatever you want, and each of those does the one thing you want it to do on that endpoint. Where things get interesting is that if we want to use remote components, that's also a thing. Now, this is very early, but you can actually bundle your WebAssembly components and push them to an OCI registry. For anyone that's not familiar, this is how we distribute containers, container images. And in your spin.toml, you can actually say: pull these components from over here, put them together, and make them available on an endpoint. Now, there are already a couple of really cool things [00:19:00] here. All of my components in this application can be whatever language I want them to be, which is a blessing and a curse, right? It can be difficult to go from stack to stack from time to time, but let's be honest, some languages handle some tasks better than others. If we're doing data validation, it may be best to use Rust, with its strongly typed system, or even Go. If we just want to do some fetch requests to remote resources, or write some data to a SQL database, maybe JavaScript and TypeScript are the answer. Being able to pick and choose these components is great. The routing, again, is up to you. You can have everything go through one component and build your own API gateway, or you can map each service to a unique endpoint and the proxy handles it for you. And then you've got the inter-service communication, which is just a fetch API away.
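The "handle the routing yourself" option David mentions can be sketched as a plain switch-case gateway. The component names below are made up for illustration; in a real Spin app each branch would be its own component, and the chaining call would go through the fetch API (which Spin's runtime intercepts in-process).

```typescript
// A minimal hand-rolled API gateway: one wildcard entry point,
// routing to "components" by the first path segment.
type Handler = (path: string) => string;

// Stand-ins for separate components; each could be a different language
// in a real Spin application.
const components: Record<string, Handler> = {
  users: () => "users-component",
  orders: () => "orders-component",
};

function gateway(path: string): string {
  const [, first] = path.split("/"); // "/users/42" -> "users"
  switch (first) {
    case "users":
      return components.users(path);
    case "orders":
      // "Service chaining": looks like a call to another service, but is
      // a direct in-process invocation, much like Spin's fetch intercept.
      return components.orders(path);
    default:
      return "404";
  }
}

console.log(gateway("/users/42")); // users-component
console.log(gateway("/missing"));  // 404
```

The alternative David describes, mapping each component to a unique endpoint in spin.toml, replaces this hand-rolled switch with configuration.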
They try to be opinionated enough that you can get started and follow their best practices, but at any point developers can pull that escape hatch and start to do things on their own, and there's no penalty for doing so either. Noel: I'm thinking about other common invocation patterns, stuff like time- or schedule-based triggers, Redis, cron, and so on. How do those [00:20:00] work in Spin world? David: So everything I've mentioned so far assumes HTTP is the trigger: a request comes in over the HTTP protocol and gets routed. But Spin supports more, and these are all just components themselves as well, so you can build your own. Out of the box, Spin allows you to run scheduled jobs with cron syntax. So we can say: run something every hour, every minute, once a month, whatever, and that service will be invoked for you by the runtime. It also supports some sort of pub/sub. Usually it's Redis right now, but again, you could work with Kafka if you want to write your own components. Postgres, SQLite, whatever. But there is a Redis one out of the box, which says: whenever someone modifies or writes to the Redis queue, we can pick that up and invoke a service [00:21:00] on the backend for you. So there are multiple ways, and this is changing all the time. WebAssembly on the server is early; Spin is currently on v2.4, I think, so it's slightly more mature, but they really are leading the charge. They're the team that defined and created the component model and pushed that forward, and we'll see more components come out of the back of this too. I actually spent a little bit of time last year trying to work out what it would take to build my own trigger outside of cron and Redis. I think it worked out to be around a hundred lines of Rust code. So it doesn't matter what you're trying to do.
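The "triggers are just components" idea David describes can be sketched like this. The shapes below are invented for illustration and do not match Spin's real trigger API; the point is that the same component function can be driven by an HTTP-style trigger or a cron-style tick.

```typescript
// A component is just a function the runtime invokes with a payload.
type Component = (payload: string) => string;

const component: Component = (payload) => `processed:${payload}`;

// An HTTP-style trigger hands the request body to the component.
function httpTrigger(body: string): string {
  return component(body);
}

// A scheduler-style trigger invokes the same component on a tick.
// A real runtime would drive this from cron syntax like "0 * * * *".
function cronTick(): string {
  return component("scheduled-run");
}

console.log(httpTrigger("hello")); // processed:hello
console.log(cronTick());           // processed:scheduled-run
```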
Again, if it's Kafka, if it's some sort of remote API or GraphQL subscription, you could build this all in. It's not trivial, but it's not difficult either. Noel: Yeah. I think this probably warrants us zooming out a little. How are most teams deploying Spin applications right now? David: So we're quite a small community. I can't say I'm speaking to thousands of people that are doing this, but the people I do speak to at conferences are just deploying on Fermyon Cloud. It's got a free tier, so people can deploy up to five applications for free and just get started, [00:22:00] and it's just a command. However, there's nothing special about the cloud, not to take any credit away from what they've built. They have a project called SpinKube, where you can deploy their runtime into a Kubernetes cluster, distribute your Spin applications as OCI artifacts, and deploy them in the same way that they do. Obviously they've got some things they do on top of that to make it more enterprisey and scale for cheaper and all that, but you can do it that way too. The other thing is you can just spin up a cheap Linux machine, a VPS on whatever cloud provider you like, and run the spin command on that machine, and you're going to get some great performance characteristics from that alone. You may not get the high availability and redundancy of scaling to multiple machines, but to be honest, most people just don't need that. Especially when the performance and the memory and CPU consumption of these applications is so small, you can get a long way with not a lot of hardware. So Fermyon Cloud is the easy approach. I would encourage everyone to start there.
But if you do need to do [00:23:00] something more scaled-out or enterprise, whatever your constraints are, look at the SpinKube project, or just run Spin on a Linux machine. It works pretty well. Noel: If deploying to a Kubernetes cluster, I guess I'm curious: we talked about the inherent cold start problem with clusters in general. How does that not get reintroduced if we're deploying even a Spin application? Because there's going to have to be a layer there where containers are being created at some point. Sure, we can be clever with having stuff ready, but it seems like we've just introduced an additional layer of things needing to be started up here. David: Okay, so this is where you have to dive into the weeds a little bit about how SpinKube works. When you deploy your Spin application to a Kubernetes cluster, you are not getting a single container for every single request that comes in. All you're doing is running a WebAssembly runtime in a long-running container, which then has its own WebAssembly sandbox for each [00:24:00] request. Kind of like a CGI model or a unikernel model, depending on your background and what you're familiar with. So as long as you can run a handful of containers, 5, 10, 100, whatever, depending on the size of your infrastructure, they're running the Spin runtime, or any other WebAssembly runtime. You can mix and match, whatever you want here. It's going to pull, or at least make sure, your WebAssembly binary is available, and as your request comes in, all it's doing is that WebAssembly invocation, not a container cold start.
So then we're measuring things in nanoseconds rather than milliseconds again, which means we can push an awful lot through a single container but still get the same sandboxing characteristics and the performance that we need. The best of both worlds is what I'm trying to say. Noel: I'm thinking of the utility one gets when using a Lambda natively, right? It's a little bit easier for me to talk to other AWS services out of the box, or Cloudflare key-values and things like that. Is some of that developer convenience inherently lost here? [00:25:00] Or is there any simplified way to get to some of these services if you're using a cloud provider spun up specifically for these WebAssembly applications? David: Yeah, okay, I getcha. I think there are two approaches here: what we have now, and where we could be in the future. Where we are now is no convenience methods. If you need to speak to SQS or SES or GKE, or whatever the cloud providers call their services, you have to use their SDKs. Now, there's a bit of nuance there.
Because our WebAssembly applications can be in any language: if we're using Python, we've got access to Boto; if we're using Rust, we have the Rust SDK. So you don't lose any of this functionality, but the caveat is whether their library is built in a way that it can be compiled to WebAssembly, because right now, not everything can. So it could be that you're makeshifting your own gRPC or HTTP SDKs to speak to these services, which would suck, absolutely suck, but fingers crossed you can just use those SDKs. It's not something I have tried, but I [00:26:00] think, given the state of things right now, you'd probably be successful. But that's not where we want to be in the future. I think as WebAssembly on the server picks up more traction, we're going to see these cloud providers come in and want a piece of that action, to the point where we may see variants of the Spin runtime, or Wasmtime, or Wasmer (there are loads of WebAssembly runtimes), cloud provider versions of those where they provide interfaces for their services. Meaning you get an AWS SDK for Spin that lets you do an SQS send message, and it just works without you really having to understand what's behind the scenes. And of course, the Fermyon team are probably going to try and make this work too. They're open source contributors; they have been for decades. I don't think it's a short-term plan, but long-term you could see Spin for AWS, where they provide components to do this for you. I don't have any insider knowledge there, it's speculation, but I suspect we will see a future where this is easy to hook in, the [00:27:00] same way the containers did it. We've got cloud connectors for Google, where there are container integrations with everything, or Kubernetes integrations and APIs to speak to Google Cloud.
AWS have their own, I can't remember what it's called. The same will happen for WebAssembly, as long as the adoption is there. As long as people are listening to this and going, hey, this sounds really cool, and we see that hockey stick moment where everyone's going, woo, WebAssembly! Then the cloud providers will listen, much in the same way they did with containers and Kubernetes. Noel: Yeah. Do you think there's any possibility, if we can put on our long term hats here and speculate a little bit, that this WebAssembly shift, if it ends up manifesting very strongly, will be a thing that most devs are actually consciously thinking about and deciding on? Or is this one of those things that potentially just ends up happening under the hood, without a conscious effort on behalf of the dev that's actually writing their application? David: I am kind of bullish on [00:28:00] WebAssembly. I'm inclined to say that I think we will all be working in this way. Not in five years, maybe ten years. Because containers haven't really taken over a hundred percent, but they are almost the de facto way: local projects have a docker-compose YAML file, and they spin up all the services and get to work. I would like to think, just from a truly portable nature, that developers in 10 years' time have some components running which may be in containers, right? Not all of our databases are going to be running on a WebAssembly runtime, but we will have components where the containers can be spun up on cloud services or locally in a VM, whatever, and our WebAssembly workload speaks to them. Even to the point where most people might not even realize they're compiling to WebAssembly.
There's nothing from the spin build and spin up commands that tells you you're in WebAssembly, except that you know you're compiling to WebAssembly. And that's where development should be. We should have an experience where we focus on our code, our functions, and our tests. It doesn't really matter what's behind the scenes. With containers, there are so many frustrations and head-banging moments that you know you're working with containers. And WebAssembly can make some really [00:29:00] big strides there from a DX perspective. So I'm bullish, but in 10 years' time, hopefully I'm looking back and going, yeah, I was right, everyone's now using this. Noel: It seems like one of those things where it's hard to speculate, but at least right now it feels like a lot of web devs are in a world where they're using some kind of higher order framework or tool, right? They're running some command that then goes and spins up containers for them and gives them a web interface to talk to. And as you said, right now they have to make sure Docker is running and do these things. But it does make me curious if the long term version of this is: it's all just WebAssembly, and you don't really think about what's going on under the hood. You run your meta framework of choice, and then a bunch of pieces end up running in WebAssembly without you really knowing it. Do you think there is an inflection point that'll kick that off more?
Are we seeing that right now, or are there a couple more pieces that need to be established in the ecosystem before this hockey stick really starts?[00:30:00] David: The inflection point is now. Let's look at what's happening in the JavaScript ecosystem. We spent the last seven years pushing towards Lambdas and serverless. And Next.js is a good example of this: if we could do everything on the client, that would be amazing, but we've realized, oh, that's not going to happen either. We know Next.js is pushing server side components, React Server Components, because sometimes you just need something running on the backend, and this means our development experience requires Node or Bun as a runtime. We're delivering HTML with client side JavaScript, but we've still got backend services that have to be routed over an API framework like tRPC, or server components, et cetera. We're coming back to: not everything has to run in the browser, because it's a challenge. So I think the inflection point is now, and what I'd like to see is that in order to run Next.js, whatever the next 10 versions are going to be, those server components are being rendered in a WebAssembly module, the API components are being rendered in that WebAssembly [00:31:00] module, and we don't need Node as a condition to run this application, we don't need Bun as a condition to run this application. I'm hoping WebAssembly is the answer there that gives us, again, something truly portable, because Node has so many rough edges as well. I don't know if you've ever had to work with the sharp library. It has to be compiled per environment: ARM, x86, and all this.
So WebAssembly could be a really strong contender to change that developer experience, to allow us to write frontend applications that have client and server side components, and you never really have to worry or think about it. They're just components with code that gets targeted and compiled, and you don't care where. Noel: Are there any major hurdles, or anything you're looking for in WebAssembly, maybe specifically on the server side, where you think, once this problem is solved, we'll really see adoption kick off? David: Just the gaps that I mentioned; otherwise the spec is solid. Noel: Right. Right. David: We did this early, [00:32:00] but much like the Node.js ecosystem, what has happened in the WebAssembly ecosystem is that people are adopting proposal standards before they're accepted. And this is no strange concept in Node.js, right? People were using addon functions before they were ever truly merged, and annotations are being used now even though they're not a standard. So as long as people are willing to buy in, being on the bleeding edge and shaping the future of WebAssembly, there aren't any challenges. The WebAssembly project did move slowly for a while, and that was because, a few years ago, Mozilla essentially fired everybody that wasn't working on certain products, and WebAssembly was a victim there. However, all those people got hired by companies that are doing CDNs, because again, this is where they save a lot of money if they get that invocation time on the edge from milliseconds down to nanoseconds. And we're seeing a lot of movement now towards WebAssembly there, and a lot of financial backing from large companies like Cloudflare, Fastly, Akamai; I'm sure there are other CDNs I'm missing.
So yeah, the challenge has hopefully been overcome, and the future is now; it's on the [00:33:00] horizon. We can see it, and we just need people to share that vision, get involved, and start building cool stuff with WebAssembly. Noel: On that note, what would you recommend to devs that are listening to this, haven't thought about WebAssembly in a long time, but are kind of intrigued? How would you recommend people start researching, or jump in and try to build something? David: Yeah, go to the fermyon.com website, click on Spin, and go through the getting started guide. I promise you, build your first service and deploy it, and you will find the joy of programming again. Just open your IDE, write some JavaScript, and then ship it. It's a fantastic experience. We had this with Vercel back in the day, right? But now we've got a new programming model that doesn't need Node and can run anywhere. I think the joy is there to be had, and people just need to kick the tires. Noel: Was there anything else you want to plug or point people towards as we wrap up here, David? David: Check out the Rawkode Academy on YouTube. I have done courses on Spin, as well as many Kubernetes and container [00:34:00] based things. So if anything I've mentioned today is interesting to people, I hope I have content there to make their lives easier. So please check it out. Noel: Thank you so much for coming on and chatting with me, David. It was a pleasure, a super cool chat. David: Pleasure is mine. Thank you so much. Noel: Of course, of course. Take it easy.
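[Editor's note: for listeners who want to try the workflow David describes, here is a minimal sketch of the kind of HTTP handler you write in a Spin JavaScript project. This is plain JavaScript showing the request-in, response-out shape; in a real project the function is wired up through Fermyon's Spin SDK and run with `spin build` / `spin up`, and the exact SDK API may differ from this sketch.]

```javascript
// A Spin-style HTTP handler: take a request, return a response object.
// In an actual Spin app this would be exported and registered via the
// Spin JavaScript SDK; here it is standalone so the shape is clear.
async function handleRequest(request) {
  return {
    status: 200,
    headers: { "content-type": "text/plain" },
    body: `Hello from WebAssembly! You requested ${request.url}`,
  };
}

// Exercise the handler locally with a fake request object.
handleRequest({ url: "/hello" }).then((res) => {
  console.log(res.status, res.body);
});
```

The appeal David points to is that this is the whole programming model: one function, no server setup, and the compile-to-Wasm step is hidden behind the build command.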