Brendan Irvine-Broque ===

Noel: [00:00:00] Hello, and welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket provides AI-first session replay and analytics, which surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it free at logrocket.com. I'm Noel, and today I'm joined by Brendan Irvine-Broque, director of product at Cloudflare. He's here to talk about Cloudflare Workers and OpenNext. Welcome to the show, Brendan. How's it going?

Brendan: It's great to be here.

Noel: Yeah, I'm excited. I feel like Cloudflare Workers are always near and dear to my heart. We use them a lot, and I always reach for them as my handy little can-do-anything, at-the-edge platform. So I'm excited to talk about some new stuff that's coming. What are some of the big features? What are we talking about today?

Brendan: A couple weeks ago at Cloudflare, we had this thing called Birthday Week. Every year we celebrate our company's birthday by trying to give things back to the internet, and we had this kind of big developer day [00:01:00] during the week where we announced 18 different big updates to the Workers platform. So we've got a lot to jump into.

Noel: Yeah, 18 is a lot. That's a lot of big features. I saw persistent logging was mentioned. Can you tell me a little bit about that, and how that differs from what we've been able to do with logs historically?

Brendan: You know, historically on Workers, you've been able to tail the real-time logs of a Worker and see what's going on right now. But when it comes to being able to look at logs after the fact, you've had to jump around and use a whole bunch of different other tools, push logs to somewhere else. And logs are one of the foundational things that you need when you make your first deploy. You need to just understand what's going on. And so what we built is the first step of an observability platform that we're building into Cloudflare Workers. If folks are familiar with a company called Baselime that we acquired in, I think, March of earlier this year, we've brought a bunch of the stuff from Baselime into Cloudflare, and this is the first step. So it lets you persist logs and query them. And it's the start of our journey towards integrating traces and wide events and other things into the platform natively.[00:02:00]

Noel: Nice. How is the migration? Say you have existing Workers you're using currently, what are you going to have to do to get this kind of persistent logging?

Brendan: So we have a little one line of config, or maybe, I guess to be fair, it's two lines. You say that you enable observability, observability equals true, for your Worker, and you redeploy that, and within a couple seconds you'll start seeing logs show up in the Cloudflare Workers dashboard. So we're trying to make this really simple. There's all kinds of stuff that, as we build this thing out, we'll get into: head sampling and tail sampling, and which logs you actually want to be persisted. But the base case, we think, should be really simple.

Noel: Yeah. Is there any existing searching, filtering, that kind of thing? I imagine a lot of Workers are getting invoked a lot, and there's probably a lot of noise in there for some devs.

Brendan: Yeah, so we do have the ability to do head sampling at the start. So to say, I only want N percent of my logs to actually be persisted.
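A minimal sketch of the config Brendan describes, assuming the wrangler.toml shape from the Workers docs of this period; the sampling field name is an assumption:

```js
// Hedged sketch: enabling persistent logs for a Worker.
// Assumed wrangler.toml stanza:
//
//   [observability]
//   enabled = true
//   head_sampling_rate = 0.1   # assumed field name; persist ~10% of invocations' logs
//
// After redeploying, ordinary console output becomes queryable in the dashboard.
export default {
  async fetch(request) {
    // Structured objects tend to query better than plain strings.
    console.log({ path: new URL(request.url).pathname, note: "handling request" });
    return new Response("ok");
  },
};
```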
But then, we have this concept called [00:03:00] tail Workers, and a tail Worker is a Worker that runs after the Worker that runs your application code. It receives events, and it receives logs and exceptions. And the way we're thinking about it is, we've actually built a lot of this system so far on top of that underlying primitive of tail Workers. With a tail Worker, you can filter which types of events, and define your own logic for what you actually want to be persisted. And so there's lots to build on there.

Noel: I feel like there were a couple of mentions of tail Workers. I want to say another one was in reference to request fees for tail Workers and service bindings. Is that right?

Brendan: You sound like you really use Workers. You're up on all the lingo. Yeah, so one of my favorite things about Workers is this concept of a service binding, and bindings generally. A service binding lets you take one Worker and connect it to another Worker, and then make requests between them. But unlike, say, if you stood up two HTTP [00:04:00] servers and made requests between them, where you're going over the network and having to think about lots of protocols, with service bindings you can just make HTTP requests between them, but you can also use an RPC system that's built into Cloudflare Workers. So it lets you break down an application into a bunch of pieces. But one of the challenges there is that before, we had, as most platforms do, some degree of request-based pricing, and you'd pay for each request to each Worker. It's inexpensive, but it's still somewhat of a tax if you break your application into three or four or five different Workers. And so what we did is we said you should really only pay for that first initial request, because you shouldn't be penalized if, say, you're on a team and your engineering team wants to separate its concerns out from another team. That shouldn't be a driver of what you're ultimately paying in your hosting costs.

Noel: Yeah. If devs want to take advantage of these service bindings, is there much they have to do to get that configured and make Workers callable from another [00:05:00] Worker?

Brendan: Yeah, let me see if I can explain this in podcast form versus with diagrams. You define, on the Worker that has the binding, the initial Worker, that it has a binding to, say, worker B. So worker A has a binding to worker B declared on it. And that's the configuration that gives worker A access to be able to make that service binding call into worker B. You can think of it in the same way as if you set up a route to your application, something that was running on /foo. It's a similar kind of mechanic.
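A hedged sketch of the two primitives discussed here. The config stanzas and the WorkerEntrypoint RPC class follow the Workers docs of this era; the worker names, the add method, and the filtering logic are made up for illustration:

```js
// --- worker-b/src/index.js: exposes an RPC entrypoint over a service binding.
// Worker A's wrangler.toml would declare (assumed shape):
//
//   [[services]]
//   binding = "WORKER_B"
//   service = "worker-b"
//
import { WorkerEntrypoint } from "cloudflare:workers";

export default class extends WorkerEntrypoint {
  async add(a, b) {
    return a + b; // callable like a local function, no HTTP plumbing in between
  }
}

// --- worker-a/src/index.js: calls worker B through the binding.
export default {
  async fetch(request, env) {
    const sum = await env.WORKER_B.add(2, 3); // RPC over the service binding
    return new Response(`sum = ${sum}`);
  },
};

// --- tail-worker/src/index.js: a tail Worker receives the producer Worker's
// events after the fact, so you can decide what gets persisted. The producer's
// wrangler.toml would declare (assumed shape):
//
//   [[tail_consumers]]
//   service = "tail-worker"
//
export default {
  async tail(events) {
    const exceptions = events.flatMap((e) => e.exceptions ?? []);
    if (exceptions.length > 0) {
      console.log(`keeping ${exceptions.length} exception events`); // hypothetical filter
    }
  },
};
```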
Noel: Yep, that makes sense. Is this kind of the recommended way you guys steer devs towards when using Workers now? Because I feel like there are two schools of thought with these serverless platforms in general. There's the kind of microservice mindset, where you make every Worker have just the code for its thing. And there's also the "deploy an Express app into a Worker" approach, where whichever [00:06:00] route happens to be the one that gets called, the Worker does its thing. Which way are you guys encouraging devs towards these days?

Brendan: Yeah, it's a great question. I think for us, we want to make sure that it's always simple. So you don't have to, just to get the basics going, suddenly create 10 different Workers and go all in on some crazy microservices architecture. But we see a lot of benefits in being able to break things apart. I'll give you an example. We have this functionality built into Cloudflare Workers called smart placement, and smart placement can decide automatically: where's the optimal place to run your Worker? Imagine you have a centralized database somewhere. Even if you are making a request to that Worker from Australia, it may actually be fastest to run that Worker somewhere in Virginia, because it's making round trips to that database that's in Virginia. But you may have an application that's composed of serving static assets, and sometimes it has an API route that makes requests to the database. And so [00:07:00] those parts of application logic really need to run in different places to be the most performant. And so where we see service bindings, and this idea of different entry points to Workers, is that you can run different parts of your application in the ideal place. And I think we're only scratching the surface right now, to be honest with you, of how easy we want to make that. It's not incredibly burdensome today, but it does require a little bit of thought, where you're asking: I need to create these service bindings, and where do I define this? And we want to make that a lot easier, so that you can have one code base, and you're not suddenly having to think about monorepos and more and more complexity until you really want to take that on.

Noel: Nice. Yeah. Is anything changing on that front in deployability, like in Wrangler, the CLI? Are there any updates there to wrangle this a little bit, to make it easier to deploy larger, multifaceted applications?

Brendan: I would say watch this space. There's a lot that we're thinking about. When you build these [00:08:00] types of products, you often have these things where one piece of functionality gets out ahead of the next piece. And there's a lot that we probably need to do to think through how this should work in local development and in Wrangler, making those mechanics a little bit easier.

Noel: Nice. How about on the deployment front itself? I feel like one can make manual deployments, or build something into a CI/CD flow that you have running in CircleCI or whatever. Is there anything you guys are looking at on that side to make continuous integration easier?

Brendan: Yeah, one thing we're really excited about is something we introduced earlier this year: gradual deployments. So if you want to mitigate the risk of deploying a change, you can deploy to 1%, 5%, up through 100%. And we use this ourselves a lot at Cloudflare, because so much of what we build at Cloudflare is built on top of Workers. And you could imagine designing a system of your own that said, okay, I merged this pull request and I want to go to 1%. Now I want to wait and soak that, [00:09:00] then go to 10%, and then 100%. That's something that we obviously want to make easier, and that you shouldn't have to design your own logic around. So we're definitely excited about being able to do more in that space, because it's a gradual process. When you start out building an application, of course you want to deploy straight to 100% every time you deploy. You don't have any users at this point, so what would you be doing otherwise? And then it evolves into: okay, we have a lot of customers, we'd better be careful about this change. It's about helping people graduate and grow into that process over time.
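Both features Brendan mentions here live in config and the CLI rather than in application code. A hedged sketch, with the stanza and command names taken as assumptions from the docs of this period:

```js
// Hedged sketch, shown as comments since both features are config/CLI level.
//
// Smart placement (wrangler.toml): let Cloudflare decide where the Worker runs,
// e.g. near a centralized database rather than near the requesting user:
//
//   [placement]
//   mode = "smart"
//
// Gradual deployments (Wrangler CLI): upload a new version without shifting
// traffic, then split traffic between versions and ramp up as confidence grows:
//
//   npx wrangler versions upload
//   npx wrangler versions deploy   # choose a split, e.g. 1% new / 99% current
```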
Noel: Yeah. Nice. That's always an interesting journey, watching a product or a small piece of an app evolve. I think in the serverless world, going from zero to one is a very different experience than going from one to two. So it's interesting to hear you guys thinking about that a little bit.

Brendan: No, totally. It's actually one of the hardest pieces, behind the scenes, of trying to build these types of products, because you never want [00:10:00] to introduce that extra friction of getting started. Not just for business reasons, as Cloudflare, we want to make it easy, but because people just genuinely have different needs at different stages. So how do you layer that in at the right moment?

Noel: I wanted to ask about Workers versus Pages. It's easy to deploy that stuff to Pages, but then there's, I wouldn't say it's a very high friction relationship, but there are just these two mental worlds: this is Workers, this is Pages, and you have to make this decision early on about which thing you're going to lean on more. Is that something you guys are thinking about, trying to make that easier and bridge that a little bit? Or do you think that the relationship is pretty solid already?

Brendan: Yeah, this is a fun one for me in [00:11:00] particular, because actually, before I had joined Cloudflare, there's this GitHub issue somewhere online where I was using Cloudflare Workers and I was using Cloudflare Pages, and I had the exact same thinking that you had of, wait, which of these should I use? And I opened it, and a number of months later I ended up working here. And what we've been working on for a while is actually just that: trying to bring these products a little bit closer together. Because we agree with people that you shouldn't have to pull up this big compatibility matrix to figure out which to build on. We built Cloudflare Pages on top of the Cloudflare Workers platform, and it's grown really fast. And so what we've done most recently is we've taken the kind of native static asset serving that's built into Cloudflare Pages and brought that into Workers. I guess it was possible before to take Cloudflare Workers, wire up Workers KV, and deploy something full stack that had static assets built into it, but this just makes that a little bit more native and easier [00:12:00] and built into the platform. And so we're really excited about that, because there have been different pieces of functionality that have actually been only available in Cloudflare Workers, and this brings them to a wider audience.

Noel: Who do you think will be the main, the target user for this kind of static layer? Because it sounds like those people probably aren't really using Pages. Who do you guys see typically deploying complex enough web apps to Workers where they would benefit from this?

Brendan: I think it's interesting, because so many people already use Cloudflare Pages to deploy static-only websites or things like that, but also full stack, server-side rendered apps, like things that run Remix, et cetera. And so really it's not necessarily about a new kind of use case; it's more that we just already have tons of people deploying those types of applications today to Cloudflare, and we want to make that a little bit easier. And, coming back to the logging functionality, we've brought this persistent logs feature to [00:13:00] Workers, and we want to be able to give that to everybody who's building a full stack app, not just people who are building on Workers.
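A hedged sketch of the static asset serving being described, assuming the wrangler.toml shape from the Workers docs; the directory, binding name, and API route are illustrative:

```js
// Hedged sketch: a Worker serving static assets natively, plus a dynamic route.
// Assumed wrangler.toml:
//
//   [assets]
//   directory = "./public"   # static files uploaded at deploy time
//   binding = "ASSETS"       # lets Worker code forward requests to those assets
//
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/api/")) {
      return Response.json({ time: Date.now() }); // dynamic API route
    }
    return env.ASSETS.fetch(request); // everything else falls through to static files
  },
};
```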
Noel: Yeah, that makes sense. Do you feel like a lot of people do come in with these questions? I guess I'm particularly curious about people running these Next.js apps, or Remix, which you mentioned just now. Is that where a lot of this kind of decision ends up happening for most people?

Brendan: Yeah. I would say people come in with their own framework. One of the things I'm often telling people on our team is that we can have all kinds of our own very clever ways of getting started with our platform, we have a CLI that lets you bootstrap a new application, but people walk in the door with what they have. And I think increasingly people are pushing the bounds, right? Part of their application may be static, part of it may be server-side rendered, and people are trying to push the bounds of performance in all kinds of different ways. And so really what we're always trying to do is to meet developers where they are, and listen, and make sure that our platform can fit what they're walking in the door with, and not try to [00:14:00] be prescriptive. We're not our customers' parents. We want there to be lots of different libraries and frameworks and different things that work really well on our platform.

Noel: Yeah, that's interesting. Even just a couple of weekends ago, I was working on a side project that was a completely statically built Next.js app. So it was totally in static mode and outputting files. And I went through the same thing. I was like, I have three different ways I could deploy this on Cloudflare: I could use Workers, I could just deploy it as a bundle of assets, or I could use Pages. And when you go to the Next.js docs, it says, we recommend you deploy this as a Pages app. And I'm like, I wonder if that still applies if I'm in static mode, because it's just files being served at the end of the day.

Brendan: No, totally. It gets into some fun things. When we were working on this, we had to figure out, okay, if you only have static assets and you're deploying to Cloudflare Workers, is that a Worker? Is that even the right word? There are just so many things you get into when you're having to name things; naming is always the hardest part. But we're just trying to rationalize that. One of the things that drives [00:15:00] me the most nuts out of anything is when we have three or four different ways of doing the same thing on our platform. That's the thing that we're always trying to fight against, because that's what slows you down, and that's what leads you down some path that's, oh, I tried this and now I have to rewind all the way. So this is really us trying to say no: there should be one way, it should be really clear, and it should be simple to get going.
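For context on the "static mode" Noel mentioned a moment ago: a hedged sketch of the Next.js static export setting, which emits plain files that can be deployed as a bundle of static assets:

```js
// Hedged sketch: next.config.js for a fully static Next.js build.
// `output: "export"` writes plain HTML/CSS/JS to ./out at build time,
// with no Node.js server needed to host the result.
/** @type {import('next').NextConfig} */
module.exports = {
  output: "export",
};
```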
Noel: I think it is challenging, because the web landscape is weird right now. I don't know, the big push to SSR is making things a little bit more complicated. There are people trying to do more rendering on the server, or more static stuff ahead of time, and that's way more of a focus than it was historically. The landscape is definitely changing. That's not an easy charge for you guys, I don't think. How do you foresee that evolving? Do you think this SSR push is going to continue to be a focus area for you?

Brendan: I mean, the focus area for us is always what developers are trying to do at the end of the day. [00:16:00] And yeah, I think there are lots of cases where SSR is incredible and it's the right tool for the job. There are certainly cases where having a static site makes a ton of sense. It really just comes down to, I think: how much are you interacting with data? What's the interaction model like? Is the interaction model highly interactive on the client? Is it just navigating between pages? One of the things that we're really excited about, going back to smart placement, is from Sunil Pai. Shout out to Sunil; a lot of people who listen to this podcast are probably familiar with him, and I think he's been on this podcast before. He has this fun demo that shows multiple Workers in that flow, with server-side rendered content, where the initial server rendering happens in a Durable Object, but then it makes a call out to a separate Worker that's located as close to the data as possible, and streams that data back through. And so you get [00:17:00] the kind of best of all worlds: the initial stream of content starts immediately, but as soon as data is available, it flows back through. And so I guess what I'm trying to say is that what we're trying to do is to think about: where are these dichotomies and tradeoffs that developers are having to face down, where they're like, oh, either I go all in on this one approach or on this other one, and they both have benefits, but also downsides, maybe? And how do we create platform primitives that frameworks can all take advantage of, and give developers a landscape that could be a little bit simpler, where they're not having to reason through so many different modes all the time?

Noel: This is as good a segue as any. I want to talk about OpenNext a little bit, and this idea of abstraction and an easier way for devs to think about and deploy these things. Where did you guys fall into the OpenNext landscape? Thinking about the package [00:18:00] and how it should integrate, were you guys pretty instrumental in pushing for certain abstractions there, or do you think the spec was pretty well defined? How did that come to pass?

Brendan: We talked to developers, and people have Next.js apps and they want to run them on Cloudflare. And we have had an approach for this for a while. It's called @cloudflare/next-on-pages, and it takes the build output of Next.js and transforms it into a format that can run on Cloudflare Pages. We obviously wanted to support that on Workers, but we also needed a way to support the Node.js runtime that's built into Next.js. When you look at the compiler code for Next.js, it has these constraints built into it that say: this is how I'm going to compile and output an app that targets the Node.js runtime, versus the kind of Edge runtime that's built into Next.js.
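A hedged sketch of the two approaches in play here; the package and command names reflect the respective projects around this period and should be treated as assumptions:

```js
// Hedged sketch, shown as comments since both are build-time steps.
//
// The Pages-targeted approach Brendan mentions, run after `next build`:
//
//   npx @cloudflare/next-on-pages    # transforms Next.js build output for Pages
//
// The OpenNext direction discussed next targets the Node.js runtime output of
// Next.js instead of the Edge runtime, via a community adapter (assumed name):
//
//   npm install @opennextjs/cloudflare
```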
And we just started hacking on it. We just started saying, okay, what can we get working? [00:19:00] What can we experiment with? And we'd been familiar with the OpenNext project for some time, and I thought that the broader goal of trying to make sure that the framework could run on any platform was something that we could get behind. Cloudflare has really always thought a lot about avoiding platform lock-in; we've made great inroads on egress fees with other platforms with the Bandwidth Alliance. So that's a principle that we care quite a bit about. We talked to the folks at OpenNext; we want to make this a community initiative. PRs are welcome, feedback's welcome, anybody can contribute. I know there are other platforms that are trying to figure out what their adapters might be for this too.

Noel: Totally. Let's see, I feel like we've covered a lot here. I wanted to talk a little bit about both storage and key value. Is there one of those you'd like to start with?

Brendan: Ooh, I don't know, this is a tough choice, but Workers KV is near and dear to my heart, and I was really excited [00:20:00] about this announcement that we made a couple weeks ago. So Workers KV is the distributed key-value store that's built into the Cloudflare Workers platform. It's actually one of the things that's been around the longest; it came out not too long after Cloudflare Workers itself. We announced some updates to it that made reads to Workers KV up to three times as fast. I think somebody had a good tweet about this: when you're making something that much faster, when it's already quite fast, you have to do quite a lot. And so we did a deep dive blog post that outlines some of the changes we made and how we worked through some of the internals. But the reason that we're excited about this is that so many of our own products, and so many of the products that our customers have built, rely on Workers KV. So when we make this faster, it's not just about making this one service faster, but about making a meaningful amount of traffic on the internet faster, including serving [00:21:00] static assets from Cloudflare Workers and Cloudflare Pages, because that's all backed by KV.

Noel: Is there any insight into the technical improvements that have been coming out recently that you'd be able to give? Open the hood a little bit?

Brendan: Yeah, so I guess there are two things that I would point to. Workers KV makes use of Cloudflare's tiered cache, and tiered cache is not a concept that I was actually super familiar with before I joined Cloudflare. But the idea is this: imagine Cloudflare has a network of data centers all over the world. You could cache content when a request comes into a particular Cloudflare location, but then it's just cached there. What if, when you were making that request to the origin, say, wherever that key from Workers KV is stored, you went through a tier, a series of different locations that each had their own caching? So maybe something is a cold read at the start, but it gets [00:22:00] cached in one of those intermediary layers. Then a request comes in from a totally different Cloudflare location, and before it hits the actual origin, which may be further away, it hits that intermediate cache and comes back. And so tiered cache is really powerful, and we baked it in in a way that you don't have to configure or think about when you use KV.
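For readers who haven't touched it, a hedged sketch of basic Workers KV usage; the binding name is illustrative and the cacheTtl hint reflects the docs of this period:

```js
// Hedged sketch: reading and writing Workers KV from a Worker.
// Assumed wrangler.toml:
//
//   [[kv_namespaces]]
//   binding = "CACHE"
//   id = "<namespace-id>"
//
export default {
  async fetch(request, env) {
    // cacheTtl hints how long the read may be cached close to the Worker.
    let value = await env.CACHE.get("greeting", { cacheTtl: 60 });
    if (value === null) {
      value = "hello from the origin";
      await env.CACHE.put("greeting", value, { expirationTtl: 3600 });
    }
    return new Response(value);
  },
};
```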
And then the other piece that we worked a lot on, and we get into this in the blog post, is some of the different layers of Cloudflare that we looked at, where we said, okay, we don't actually need to route traffic back through part of Cloudflare's reverse proxy in order to serve a key from KV. It's this kind of thinking from first principles about how this should be built so that it can be maximally fast. And I'm just really excited. We talk to customers all the time who are very performance sensitive, and it's great to see the results that they see on their end.

Noel: Is there anything customers, users, have to do to get any of these KV performance [00:23:00] increases? Do they have to enable tiered caching or anything like that?

Brendan: No, and it's actually fun. Before we announced this, people were waking up in the morning and going, oh, everything's just faster. And we would drop some breadcrumbs on Twitter; teams at Cloudflare would share their graphs, and things were just totally down and to the right, in a good way.

Noel: Nice. Awesome. Very cool. I did want to talk Hyperdrive a little bit. Could you talk about what it is and how it fits into everything we've been talking about here?

Brendan: Hyperdrive is the way that you connect to a database from Cloudflare Workers. So if you're connecting to Aurora or RDS on AWS, you're connecting to something that you're already bringing to the table, Postgres, et cetera. And you might ask, why do you need a separate thing? Why does it have to be called Hyperdrive in order to do that? The answer is, if you think about what a Cloudflare Worker is, it's this stateless serverless function. And if you think about connecting to a database, the first thing that has to happen is the client opening a database connection. There's some degree of overhead to that, [00:24:00] and databases can only hold a certain number of connections open at a given time. You wouldn't want to design an application where every time a new API request came in, you created a new database connection and had to negotiate that handshake back and forth. So what Hyperdrive does is manage a pool of connections that's separate from your Worker, and it can cache read queries. What you get is connection pooling, and you get faster reads, and you don't actually have to think about it a whole lot. You provide a connection string, just like you would to any normal database driver, and you're off and running.
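A hedged sketch of the flow Brendan describes: bind a Hyperdrive config and hand its connection string to an ordinary Postgres driver. The binding name and driver choice are illustrative:

```js
// Hedged sketch: querying Postgres through Hyperdrive's connection pool.
// Assumed wrangler.toml:
//
//   [[hyperdrive]]
//   binding = "HYPERDRIVE"
//   id = "<hyperdrive-config-id>"
//
import postgres from "postgres"; // a standard Postgres driver

export default {
  async fetch(request, env) {
    // Hyperdrive hands back a connection string; pooling and read caching
    // happen behind it, so the Worker itself stays stateless.
    const sql = postgres(env.HYPERDRIVE.connectionString);
    const rows = await sql`SELECT now() AS time`;
    return Response.json(rows);
  },
};
```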
We had this challenge with Hyperdrive, though, which is that so many people have existing databases that are within a VPC, or a private network of their own. And it's generally not good practice to expose your database to the public internet. You can imagine all kinds of things that people would be very scared about if you did that.

Noel: Yeah. Even when you're trying to configure it properly, it's always a little bit of a nail-biting moment, yeah.[00:25:00]

Brendan: Yeah, exactly. And so at Cloudflare, we already have this thing called Cloudflare Tunnels that lets you connect to private resources. So we took Tunnels and we made it work with Hyperdrive, without you having to configure a whole lot on your end, so that you can connect to an existing Postgres database that you're bringing to the table without your security team banging down your door.

Noel: Yeah. That was actually my exact question, because I love Tunnels. I use them for a bunch of stuff on my home network to expose very nice, thin slices of stuff to the internet, so I don't have these same problems; I don't want my home network publicly exposed. And that was actually my question: how does this differ from me spinning up a tunnel and just pointing it at a database internally? I guess you noted the caching, but is there any other additional magic here that makes this [00:26:00] more desirable than tunneling manually?

Brendan: I think really it's that it connects back through Hyperdrive and integrates with the idea of being able to open a TCP socket directly from a Worker. I think that's the main thing there.

Noel: Yeah. I feel like that's probably enough, paired with the caching. That makes it the tool to reach for. What do you have to do on the receiving end, the piece inside the VPC?

Brendan: For this one, there are some great docs we wrote up for how to get this going, as part of the Hyperdrive docs. You log into the Cloudflare dashboard, and you'll be guided through creating a new tunnel, and you can add that to your configuration. It's relatively straightforward; it's a takes-five-minutes type of thing.

Noel: Nice. No weird network requirements or anything like that? It's pretty simple?

Brendan: No, we're always striving to keep this stuff simple. We realize that most developers don't spend all day long in a cloud management console, nor should they; everybody should spend their time writing code. My background is actually [00:27:00] mostly in front end software development. I've done a fair amount of full stack things, but part of how I think about what we're trying to build is, I want to build things for the me of 10 years ago, who was logging into some of this stuff for the first time. How do you configure any of this? How do you get going?

Noel: Yeah. I guess to that note, is there anything else you wanted to touch on, or anything you recommend new devs check out when they're starting out, dipping their toes into Cloudflare? What's the best bang for their buck?

Brendan: The best bang for the buck? Like where and how to get started?

Noel: Yeah, not even a literal buck, but say they're looking to deploy something to Cloudflare, and they're looking at what tools might be helpful for their getting-started, hello-world web apps. What might you recommend?

Brendan: I think everybody right now is building something with AI, and we have a lot of fun tutorials on how to get started with Workers AI, which is [00:28:00] our hosted serverless AI inference platform. A lot of them walk you through, you've probably heard about RAG, and everybody's building RAG applications, how to do that with Workers AI, Vectorize, and D1, and how to get going with some simple building blocks there. That's probably where I would start if I were building something right now, just because, honestly, it's 2024 and that's what everybody is building.
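A hedged sketch of the Workers AI starting point Brendan recommends; the binding name and model identifier are assumptions picked for illustration:

```js
// Hedged sketch: serverless inference with the Workers AI binding.
// Assumed wrangler.toml:
//
//   [ai]
//   binding = "AI"
//
export default {
  async fetch(request, env) {
    const answer = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
      prompt: "In one sentence, what is a Cloudflare Worker?",
    });
    return Response.json(answer);
  },
};
```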
Noel: Nice. Cool. Yeah, we'll be sure to have links to the blog posts and everything in the show notes. Is there anything else you wanted to touch on quick? I know we've covered a lot in our 30-odd minutes already.

Brendan: What would be interesting is, we should talk about Node.js compatibility for a second. Because this has been, honestly, a longstanding pain point with Cloudflare Workers: people have existing NPM packages that they want to make work on Workers, and you get some kind of cryptic error saying that a package isn't supported. We've been working for the past while on improving our compatibility mode with Node.[00:29:00]js, and we recently shipped a big revamp of it that combines native APIs, provided directly by the Workers runtime, with polyfills that are part of a project called UnJS. UnJS is this cool open source project that tries to provide a kind of compatibility and standardization layer across a bunch of different runtimes, so that as a developer, you don't have to think about which of these things you're targeting, and UnJS will just fill in the gaps for APIs that a platform may not natively support. We're really excited about this. If you've tried our Node.js compatibility on Workers before, you should give it another spin. There are docs on all of this if you go to the Workers docs. But it really dramatically increases the number of Node.js APIs that are available, and a whole slew of new packages work. So if you've maybe run into challenges with that in the past, I'd encourage you to give it a try.

Noel: Nice. Was that the biggest kind of change under the [00:30:00] hood that you guys had to make to get this working? Is the hurdle right now just getting compatibility for those underlying Node APIs?

Brendan: So it's interesting. One part of it is that there are sets of APIs that unequivocally should be implemented directly in the Workers runtime, either for performance reasons or just because they're not possible to do well and to spec if you did them externally. And then there are some things where, if you think about them in the context of a serverless function, something that's stateless, there's actually some ambiguity about what the behavior should be. Take an API like fs. What do you mean, a file system? I just uploaded this function; what is the file system? And so you could imagine ways that we might think about how fs should work in a serverless function. Maybe it connects to an R2 bucket. There are all kinds of ideas, but it's not necessarily something that's solidified yet. But you have a package, and that package may require fs. If fs, the whole module from Node.js, is not available, [00:31:00] that just explodes in your face. And so we had to solve that. What we did was say, okay, there are these APIs that we're going to provide stubs of via UnJS. And just the fact that those stubs exist, even if not every API method is yet implemented, means two things. It means there are packages where, if they import that dependency but don't use it in the code path you're using, things actually work and you have a path forward. But it also means that if, let's say, you call readFileSync from fs and that's not implemented, you get a specific error, one that says, oh, it's that method. And with that error, you can do a whole bunch of things. First of all, if you hit something like that, you should file a GitHub issue. You should yell at us. You should be like, oh, you should fix this.
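A hedged sketch of turning on the revamped compatibility mode, plus the module-aliasing escape hatch Brendan describes next; the shim path is made up:

```js
// Hedged sketch: Node.js compatibility plus a module alias.
// Assumed wrangler.toml:
//
//   compatibility_flags = ["nodejs_compat"]
//
//   [alias]                      # the escape hatch discussed just below:
//   "fs" = "./src/fs-shim.js"    # swap fs for your own shim (hypothetical path)
//
import { Buffer } from "node:buffer"; // resolves natively under nodejs_compat

export default {
  async fetch() {
    return new Response(Buffer.from("aGVsbG8gd29ybGQ=", "base64").toString());
  },
};
```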
But the second, maybe more important thing that's in your hands is that you can use module aliasing, which is built into Cloudflare Workers via [00:32:00] Wrangler, to say: anytime the bundler sees readFileSync, I'm actually going to replace it with my own function. So it lets you provide your own shims if you need to reach a little bit deeper and say, I really need this NPM package that was written four years ago and never updated. I don't know, maybe you should use a different package, but that's a different question. You can do what it takes to get something to work, and we're trying to put that power back in people's hands.

Noel: Nice. Yeah, that's awesome to hear, and I feel like those are both real use cases. Just having better error messages and knowing what's going on will probably be very enlightening, because I can recall instances where I've gone in and tried to install something, and it's failing, and fs is a good example: I'm not using anything here that touches it, but the package imports it, and it's, okay, I've got to find a different package to use if I don't want to pull my hair out for an hour.

Brendan: Oh yeah. As a product manager, there's nothing that makes me happier than really good error messages. It's something I'm telling my team all the time. There are so many things. I'll leave it with this: we're actually not [00:33:00] yet done for the year announcing fun things, so definitely watch this space. There's more coming soon, and our biggest challenge this year was just fitting everything into this one week. Lots more to come around data and working with data on our platform. But I don't want to spoil the surprise.

Noel: Yeah. Perfect. Awesome. You can't spoil it all. Again, Cloudflare stuff is near and dear to my heart, so I'm excited, as excited as one can be, I think, to see what's coming. Thank you so much for coming online and sharing with me and chatting today, Brendan.

Brendan: Awesome. Thanks, Noel. It was great to be on the podcast.