Void0 Evan You === Evan: [00:00:00] Silence. Josh: at logrocket. com. I'm Josh, and today we have Evan Yu, creator of Vue and Vite, here to talk about Void0. Evan, would you like to introduce yourself real quick? Evan: ~Sure, uh, ~hi, my name is Evan Yill and,~ um, ~I am based in Singapore and I have been an independent open source developer since 2016. And last year I started a company called VoidZero. ~Um, yeah, ~before VoidZero, I was mostly working on open source projects like Vue and Vite. ~And yeah, so, uh, ~last year I started,~ um, ~getting into the startup world and,~ uh, ~founded a company ~and yeah, ~and now I'm here to talk about it. Josh: That's very exciting. Can you share the inspiration behind founding Void0? Evan: ~Um, ~I think the idea is,~ uh,~ largely being two parts. The one [00:01:00] is I, as ~you know, ~an open source developer, I've been working on the framework and. Initially, I was only working for things that run in the browser, but as Node. js and the framework scene developed, I started to have to dig into the tooling layer for my own framework, and Worked on that over the years. Eventually,~ uh,~ at one point, I was like, there should be a simpler way of doing this, which led me to work on Vite. And as I was working on Vite,~ um,~ again, as I dig deeper and deeper into the stack, I just,~ um,~ slowly came to the conclusion that, ~you know, um, the, ~the, there is a lot of, Unfortunate inefficiency in the JavaScript tooling,~ uh,~ the status quo,~ um,~ that I think is a result of the natural evolution of the ecosystem without central guidance,~ uh,~ which is,~ um,~ good in a way because it allowed all the creativity and,~ uh, you know, ~different ideas in ecosystem who ~kind of ~just compete and,~ uh,~ Have all kinds of [00:02:00] different solutions. ~And ~but I think as the ecosystem matures, it's at a point where people start to want stable and mature tools and they want convergence to some extent, ~right ~so that they can focus on innovation in the areas that actually matters instead of dealing with a, ~you know, ~fragmentation and reinventing the wheels,~ uh,~ time from time. ~So, um, ~and I think there was a trend that I witnessed ~and, uh, ~In the wide adoption of VITE, because,~ um,~ initially when I created VITE, I meant it only for VUE, really, but halfway through, I realized it could actually support other frameworks, because a lot of the things were You know, the frameworks are doing are starting to get more and more similar, especially at the tooling layer, right? ~Uh, ~and I think a lot of the other framework authors agreed, which is why quickly one after another, a lot of these meta frameworks started adopting VEJ as their default. ~You know, ~the base layer of their tooling and a lot of the pure front end framers started defaulting to Vite as, ~you know, the, the, ~the tooling that they recommend to their [00:03:00] users. ~So, um, ~there is a level of convergence I myself have never witnessed before in the JavaScript ecosystem. I think that's a good opportunity to ~sort of, you know, ~capitalize on this, ~you know, ~momentum and maybe push a bit further. And try to come up with something that could unify more layers beneath what Vite is currently sitting at. ~So, ~but in order to do that, it does require a lot of effort from people who are fully dedicated to doing this on a full time basis. ~So, ~so far the projects I've been working on like Vue and Vite are all independent and the funding model is not really ~You know, um, ~enough to sustain, say, a team of fully dedicated engineers to work on it all the time, right? So a lot of our contributors in previous projects are either.~ You know, ~volunteers or part time. ~So, um, ~so the company is really ~kind of the, you know, my, ~my attempt of making this a reality because ~I, you know, ~I decided it's time to try something different too, [00:04:00] but the end goal is always ~being ~the same for me because,~ uh, ~the whole reason I work on these open source projects. It's really, I want to,~ uh, ~see more JavaScript and web developers succeed. And I think better tooling helps them do that. ~Uh, ~and that's always been the reason I work on frameworks and tools. And,~ um, so, ~I think ~the, uh, ~the starting point of Void Zero is no different. It's just we are aiming for something bigger, more ambitious. And also, at the same time, hoping to ~Uh, you know, ~establish a, ~you know, ~sustainable business model that can support these tools and make sure they're here to stay, make sure they're free and ~an ~open source for the long term. Josh: That's a great problem statement and start of the solution statement. So continuing, what is exactly void zero? Evan: ~Um, ~so Voice Hero is just the name of the company. Right now we are focused on building a collection of open source projects. ~Um, ~so ~we, um, ~if you go to the company website, you'll see that we list four projects currently. So there's VITE. VTest,~ uh,~ OXC, and Rolldown. ~Um, ~so the relationship between these [00:05:00] projects and the company,~ uh,~ can be a bit intricate because,~ um,~ We don't want to really say that Void0 completely owns Vite,~ uh,~ and especially Vitest, right? So Vitest is a kind of a spawn off project that's very closely related to the Vite team, but,~ um,~ we, the company employs multiple core team members of both Vite and Vitest. And,~ uh,~ of course, we have, ~you know, ~a lot of control over the direct direction of these projects, but we also want to make sure the governance of these 2 projects remain,~ uh,~ the team based governance as before. So there are team members that are either employed by other companies or not. ~You know, uh, ~sponsored, but not part of voice zero, right? ~Um, ~so their opinion and contribution is still important, whereas on the lower layers, which are more sort of company owned company driven are,~ uh,~ OXC and row down, which are the rust parts that,~ uh,~ we intend to use to support both V2 and VTest down the road. ~Um, ~so OXC [00:06:00] is probably the lowest level tool chain. That starts from a JavaScript parser that's written in Rust and starting from the parser, there are also components like a linter, a formatter, minifier. ~Uh, ~transformer that can transform TypeScript, ASX, target lowering,~ uh,~ pretty much,~ uh,~ anything that you can think of when it comes to dealing with the JavaScript language processing. And,~ um,~ on top of that, Rowdown is the bundler that is built on top of OXC, leveraging all its parts, and will eventually support Vite and VTest in the future. Josh: This almost feels like a sort of vertical integration where you've seen a lot of smaller companies taking over small parts of the space, but you have a unified vision as you've described it for the entire stack. ~Okay. Okay.~ Evan: stack,~ uh,~ is in that you will have one consistent AST, and you can do as much as possible,~ uh,~ in the same ecosystem, and making sure they're all using the same consistent configuration, the same AST format, the same [00:07:00] path resolution logic. ~So, um, ~I think one common example in the past is, for example, if you're using. Webpack for your application bundling, but you're using just for your unit tests. You essentially have to configure both,~ uh,~ file transforms and path resolutions with two completely different systems and making sure they work the same. And there will be inconsistencies, right? And you have two different configurations that you have to just keep in sync every time you make a change to your build setup, which,~ um,~ can be eliminated if,~ uh,~ the tools are just designed to work together. Josh: there any other major problems or inefficiencies that you're excited about having the unified stack resolve? Evan: Yeah, so performance is obviously a very big part of it. Just,~ uh,~ imagine in Vite today,~ um,~ if you're building a React app and you want better performance, you likely will be using the swc based Vite React plugin. ~Um, ~so in Vite itself, we also rely on esbuild and rollup at the same time. ~So, ~when you build a Vite application today,~ uh,~ with the swc plugin, [00:08:00] you're actually processing your code probably four times with different parsers. ~Uh, ~the first time you would ~pass it through, um, ~pass it through SWC for the transforms, then,~ uh, it's parsed by, it's,~ it's parsed, transformed, serialized back to string, and the data also goes between JavaScript and Rust and back to JavaScript. Then it's parsed through Rollup, which Rollup also uses SWC's parser, which is,~ um,~ Also, a separate binary copy, so it gets sent to Rust again, parsed again in Rust, sent back to JavaScript again. And now, it gets processed, the AST gets processed ~by, ~by rollup, and bundled, and then made into chunks of strings. ~Um, ~all the while, ~you know, ~you also have to do source maps for all these steps. And then finally,~ uh,~ you have the source maps and the code generated and,~ uh,~ we use ES build ~for the, um, ~for the target lowering transforms and minification. And if ~you, you, you know, ~in some cases, ES build minification is not as,~ uh, it's, it's, ~if minification quality can be not as good as sometimes Tercer or,~ uh, ~[00:09:00] SRC minify. So some users would opt to use a different minifier. So that's another pass of, ~you know, ~the final bundle gets parsed and minified again with a different AST. So you can see how much duplicated work we're doing here, just because each step we opt to use something different. And in this case, there are unfortunate. Reasons why we have to use yes, but a rollup. Um, I can, I can go in, go a bit into that ~if, uh, ~if you want, but,~ uh, ~long story short,~ um,~ if ~we can, ~we have the option to have one solution to do all these tasks with one AST without going back and forth between Rust, JavaScript, and serialized the ASTs all the time,~ um, ~we will do it. And that's exactly what we're trying to do with Rolldown. Josh: Let's say that you do succeed, that you've built a next gen unified tooling stack with roll down and so on. What does that look like for an end user writing an application with your tools? Evan: ~So our, and I, uh, ~so ideally wrote down would sit at the library bundle level, whereas Vite [00:10:00] sits at the ~application support, uh, ~application build tool level. So if you're building an SPA,~ um, again, Vite, you know, ~Vite was designed to be. Working out of the box for most ~of the ~use cases, right? ~So, um, ~in a way, Vite is already ~kind of ~providing the DX people love,~ uh, ~but the internals were not as efficient as it could be, ~right?~ ~So, but then ~Rodan would also Give people a lot of ~nice ~niceties. For example, if you're building a library,~ um, ~Rodan would be able to just follow your TypeScript code base with close to zero configuration. ~And ~in the future, it also comes with,~ um, if you use ~isolated declarations, ~it ~will ~also ~be able to emit the DTS and ~bond with ~DTS in the same path as ~you're bonding ~your TypeScript source code. ~Um, so ~our hope is that there would be a ~very, um, ~Very standard way for people to say, if I want to build a web application, I shouldn't have to think too much about setting up a built, a built tool chain. Just, ~you know, ~by using VITE, I get most of the problems standardized and solved with best in class options. Similarly, if I want to [00:11:00] bundle a library,~ uh, ~Rodan should provide all of that. ~And ~more importantly, I think the solutions should not be bundled to a specific runtime. ~So ~whether you are building with Deno ~or ~Bun or Node. js,~ right, ~you should have the same options available to you in all those cases. ~Um, so ~we do want to have ~the, you know, ~a consistent unified. Develop the experience for JavaScript developers. And I think more importantly,~ um, ~on top of VITE,~ what, ~what VITE provides today,~ right, ~there are other concerns in your development life cycle, like LinkedIn formatting, unit testing,~ uh, ~I think these things. ~Um, ~would also benefit if they all use the same, ~you know, ~AST format and have the same path resolution. ~So, um, ~imagine they can understand the same configuration file if you want to alias things in your code base, right? There's one central source of truth that just works across all these concerns. Instead of you having to teach each tool to understand how to, ~you know, ~understand your code base. I think we're [00:12:00] still quite early, because in a way, a lot of the work we've done up to this point is ~sort of ~catching up with what these individual set of tools are able to do all the way. But I think there are a lot of things once we reach the feature completion phase, there are a lot of interesting things we can do on top of that. Josh: ~Mhm. ~Before we dive into the long term, how you compare with other efforts and the open source model you have, all of which I find fascinating. I want to talk a little bit about that note you made on supporting different environments. For example, Dino node among others. You mentioned earlier that there are a lot of different ways. Of doing things in JavaScript now, there's, ~you know, this, ~the standard node app, and then you have GitHub actions and expo and react native and all these other different builds targets and environments. And it seems like every single project out there has its own bespoke config to deal with its nuances. ~So, ~how do you balance being able to be a good experience out of the box with integrated what people want versus being able to also support the myriad of different [00:13:00] outputs that every single project seems to differ on? Evan: So ~the ~that effort comes from a different angle, right? We can't obviously, ~you know, ~just immediately wrap or change the way some of the established tools work. ~Um, ~but I think by ~having these, uh, ~having our tool chain being composable. Individual components, ~right? We, ~we create a strong incentive for them to consider adopting our tools internally. Like we actually have been in ~the ~talks with. Multiple tools you've mentioned,~ uh, ~I can't really name specifics right now, but,~ uh, ~there's strong interest in established projects in adopting OXC as, ~you know, ~the basis of their new, ~you know, ~AST handling or adopting Rodan as a bundler, uh, for certain things, I think it's all possible if, ~you know, ~we, ~you know, Do ~end up shipping the ~best, ~best in ~class, ~class solution in every aspect and also make them easily embeddable as components, ~as you know, ~as either rust crates or NPM packages, that's easily consumable [00:14:00] and people can build things on top of it. Then there's a good opportunity for it to become the industry standard where,~ uh, you know, ~we can further remove those inconsistencies between these ~sort of kind of ~more,~ uh, ~monolithic systems. ~Uh, ~right now, a lot of them are using incompatible underlying tools. ~And I think something, ~basically what we want to see happen is what Vite is doing to the frameworks, ~right?~ A lot of frameworks previously were using different internal bundlers and configurations. ~Um, ~but now they've converged on Vite and that creates a lot of benefits because there are a lot more interop users of these different frameworks can now share plugins and they have a lot of mental overhead when they switch frameworks, ~because the underneath.~ ~Uh, ~those tools are much similar, right? I think at one point we do hope the voice 02 chain can have a similar effect on the job for ecosystem as a whole, where, for example, if,~ uh,~ say one day. React Native apps can be built with Rowdown. I think that would be really great. ~Um, ~and~ know, ~there are also possibilities where,~ uh,~ JavaScript runtimes,~ uh,~ could leverage the tools we've built to [00:15:00] enhance the development experience that ~they, ~they're trying to do, right? ~So, uh, ~I think there's a good opportunity for this to happen, but that's built on the premise that we do deliver the ~best, ~best in class tools. Josh: Sure. Let's talk about that a little bit then how you're going to ~develop or ~develop these tools. ~Uh, ~you have a very unusual model for a seed founded startup, which is that you are primarily based on open source software free open source. How do you balance that with the need to make money long term? Evan: ~So, um, ~this is the part people ask me quite a lot. ~Right. So, um, ~I guess what we can tell people right now is, we do have plans, and the line is going to be drawn pretty clear. ~Mm hmm. ~Everything we build as open source will remain open source. ~Um, ~we don't ever intend to do rug pulls where ~like ~one day we changed license and started charging you for the things you've been using. ~Right. Um, ~so the idea is there will be services, likely web services that's associated with the tool chain that we've built and the tool chain serves as a funnel for those services. Right now, I can't [00:16:00] really get into too much details on what exact services we're going to build. But,~ uh,~ that is the general direction and,~ uh,~ the services will be a bit further down the road, right? Right now, we're focused on building the open source tool chain to make it so our, I think the monetization premise really depends on that. The tool chain that we've built really becomes Almost industry standard where it's widely adopted everywhere. ~So, uh, ~the conversion and when it serves as a funnel and conversion to the services we're building, it will have a ~huge, ~huge base, ~you know, ~amount of users that we can convert from, Josh: Sure. Speaking about project governance, though, Even if you don't change licenses or take over projects altogether, you still have a strong influence on these projects. For example, if V were to need a feature to support a void zero use case, it's likely that your company would be able to prioritize sending resources for that feature. So what, if any steps do you think you'll be taking to ensure community governance stays with these projects, or at least doesn't get swamped by [00:17:00] void zero or similar? Evan: I think the governance largely also depends on who is actually funding the development. ~So, uh, ~in a way I think community governance in open source has always been, ~you know, ~If you want more influence on the project, you need to be the one doing the work. That's always been the reality. ~So, ~in a lot of cases, just being vocal about what you want to happen ~doesn't really make it a, you know, a reasonable I guess it ~doesn't really create the kind of level of influence you'd expect. You'd hope it would have. And I think that's the reality for open source for ~every, ~every project that's in practice, right? It's always the people actually shipping the code, doing the contributions who have the largest influence on it. I think in a way, what we can do is making sure that voice zero,~ uh, you know, ~when we deliver the work, we do it in a balanced way that, ~you know, balance in a balanced way that, um,~ Some features might be directly related to monetization efforts with 4. 0 in the future, but the bottom line is we wouldn't do it in a way that would hurt ~the, um, hurt the, uh, ~the users ~who've been using the open, you know, the, ~who've been using the open source projects just for free. I think one [00:18:00] important thing is. We don't really want to do open core because that forces you to draw a line between what goes into the paywall and what does not. And I think,~ uh, ~the line is better drawn at, ~you know, code, uh, ~code that runs on your machine is open source and free. But,~ uh, ~if the moment you want to use something that runs in the cloud, or, ~you know, ~you want to have,~ uh, ~monitoring or insights or continued analysis, ~right. ~Those kinds of things will cost you money. Josh: And you're going to be some of the, if not the, best positioned people to write those analyses to be able to understand how to, say, aggregate, build blogs or whatnot to gain those insights. Evan: Yep. ~Um,~ Josh: a roll down powered Vite alpha release,~ uh,~ either very soon or as a,~ uh,~ release ~released. ~Can you tell us a ~little~ bit about that and why it's exciting? It Evan: Yeah. ~Um, ~so it's actually already,~ uh,~ there is a,~ um,~ working progress branch and there are continuous releases that set up. ~So, um, ~we have been able to use that version to actually run the starter SPA templates that you can scaffold with create beats. And [00:19:00] we can even use it to run power beat press and build the documentation site of media itself. So in a lot of ways, it's quite usable. But,~ um, one of the ~one of the reasons we don't really consider it stable yet is because wrote on itself still needs a bit of work. ~Um, ~I think the biggest chunk of work left as of now. So in a way, wrote down is 1. 0. Beta is ~kind of ~synced with wrote on these 1. 0. Alpha in a way, right? Cause the stability of row down, row down powered feed relies on the stability of Rodan itself. ~Uh, ~right now the biggest chunk of work is really in the alignment between Rodan's edge case handling between,~ uh,~ between Rodan and ES build and between Rodan and rollup. ~Um, ~because one of the goals when we. Decide to work on row down is we want to unify the bundler that's used between development and production for better consistency. And I think the challenge in that is ~roll up and ~roll up and yes, build. Do have quite a bit of behavior [00:20:00] differences ~in ~in some of the edge cases, right? So right now ~we're trying to, uh, ~we're enabling the test cases for both projects and run roll down against them. We're not aiming for 100% Same outputs, but we essentially look at each test case and try to see what is the behavior it is trying to assert and whether it's really related to the correctness of it, or is it just asserting a very specific behavior ~that ~that this specific bundle decides to do. So we are essentially filtering both test suites. Down to the ones that are directly related to behavioral correctness and try to align with them as much as possible. ~Uh, ~so this ensures when we swap out ES mode and roll up with row down into Vite existing applications will, ~you know, ~encounter as few edge cases as possible. So that's quite a big chunk of work that we are, we're still focusing on right now. Josh: ~sounds like a lot. ~I'm curious also, do you have any plans to release the Vite alpha Five, 10 years ago, when a lot of the previous generation or its [00:21:00] predecessors were being developed, you had an even larger swath of people writing weird and wacky code to support, ~you know, ~Internet Explorer 11, even IE 10 or older, or other now roughly unheard of environments. Do you think that the bundling,~ the, ~the roll down area, the building area is getting easier to deal with over time now that we don't have to say support IE 10? Evan: Um,~ like, I think, ~I think legacy browser support removes part of the need, for example, in the syntax lowering transforms. Right now ~we only, we're only working in the, at least in the current milestone, ~we're only aiming for ES2015, down leveling to ES2015,~ uh,~ cause the need for down leveling to ES5 is quickly diminishing, right? So that's a good sign. But I think ~for, ~for bundlers. There are two aspects of it. The first is ~the, ~the need for bundling itself doesn't seem to ~diminish, ~diminish anytime soon. ~Uh, ~there will be some people claiming that you ~don't ~no longer need to bundle.~ Uh, ~with HTTP ~two, ~but in practice, that's just not the case. If you really care about [00:22:00] performance, ~right. Um, ~I think ~there are, ~there are things about ~like ~nested network waterfalls and just the plain overhead of. Each HTTP request comes with all the cost of headers and, ~you know, uh, just, ~just like the amount of headers and cookies that comes with each request can quickly overwhelm the browser. And one thing we've discovered with VEED is even on a local dev server, when you have several thousand modules, which is not uncommon in today's front end. ~You know, ~scale of development. ~Um, ~the browser just struggles to handle ~all the, ~all the requests in parallel, and it just creates a bottleneck of your page loading performance. ~Right? So, um, ~Bundling still is the most effective way to ~sort of ~address that. And on the other hand,~ uh, ~there's better compression when you,~ uh, ~bundle things together,~ uh, ~instead compared to individual models. ~Uh, ~I think the argument that you can just ship things on minified and uncompressed ~is. ~Works ~to ~only ~to a ~up to a certain scale, right? ~So, uh, ~but not everyone's building medium to small size applications. There are definitely,~ uh,~ larger [00:23:00] apps that also need to deal with performance. And I think,~ uh,~ you can't really constrain your,~ uh,~ your application size just because of your technical choices. So the bundler really is ~kind of ~a way for you to,~ uh,~ unlock that level of skill that you have to deal with. ~Um, ~and another aspect of it is ~the, um, sorry, I kind of lost my track of thought here. Uh, there's another thing I wanted to mention regarding, uh, the complexity of bundling. Ah, ~the model formats, right? ~So, uh, the, one of ~the biggest source of complexity in bundling is probably the interop between ESM and CJS. And historically, there's also like AMD and all that. ~I, I, ~ ~ I guess, ~luckily few people use AMD nowadays,~ uh, ~and it's more of a runtime construct now. ~Um, so ~unfortunately I think CGS is ~kind of ~here to stay. at least for the next couple of years. ~Um, ~the Node. js ecosystem is just so deeply entrenched in this sort of historical debt. ~Um, ~unfortunately,~ um, ~Vite has been very ~strong, ~strongly pushing towards pure ESM in a lot of ways. ~Um, like in the, ~we're probably the first build tool to force users to write pure ESM for their own source code. We only allow [00:24:00] ESM in user source code, and we only support CGS in dependencies. ~Um, ~unfortunately, In order to make things work,~ uh, ~especially in the existing application that's being, it's being developed for a while, and they want to migrate to newer generations of tooling, right? You can, it's easier said than done to just take your dependencies and migrate to modern ESM alternatives, right? It's,~ um,~ I think it's a good effort. There are multiple people trying to do that. They're trying to publish modern ESM alternatives of, ~you know, ~these legacy CGS dependencies. But,~ um,~ in reality, I think a lot of applications people are building. Had are stuck with old dependencies that are no longer being updated and they don't really have the capacity to port or rewrite those dependency themselves. And in a way,~ um,~ Bundlers is still ~kind of ~the,~ um,~ have to take on the responsibility of just handling that interop for them and making sure those dependencies can continue to be used. ~Um, ~I think a good sign is that Node. js ~is ~now has [00:25:00] required ESM. So I think part of the reason why CJS is ~so, you know. ~Resistant to and ~kind of ~just like stick surrounding the ecosystem is because Node. js ESM transition has ~kind of ~a some design issues. For example, the require ESM,~ uh, ~decision, I think could have been better addressed if,~ um,~ if from the first time Node. js supported ESM, it supported require ESM from the get go, it would make the transition much, much easier. ~Uh, ~and for that reason, a lot of packages, for example, Vue still have to ship CJS,~ uh, ~when it's expected to be consumed in Node. js because, ~you know, ~If you ship both CJS and ESM, then it's very likely you'll result in a dual module hazard in some way. ~Um, ~and sometimes even resulting in multiple copies of you, like one CJS copy and one ESM copy appearing in users bundles without the user even being aware of it. Um,~ so, ~Now that Node. js supports require ESM, [00:26:00] I think it's finally allowing a lot of packages and modules to ship pure ESM and transition to pure ESM. ~Uh, ~unfortunately, that also ties into the organizations. Node. js version upgrade cadence, right? ~So, uh, ~I think require ESM is backported to node 22, but,~ um,~ for all the organizations to ~kind of ~upgrade to node 22 as a bottom line, I think that will take at least one or two years. And then that's the moment when people will finally say, okay, I can move my package to pure ESM and that'll take another few years. ~Uh, so. ~Up to until that period, I think the bundlers will still have to play the role to handle the interop. ~Yeah,~ Josh: the one level of packages you described, and then another couple of years of flexibility for the packages on top of them. So we're talking about a roughly 2030, give or take era for many or most user land packages being ESM only, or something like that. That's quite a long time from now. ~Yeah, fun. One,~ Evan: but ~I mean, ~we do hope the work we're [00:27:00] doing will help accelerate that process if,~ um,~ maybe in a way,~ uh,~ for example, we have considered options like a strict mode switch where,~ uh,~ it would actually identify all the CGS dependencies in your tree and start recommending you to migrate away from them at some point. ~I don't~ Josh: ~There's a, ~there's a whole class of tools, not yet mentioned on void zero dot dev code mods that could integrate quite nicely with that. Have you or the company looked at also adding code mods or other things like that to your support? Evan: I think we do have the, ~you know, ~the base components that's needed to build great code mod tools for JavaScript. In fact,~ um,~ we're actually pretty good friends with the author of ASTGrep. I don't know if you heard of it. It's a great solution. And also,~ um,~ I think ASTGrep ~kind of ~plays a really, Nice row to ~fill, ~fulfill the kind of like fast Rust based,~ uh, ~codemod tool. That's, but it's also general purpose because,~ uh, it's, ~it's using TreeSitter, so it can parse multiple [00:28:00] grammars. And,~ uh, ~it's a bit more language agnostic. It can support multiple languages. ~Um, ~But even just used for JavaScript, I think it's already pretty good. ~Um, ~there might be room for collaboration with AST GREP in the future. Josh: Okay, so we've talked about roll down and a little bit about V, but we haven't focused on V in a little bit. Are there any particular big next steps other than the roll down integration that you're excited about for the V project? Evan: Definitely. ~Uh, ~so Environment API is probably the biggest internal changes that we've had since V2, I think. It's probably the biggest PR we've ever merged. Also,~ um,~ multiple people have worked on it. Um,~ it's a, ~it's a pretty complicated thing, but I guess,~ um,~ the quick gist of it is,~ um,~ previously, one of the reasons a lot of meta frameworks migrated over to Vite is because Vite provided the capability to, Easily run your application in Node. js for server side rendering with,~ uh,~ kind of hot module replacement for [00:29:00] it. ~Um, ~that initial implementation,~ uh,~ so there was a previous API in Vite, in Vite 5 called SSRLoadModules.~ Um, ~which essentially takes an entry point and convert it into something that can run in Node. js and applies SSR specific transforms. ~Um, ~we did that mostly because that was the only use case that we saw at that moment. But,~ uh,~ later on we realized there are different frameworks. There are frameworks that's actually meant to be run not just in Node. js, but maybe in other runtimes as well. For example. In cloud, on cloudflare, if you want your application to run in cloudflare workers, right? ~Um, ~locally, you want it to run in miniflare,~ uh, ~essentially,~ uh, ~so you want to run ~in a, ~in an environment, even locally, you want to run it ~to in ~to the same environment. That's close to your production environment, right? So previously, I think some frameworks that can build to run in Cloudflare workers, but the local development is actually [00:30:00] running in Node. js. And that's create, creates a discrepancy between the local dev environment and the production environment. So you would be accidentally using things that exist in Node. js, but not in Cloudflare workers. And,~ um,~ and similarly,~ like, you know, ~there might be other winter CG compliant runtimes, or maybe you want to run an environment in Deno or in BUN. ~Uh, ~so we actually looked at it and realized, okay, ~like. ~Your same source code in,~ uh,~ in a sort of full stack framework or a meta framework, right? ~Um, ~it actually needs to be transformed and run in different environments. And previously the browser node JS combination was just one specific case for it. They are just actually just different environments there. So we try to abstract this into something that's more generic called environment and API, where now you have. A browser environment, right? So it's specifically saying this, like for this environment, I want my code to transformed [00:31:00] knowing that it's going to run in a browser and it also should output format that's suitable to run in a browser. And for another environment, it might be running in Node. js and that the purpose is SSR, but the environment target environment is Node. js. And another case could be. ~You know, uh, ~the target environment is a worker, but the purpose is also SSR, right? ~So, um, ~environment API essentially allows you to ~sort of.~ More correctly describe this kind of architecture. It also allows you to have multiple environments more than just browser and Node. js.~ Um, ~so this unlocks a lot of interesting possibilities. ~Um, ~and it also because this is a pretty significant change. It's been going through some very long discussions. ~Uh, ~we've had early drafts, RFCs, and we've shipped it as an experimental feature so that we can get feedback from. The framework authors, ~I think, uh, ~the CloudFlare folks and the Remix team are both very excited about it. So we're getting a lot of good feedback from there. ~Uh, ~some of it also [00:32:00] drives the needs that we have in VTest because,~ um, ~VTest also leverages the same API,~ uh, ~to run your modules. ~Um, and in fact, ~part of the environment API work ~is, is kind of like ~backporting some of the things we did in VTest. ~Because ~in VTest ~it's, um. ~It had a sub package called vtnode, which essentially took what, I think it's interesting because vtnode is sort of a fork of what vt did previously on ssr load modules, ~just ~made more generic, ~and ~later on,~ uh, ~vlatmir, who worked on vtest,~ uh, ~suggested that, By ~essentially ~backporting the Vite node logic back into Vite, we can ~make the, ~make the SSR logic more generic, which eventually ended up being environment API. Josh: This speaks to something you mentioned earlier about having composable parts forming a more cohesive whole. You've taken that same concept here to environments, that instead of hard coding or making specific stuff,~ you've, ~you've taken the general idea of an environment and baked it into the underlying builder and framework. Evan: Yeah, I think,~ uh,~ in a lot of ways we do want to make our other tools we built first [00:33:00] composable and second,~ uh,~ we want to avoid them being, ~you know, very ~too tightly coupled to a specific solution or runtime, just ~like ~assuming something is the default and kind of forced back into it, or at least we should provide escape hatches or abstractions allow users to opt into other things. Josh: And one of those users is VTest. So same question for VTest. What are the big next steps you're excited about? The environment API integration, one of them. Evan: ~Um, ~this is probably going to happen later when we actually stabilize the environment API. ~Um, ~but one recent big feature ~in the test ~that the team's been focusing on is browser mode. ~Um, ~so V test browser mode ~is, um. Um, ~was, ~you know, ~the VTest team was able to quickly develop browser mode, mostly because,~ um,~ VTest is based on Vite, right? So when you want to, say, run the test actually in the browser, you can just start a Vite dev server and load those modules and make them run in the browser. ~So, um, ~I think previously,~ Uh,~ one complaint that users had about unit [00:34:00] testing front end code is,~ uh,~ when it comes to DOM APIs, you ~kind of ~have to simulate it with JSDOM or other DOM simulation, ~you know, ~simulating the DOM in the Node. js. ~Um, ~some users have ~very, ~really strong opinions around this because JSDOM is not a real browser, right? It tries its best to stick to standards, but still, it's not a real browser. ~Right. So, um, ~I think ~having ~having a browser mode that allows you to have your,~ you know, ~front end, ~you know, ~browser components tests that actually run in the browser ~is ~is really beneficial. But at the same time, ~being able to. Uh, ~personally, I still, ~you know, ~use Vue test with JS DOM in a lot of cases, because,~ um,~ it really depends on how much fidelity you want,~ uh, ~when it comes to, ~you know, ~being as close to the real environment as possible. ~Uh, ~a lot of the DOM related logic we're asserting our framework level. ~Uh, ~and during ~the quick, you know, ~the development, ~our, ~we want quick iteration for local unit tests, and then we run the end to end tests ~and when we ~before the publish ~in more, you know, ~in a wider browser [00:35:00] matrix to ensure compatibility. Josh: Do you think that long term, because you have such a flexible way to navigate environments, you might build a product equivalent to ~like ~a Playwright or Cypress, like one level higher to address that level of testing? Evan: ~Um. ~I think there is going to be a certain level ~of, ~of overlap, but,~ um,~ I think VTest is still mostly focused as a unit testing tool. So there's a lot less of the sort of browser automation aspects to it, which I think tools like, ~you know, ~Playwright and Puppeteer ~are still, you know, ~really ~provides the, um, ~provides the most value,~ uh, as a pro, uh,~ as a project. ~Um, ~I think. There can be interesting overlaps here. For some users, maybe just using the browser mode of vTest is good enough for their testing use cases. And I think,~ um,~ that's nice, right? But,~ uh,~ definitely Playwright provides a more comprehensive set of tools when it comes to, ~you know, uh, ~for example, simulating user interactions for end to end testing purposes. Josh: That's a different level of abstraction, a different level of user interaction than what VTest typically deals with. Yeah. Evan: [00:36:00] Yep, yep. Josh: Cool. Let's move on to the final piece of the four part puzzle. OXC. Are there any particular upcoming features or big releases you're excited for as part of OdeZero? Evan: Yeah, so right now, OXC,~ um,~ so I quickly, I want to quickly go through the parts of what we already completed for OXC, right? Because it's pretty big roadmap. ~Uh, ~so OXC started as a parser. ~Um, ~it comes with a linter that's already working. ~Um, ~the linter was initially created as a way to ~sort of ~verify and test ~the, ~the robustness of the parser itself. ~Um, and ~Borschen has been working on. For a very long time, quite a long time before he even joined VoyaZero.~ Um, ~the linter is ESLint compatible and ~is, uh, I think ~can also support ESLint 8 config files now. So it has several hundred rules that support it from all the major ESLint rules and major ESLint plugins, including,~ uh, ~ESLint plugin import. ~Uh, ~that's much more performant than ESLink plug in import, because OXC also implements its [00:37:00] own resolver,~ uh, ~in Rust. ~Um, so, ~overall, I think ~the, ~the Linter is probably ~the ~one of the pieces of OXC that's right now mostly directly usable as a standalone piece, if you're interested in just,~ like, ~experiencing the speed. Of Rust based JavaScript tooling. ~Uh, ~there's a certain level of overlap between Biome and this. ~I think, uh, ~in terms of Linter, the design goals are slightly different, as Biome is,~ uh, ~A bit more opinionated in terms of its linked rule set, whereas, uh, OnX linked right now is just more focused on,~ uh, ~a faithful port of Es linked rules at this moment. ~Um, ~so outside of the link, there's also ~the, um, ~the transformer,~ uh,~ the transformer is the part that ~we ~we're focusing on right now and trying to push across the finish line. ~Uh, ~there are several major transform typescripts that's already done JSX, that's already done. Isolated declarations,~ uh,~ that allows us to emit DTS without going through TypeScript itself is done. ~Um, ~so these are already usable via the OXETransformer npm package and also via as [00:38:00] individual crates. ~Um, ~and the part that we're trying to finish right now is the target lowering transforms, all the syntax lowering transforms, essentially down leveling your ES 2024 code all the way down to ES 2015.~ Um, ~It does involve quite a bit of work because,~ uh,~ there are like things like async generators and,~ uh, you know, uh, ~decorators, which are probably some of the hardest transforms. ~Uh, ~but we are pushing to have this done,~ uh,~ before end of the year. So the transformer should hit completion status,~ uh,~ in the next two months. And after that, there's the minifier. The minifier is already in prototype status. It already works. It's in fact, it's already in roll down. ~Uh, ~it's just,~ uh,~ we haven't really completed all the, ~you know, ~more advanced minification things, but ~the minifier thing, uh, ~the minifier itself is architected to be multi pass. Essentially you can ~sort of ~like closure compiler, you can,~ uh,~ customize and pick a trade off between. Performance and the level of compression you want. And [00:39:00] there's also an experimental project,~ uh,~ from a community member that's building a more advanced,~ uh,~ he calls it a tree shaker, but I think it's more like an advanced code optimizer,~ uh,~ that is able to do a lot of cost and evaluation ahead of time compilation. ~Uh, ~to achieve even more extreme level of optimizations that we will likely look and integrate into OXC's minifier in the future. Um,~ so, ~yeah, so the transformer and the minifier are probably the two biggest chunk of work that's going to be, ~you know, ~Push to next, and finally there's going to be the formatter. A pretty compatible formatter. We leave it to the last, mostly because IOM is already doing a pretty good job. But,~ uh,~ we're just going to do it for completion's sake. ~Uh, ~once we finish the other more important part. Josh: I actually want to take a step back and ask about the linter. So as you may know, I work on TypeScript ESLint, which is type linting. For those who can't see us, we're both grinning right now. Typed lint rules are quite a big [00:40:00] feature that are currently available in ESLint and there is no type system that's currently available to my knowledge in Rust land. How do you work with that, the two different areas at speeds? Okay. Evan: Yeah, so I think OXLink right now is strictly focused on,~ uh,~ linking that can be done for your pure syntax analysis, right? ~Uh, ~I've read your blog post on the state of type URL linking, and I agree with most of the points, right? I think,~ uh,~ trying to simulate a subset of the types,~ Uh,~ in the tool itself, it's ~kind of ~a dead end,~ uh, ~and you have, you're always at the risk of, ~you know, ~drifting out of sync with official TypeScript and,~ you know, ~there's also a very limited set of things you can actually do by just doing,~ uh, static, ~static syntax analysis,~ or, ~or at one point you end up building TypeScript yourself in another language, right? ~Um, ~I think a lot of this is type aware linting. You cannot get around TypeScript itself. So the performance bottleneck will come from TSC, right? ~Uh, ~I think there are [00:41:00] two ways of looking at this. One way is to,~ um,~ plug TSC into the linter so that you have to run TSC to get the type information during the linting process. ~Um, ~but then in that case, you're only leveraging TSC for, ~you know, ~The linked roles that you care about,~ um,~ and in a way, I always felt that type of where linked roles is a kind of a blurry line or a gray area where is it really LinkedIn or is it type checking? ~Um, ~for example, TypeScript implements certain features that, for example, it warrants on unused imports, which is technically a LinkedIn concern, right? But TypeScript does it anyway. Okay. So I think there's like some blurry lines here, like some of it falls into the type checking category where maybe, you know,~ uh,~ it is to some extent. I think that type of where linked rules should be somewhat considered type checking and should be just done. As part of the type checking pass.[00:42:00] And in that sense,~ uh,~ I've mentioned this multiple times, but,~ uh,~ Johnson,~ uh,~ has a project, Johnson who works on Bolar for Vue, but he also has a project called TSS Linked, that essentially provides an interface for you to author linked rules,~ uh,~ and ~they, ~it's converted into a TypeScript plugin that runs as part of the type checking pass. ~Right, so, ~this way you leverage as much as possible during the type checking phase. And only do the necessary linting with the information you care about. ~Um, ~I think that's a ~promise ~promising direction, but it's also,~ um, you know, ~you can't really make it any faster. ~Uh, ~if TypeScript itself doesn't get faster, I think at one point. Microsoft is the only company that has the capability to make TypeScript significantly faster by maybe porting it to a native language. And,~ um,~ I think I'm pretty optimistic that's going to happen.~ Uh, ~I think Microsoft has the incentive to do it. So we'll see. Josh: Yeah. [00:43:00] At this point, it's one of the few remaining things that people constantly gripe about for good reason with TypeScript. They've gotten isolated declarations, project references, all sorts of other great features, and most users are pretty satisfied with the type system. In fact, ~most users probably are, or ~many users are annoyed at how big the type system feature set has become, but yeah, a faster typescript would be lovely. Evan: Yep, I think that's,~ um,~ unfortunately, that's also one thing that Voice0 probably won't really be, ~you know, ~committing to because,~ uh,~ the level of resources you need to really re implement TypeScript using a faster language and also keep behavior consistency with the current JavaScript, ~you know, ~TS based implementation. I don't think it's practical for a third party to do it. Josh: I would love to talk more about this, but we're running up on time. Evan, is there anything else you'd like to talk about or ring up while we have you? Evan: ~Um, ~not much. I guess if you've listened up to this point, I assume you're pretty interested in ~JavaScript, ~JavaScript tooling in general. So do check out our projects. If you're, ~you know, ~interested in building on top of them, join us ~in ~on discord. We're pretty [00:44:00] active. And also,~ um, You know, ~try them out. Give us feedback. A lot of this stuff is pretty new, but,~ um,~ we're trying our best to make them as, ~you know, ~battle tested as possible. ~So, um, ~and stay tuned for future news. I think,~ uh,~ early next year. A lot of this will. Be a lot more ready than they are now. I think both OXE transforms and Rodan are ~kind of ~in this like sort of we've done 80 percent of the work, but the, ~you know, ~the rest 20 percent is just taking a lot longer. That's ~kind of ~typical in the development lifecycle, but we're currently trying to ~push, ~push through that stage. Josh: ~Well, ~that's really exciting. Evan, I'm honestly ~really, ~really pumped about all this stuff you're doing. I think you have a great perspective on the ecosystem and its tooling, and it's going to be awesome to have a unified approach like Void0's projects coming out and being really integrated. So good luck. Evan: ~you. ~Thank you. ~Uh, ~yeah, one thing I ~kind of ~want to add is ~sort of, um, ~when I decided to work on void zero and all this, right? Like we, one thing we do want to avoid is coming off as being super arrogant and say ~like, ~oh, the current tool sucks. And we're going to ~like ~make things ~110, ~a hundred X [00:45:00] better. Like we're, we definitely want to make things better, but I think I've always looked at the evolution of the JavaScript ecosystem as something that's, ~you know, That's sort of, uh, sort of ~inevitable because,~ um,~ a lot of people look at the JavaScript ecosystem and just,~ like,~ joke about it or laugh at it and think it's, ~you know, it's a, it's in a, ~in a chaotic situation and it's,~ like,~ irredeemable to some extent, but I think that's,~ um,~ I,~ as, ~as someone who ~grew ~pretty much my career grew up with the job ecosystem, right? I've been part of it ~for, ~for a decade. ~Um, ~I really enjoyed this whole process of everyone just like ~kind of ~in the wild west and figuring things out and being able to just a random person can come up with a project and it gets adoption and gets widely adopted. ~Uh, ~and I think ~this, ~this stage of the language life cycle,~ um,~ Is not really a bad thing to have. I think,~ uh,~ as long as eventually we can ~sort of ~learn from all the things we've done and come up with something better. ~Right. ~And I think. A lot of the things we're building right now is built on top of all the knowledge we've accumulated collectively over the [00:46:00] years as a community. And I want to like, as we're building these newer and shinier tools, I still want to, ~you know, ~acknowledge all the great things that's happened in the ecosystem, because it wouldn't be possible to build better tools without these prior arts. Josh: I strongly agree. I think that's a beautiful and correct and wise note to end the interview on. Evan, thank you so much for coming on. We really appreciate it. And,~ uh, really, ~really looking forward to all the great stuff Void Zero is going to be doing. Cheers. Evan: Cheers.