Matteo Collina and Luca Maraschi ===

Paul: [00:00:00] Hi there, and welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket provides AI-first session replay and analytics, which surface UX and technical issues impacting user experiences. Start understanding where your users are struggling and try it for free at logrocket.com today. My name is Paul, and joined with me are Matteo Collina and Luca Maraschi, and we're here to discuss Platformatic. Now, I'd never really heard of Platformatic before this, so I'm curious to ask you two about what it does, because it seems like another infrastructure layer. And, like it says on the website, it's for enterprise-level Node.js application management. The biggest headline: you just got funded $4.3 million. So congratulations, and welcome to the podcast. I'm excited to dig in and learn more.

Luca: Thanks, Paul. It's great to be here.

Matteo: Paul, it's great.

Paul: And Matteo, how many times have you now been on PodRocket?

Matteo: I think twice, so this is the [00:01:00] third one. And I think we should do a fourth one. I have a lot of things coming, so I'm happy to come back a few more times. And you should cover Node.js more, to be honest. A lot of new things are coming to Node.js lately, so you should add some coverage there too.

Paul: I know Node.js has been popping off recently. There's lots of exciting stuff coming down, and because you're building this platform-level piece of infrastructure specifically for Node, you must be experts. So we should definitely have you back on, maybe for one of those Node episodes.

Matteo: Okay.

Luca: It goes back to first principles, right? Node.js [00:02:00] has been around for a long run, but Matteo and I identified that in this journey, in the build, run, and operate of these applications, there were still great opportunities to improve it, expedite it, and make it simpler and more accessible to everyone, including the enterprise. So we started our journey with the build. Our main question was: how can we create a standardized, easy, fast, accessible way to build Node.js applications? And that's how our open source toolkit started. And we've gained pretty great traction from last September until today. We grew from a few thousand downloads, three to five thousand, to 3.2 million downloads per month. Not per year, per month.

Paul: That is beyond explosive.

Luca: Beyond explosive, especially because if you look at our compounding growth, we've had compounding growth of 37% over the past nine months, which took us [00:03:00] from south of 400,000 downloads to north of 3 million. So that's been our growth. In this journey, we clearly saw that running those applications was also challenging. We're all familiar with the ecosystem of different JavaScript and Node replacement runtimes, and we thought that what was missing was not yet another Node.js, because enterprises have already made their choice. In the enterprise, the cycle is a slow cycle. We're all aware of that, but it's also a stable cycle. Choices are made and kept, because they carry a cost of implementation, maintenance, and sunsetting. What we actually looked at was: how can we take advantage of this gap and fill it with something that comes from the past, from the nostalgic world of, for example, Java, the most popular language in the enterprise, by the way.
So that's where it started, and that's how Watt came [00:04:00] to life.

Paul: And Watt is part of Platformatic, right? As in W-A-T-T?

Luca: Yeah, correct. Like James Watt, the inventor of the steam engine. We started to shape what we thought was this great revolution for Node.js and the enterprise. Watt and the Intelligent Command Center are the two products we launched about a month ago, five weeks ago. The last piece was: how do we actually manage, how do we operate, all these applications? We've heard the term platform engineering many times, but, like you just said in the beginning, infrastructure is a very generic term, a very generic substrate, a very generic foundation. We wanted to specialize it for Node, without forcing our users to buy yet another platform as a service. So we took the good parts of many platforms on the market, platforms so popular they probably don't even need to be named, and asked: how can we take that kind of [00:05:00] mindset and bring it to the enterprise, where it has to run on your cloud, not on our cloud? We don't want to host your application. You can host it yourself; you've already purchased that infrastructure. But above all, we wanted to add the context that is required for Node. So we took the expertise of our team, of Matteo and the rest of engineering, and my expertise, and packaged it into a product that people can take advantage of without having us fly around the world to help them scale their applications. The Intelligent Command Center is that bridge between the developer world and the operations world. We wanted to bridge the gap between dev and ops.
And in this journey, what we wanted to put in the hands of our customers was something that would optimize not only performance but also economic performance. That's actually the more interesting piece for our buyer. And so that's how our suite came together: Watt and the toolkit are open source, and the Command Center is our [00:06:00] commercial proposition.

Matteo: Something Luca brushed over that Paul is probably not familiar with: big enterprises, big companies in general, across probably all sectors, very often do not use, or cannot use, platform-as-a-service solutions. They cannot have code running on one of the various systems out there for running functions; they need to control it. This is very typical for the finance sector, very typical for the media sector, very typical for healthcare, and so on and so forth. And this is actually very important: this market was essentially underdeveloped, and there was not really a good solution for these developers, who ended up having to reinvent the wheel of their [00:07:00] platforms. And now they can buy our software if they want to, especially our Command Center, right, Luca?

Luca: They can always buy more and more and more.

Paul: So there's a modular aspect to Platformatic. It seems like you can buy one component, you can use the open source components, you can cobble it together how you see fit. And the fact that you're talking about air-gapped installations, and how it's a whole new market, that's a really great point.
Actually, back in the day I was working on platform engineering here at LogRocket (I don't do that anymore; now I just do the podcast). What I was doing was specifically air-gapped installations of LogRocket's observability and monitoring. There was a special version of the product for the bigger companies, we're talking 10,000-plus employees: you install it on your servers, so when people are using the mouse on your app, you don't send the data to LogRocket, you send it to your own installation. That had a lot of unique challenges, because it's not running on infrastructure you control. But this product, Platformatic, is specifically geared [00:08:00] to run on your self-hosted infrastructure, it sounds like. And the Command Center is that next-level enterprise piece of the puzzle that lets people manage the infrastructure they're running.

Matteo: So our goal is to empower Node.js developers. All our open source, which you can find, is very useful for developers at every stage: enterprise developers, but also self-starting developers. Everybody can use it to run long-running Node.js processes. To some extent, it's the missing puzzle piece for those things. It provides shared logging, a shared logging integration based on Pino. It provides automated metrics set up for Prometheus. OpenTelemetry tracing is already configured in the system. Everything comes out of the box. Things automatically restart. You can scale [00:09:00] multiple applications across multiple threads; we'll go through that in a moment.
If you want it, multi-threading is built in. All of these things are already set up for you in Watt, so you can try it right now, you can use it right now. It can also run any Node.js application: you can just take your existing Node.js application, run it with Watt, and you get all these benefits essentially for free. Where the Command Center sits is that it lets us take any Node.js application running on top of Watt and monitor it, decide how to scale it, do all sorts of interesting management on top, and provide one common deploy experience, or one GitHub Actions experience, in a very straightforward way for developers. We can even do preview environments, for example, or branch deployments as [00:10:00] some people call them, which is not possible right now in those kinds of isolated systems. So it's a really great thing for those kinds of teams.

Paul: As the manager or tech lead of your department, or a director, you have some things you're always worried about when you're rolling out these Node.js applications. So what are those key challenges you first set out to solve? And do you feel like the key challenges you're targeting with the Command Center and the Watt application server have changed at all?

Luca: I've led big teams with a large number of applications, and I can tell you that Matteo and I, in our previous life when we were working together, saw the exact same environment. The main challenge all these organizations are facing is purely related to overall skill set.
Node is simple to develop with, but very hard to maintain and scale. It requires a certain degree of confidence and competence to take full advantage of it. On [00:11:00] the other side, we're facing the needs of this new generation of digital organizations that have to ship features to customers at the speed of light and cannot over-invest in a platform. This dichotomy was, for us, the perfect space in which to operate. With Watt, we take advantage of simplifying the analysis-paralysis challenge every organization has, because you can deploy a monolith and split it later on; there's no problem doing that, hands off, you don't have to do anything, it's just configuration. On the other side, we optimize the resources these organizations already have in house. So I think our solution came at the right time, in the right place, for the right people: simplifying the complexity of scaling those applications along many dimensions, while streamlining the value these organizations deliver to customers. And you actually asked about it, you highlighted the [00:12:00] modular approach. From the beginning, the idea Matteo and I had was that it has to be something like a Lego set. We needed to give our users and our customers the freedom of choice. With many platforms as a service, you have no freedom of choice; you just have to follow the paradigm, the guidelines, of those platforms. For us, the open source is meant to be your playground, where you see the value. And then with our Command Center, we can help you take whatever you developed locally, whatever you developed for yourself, and scale it in a massive way.
And when we say scaling, it's not only a matter of scaling hardware resources, but also people; that's the most important thing. So the two aspects we always take care of are compute and people, the two variables in the enterprise success equation, along with cost-effectiveness. That's where our solution makes sense.

Paul: [00:13:00] Luca, are there particular things you'd want to be familiar with when it comes to running Node.js, if you want to do it efficiently and take full advantage of it?

Matteo: Let me take that one. One of the biggest problems companies have when deploying Node.js is that they end up monitoring and scaling those systems using the wrong metrics.

Paul: Hmm.

Matteo: This is probably one of the biggest plagues out there.

Paul: Like just memory, which may not be relevant to how your app performs?

Matteo: It's even worse. Let's take memory. For memory, they typically talk about the resident set size, the RSS. And that's important, because that is the amount of memory used by the whole process. Now, as you know, V8 is garbage-collected; it uses a garbage collector internally, and it allocates the heap, [00:14:00] that memory, in chunks. Now, what matters is not how much total memory V8 is using. Of course that matters, because that's what is held by the process, but in reality what matters is how much of that memory V8 is actually using inside. There can be problems with fragmentation and other details, and V8 usually does not try to shrink the heap, because it optimizes for performance. The end result is that you can have a very healthy Node.js application that is using, say, a gigabyte of memory, or 800 megabytes. And then a lot of companies out there configure things so that when the process reaches, say, a gigabyte or 900 megabytes of memory, it gets killed. And then you say: what is happening? Why are my processes continuously crashing? Well, you're killing them! They have plenty of memory available to run, [00:15:00] because they're only using half of it, but once the RSS hits 900 megabytes, you kill them. This is bizarre.

Paul: Versus, like, accounting for the fragmentation.

Matteo: The difference is: instead of monitoring the total memory, which of course you need to monitor for obvious reasons, don't kill your process when it crosses an arbitrary threshold. You want to kill your process when it actually fills the memory available to it. It's normal for Node to use all the memory it has available. Node wants to use all the memory you give it. If you give Node one gigabyte and it needs it, it will use one gigabyte. That's no problem; it will not crash; it will use the memory. Instead, you need to monitor the heap used: how much of that memory is actually being used by Node. But in order to use that, you need to instrument Node.js, get that data out of the Node.js process up into your [00:16:00] control plane, or as we call it, the Command Center, and use that information to make these decisions. Most companies out there right now are not doing that, and they are just very ineffective in how they manage their processes. Another key metric is the event loop utilization.
The event loop utilization tells us how much of the event loop is busy versus free, how much of it was used or is free for other CPU activity to happen. That number, which is a number between zero and one, or a percentage as we all know, quickly tells us whether there is compute capacity left in our process. However, that number may or may not be reflected in the CPU. In a lot of cases we have seen much higher CPU usage than event loop utilization, or vice [00:17:00] versa. And the event loop utilization is the only thing that matters, not the CPU. You can have low CPU utilization while your system is blocked for whatever reason. So you want to monitor the event loop utilization, not the CPU, for scaling purposes. Again, setting all of this up, all of these metrics, requires a lot of inner knowledge of how Node.js works. Of course, companies can go and learn all those bits, and eventually they'll become Node.js collaborators too and start maintaining Node.js. But that's a long journey, and most companies don't want to spend that effort, right? That's the reason we started this journey, and this is just one of the advantages we provide. Another one is multi-threading. As I said, the recent releases of Watt that we did in September allow us to run multiple Node.js applications within the same Node process on multiple threads. We can also spawn [00:18:00] multiple threads for each one of those and provide internal load balancing. Why is this so powerful and useful? Because it lets us use all the CPUs available to the pods, to the VMs.
And we can actually greatly reduce the risk of event loop blocking, because those threads are independent; they have independent event loops, and so on and so forth. So the resulting system is far more stable than a single-CPU Node system, because one route that is very slow, or that blocks the event loop a lot, has far less chance of affecting everybody else.

Paul: Thank you so much for walking us through those examples. I'm definitely in the bucket of people who use Node.js but don't instrument Node.js. So it's super interesting to hear about some of these things that are critical. If you're an engineer, you'll instantly understand that the wrong metric is not going to correlate with the right output. If you're [00:19:00] driving things from the wrong metric, you need to know what to track, how to get to the right metric, and how to surface it routinely so you can actually act on it. So, you mentioned the control plane; it's come up two or three times in our discussion. Matteo slipped and mentioned the word "pod," and that makes me think about Kubernetes. A lot of people have used Kubernetes, and there are instrumentation tools out there for running certain applications. I know Go has a pretty rich ecosystem for this, probably because it's just a binary you can run. But for Node, there are also things out there. So I'm curious how you see yourselves playing into that existing set of tooling. Do you feel like Platformatic is just very niche in what it does?
And I don't want to say niche in terms of what it can run, because it can run a lot of things, apparently, but niche in the sense that the depth of knowledge you're coding into the Command Center is unparalleled compared with the kind of information other folks get out of prefab tooling. Would you say [00:20:00] that's the case?

Luca: I think that's the case. You touched really well on the problem, and the reason we started building all this software is to solve this problem. If you think about how Kubernetes came to life, it's a generic tool, built to be language agnostic, and it understands the concept of physical resources and scaling them. What Matteo was describing is that we have physical and logical resources; memory, for instance, can be physical and logical. And in order to take full advantage of your infrastructure, you need to balance these two. Kubernetes understands perfectly the cost of physical resources like memory, CPU, and network, at a very binary level, if you will; it's just black or white. For us it was very important to inject context and say: no, you are running Node, we understand how Node works, and we can tell you how to help with spawning physical [00:21:00] resources, pods. Docker is the same. Docker is an abstraction, fully language agnostic, because it was born as an agnostic layer. We believe that on top of these agnostic layers, a specific layer is required to take full advantage of your hardware resources. And just to give you an example, our autoscaler is the clearest example of how to bring all this data to fruition, because we are now able to instrument the horizontal, and in the future the vertical, pod autoscaler and say: this is the composition.
This is the exact snapshot of what's happening; pod autoscaler, new machine. And we take control of orchestrating all this infrastructure scaling, but for Node. As Matteo was describing in depth, a different depth is required, because, imagine, for example, the difference between a server-side-rendering application and an API. They are different, right? But to Kubernetes they look [00:22:00] exactly the same: they use a lot of CPU, they can use a lot of memory, they can keep using a lot of memory, probably because they're caching, doing some in-memory operations. But we understand the context of each of these applications. We understand exactly that you just deployed a Fastify application, with Next.js and with whatever Express app side by side, and we know exactly how to fine-tune, how to create the correct compound, for your infrastructure to be highly optimized. And this is a fundamental piece for the enterprise. Let's be clear: the enterprise completely cares about cost optimization, and that's where we see the biggest opportunity.

Paul: I guess the reason I was mentioning Kubernetes, Luca, is that I totally hear you on the physical versus logical memory and CPU thing. I do have experience building a custom metric and scaling on that custom metric. And there are prefab Docker containers out there that say: oh, I know you're running Node, I'll surface these metrics. But now I'm already [00:23:00] stopping myself, because you have to know how Kubernetes works, you have to know how its load balancing works, you have to set up the custom metric. And after you mentioned the Express thing, you're going even deeper than that, because you understand the actual Node app; it's not just the metric. There's even one more logical layer.
There are, like, two logical layers here. One is inside the Node process via instrumentation, which in some flavors does exist out there, in a very manual, cobbled-together, non-optimal manner. But then there's a second layer, which is: what are you running in this Node process? Man, okay, now I understand. That second layer is really neat, because there are other players out there, I guess Vercel, right, who say: hey, one-click deploy, we'll figure out what you're running. I'm sure they don't do much optimization on it, but they try to go into that second layer and ask: what are you running on our serverless platform? It's cool that I can do that self-hosted, that I can have something that understands what I'm running.

Luca: And there is a level of entropy, right? Because, like you said, if you were to deploy your application on [00:24:00] Vercel, to take an example, or on whatever other platform, Netlify, you pick them, you still have a lot of work to do. Because imagine you are an enterprise running an e-commerce site: you still need to change your paradigm from whatever you used before to the new model, to take advantage of something that is no longer in your infrastructure. So you have this kind of cost, for the sake, if you allow me the term, of infinite scalability. But the reality, and I've faced this conversation many times at the enterprise leadership level, let's be very honest, is that the majority of businesses don't require the infinite scalability of Netflix or Uber or Google Search.

Paul: Well, that's the arbitrage of cloud.

Luca: Sure, but the cloud was actually born to solve a different problem: scalability in terms of time. Because there is also scalability in terms of cost, right?
So the correct balance for the enterprise, [00:25:00] and when I talk about the enterprise, Matteo mentioned them: banks. Banks are what we call low volume, high value. If you think about the number of transactions happening in a bank, they are probably ten orders of magnitude lower than what Netflix is processing while we are chatting. Or Uber, or any other organization facing these massive engineering problems. But banks care more about value, right? We agree: if you and I are making a transaction, it's a value transaction, not a volume transaction. And so we tap exactly into that space, where we say: we optimize for the volume that you need, because we understand how many requests per second you're doing. We can see it; Matteo just said it, we have open tracing, we can see anything. And you want to see everything, because it's very important for you to see what's happening in your ecosystem. And so we use that kind of [00:26:00] knowledge to understand, for example, you mentioned Express, or we mentioned Fastify, or any other framework, the context of that framework. Because we are not telling you to change frameworks and rewrite it in, whatever, Fastify. We are just making sure that whatever piece of code you are running is highly optimized for the target, for the goal that you as an organization have. And then if you want to change, clearly, trust me, changes are always possible, but the ball is in your court. We just give you factual data. We just say: this is how much we can optimize; more than that, it becomes hard.

Paul: Yeah, gotcha. Let's say you're deployed at an enterprise, and you're running a Fastify app...

Matteo: Yeah. [00:27:00]

Luca: It gives you the information; you need to read it, you need to make sense of it.
You need to combine it. There's a lot of intelligence running behind the data we're collecting. It's not as linearly simple as: the RSS goes high, CPU is high, memory is low, traffic is high, what's going on? There's a lot that goes beyond that. The way I like to describe it: we are solving a nonlinear problem. If it were a linear problem, trust me, it would be extremely simple for everyone to solve. But it's a nonlinear problem.

Matteo: Let me give an example. I was recently working with my team on a feature, and when I looked at the graph in our monitoring, I said: nope, the monitoring is wrong, there is a bug here, these numbers [00:28:00] are not correct. And the team was literally looking at it going: how come, this is not possible? And I said: no, no, let's go deep. And we went deep, and we found the problem. What we want to encode, and what we are encoding, in our Command Center is this specific knowledge that a few people on the Node.js team have, and that is not publicly available, so that you don't need a Matteo looking at your graphs and saying: no, there is a bug here. The system can essentially act on its own, make recommendations, tell you that you need to change your architecture, that you need to do certain things, scale your system better. With the right data, everything is possible; you just need to act on it. And we call it the Intelligent Command Center for a reason, right?

Paul: It helps you make sense of it.

Luca: It has a little bit of a brain.

Paul: Yeah. It's a mix, is what I'm hearing.
It's a mix of: we can self-optimize and auto-apply [00:29:00] things to make your application run better.

Matteo: Up to a point, up to a point. At some point the developer needs to be in the loop, the human needs to be in the loop. If the human is not in the loop, none of this software is really viable, because at some point a developer, a human being, needs to make a decision about what to do; otherwise bad things will happen, to be honest.

Paul: Yeah, of course, especially in the enterprise. You have to have controls.

Matteo: Put in place. So yeah, that's it.

Luca: You cannot have a copilot that comes in and rewrites your application. If you think about a bank, they need to be ISO compliant, PCI compliant. There are so many procedures that, to be honest, are probably outside even our scope. There is a legal aspect. But what we are envisioning for Platformatic, and this is the double meaning of our name, is that we want to be the platform for the platform. We want to create the foundation [00:30:00] that we can then use to accelerate many other functionalities. You just mentioned the self-healing aspect; we mentioned many things around analysis paralysis. So for us, the opportunity in the future is to help enterprises with the acceleration they need to optimize their time to market. That's the end goal of every sort of company in the world: always fulfill more customers. That's what we're here for.

Paul: Got it. It sounds like this would be a no-brainer for an enterprise, especially when the maintenance and the optimization, like you said, is a value.
It's a value play, not necessarily a scale play. Not that this isn't geared to make you scale better, but in terms of the type of customer this really punches hard for.

Matteo: It also has to scale. But being able to scale means various things. It can mean handling more traffic. It can also mean scaling down [00:31:00] and increasing the density of the applications you can run. A lot of very sensitive enterprises and companies have deployments where they run essentially one instance of the system for each of their customers, because they cannot share anything in their data, and so on and so forth. Those kinds of systems are very expensive to run. And imagine if they are built in a way that uses multiple microservices, for example: it becomes very, very expensive very quickly. Our technology also allows you to run all of them, as I said, in a single Node.js process, dramatically shrinking the resources used. And because we have all these internal metrics, we know exactly which services might be causing problems, which one might be blocking the event loop, which one is in trouble. We can immediately pinpoint that this is the one with the problem, so you want to do something about it, [00:32:00] and give the developer actionable feedback on what's happening.

Paul: A lot of people could use it today.
So for folks listening, because a lot of people listening are developers themselves, hackers, makers: if people wanted to try out the different modules of Platformatic, what would be the best way for somebody in their bedroom or basement to just try it out?

Matteo: We have a nice quick start guide. You can take it, use it, and verify that it works.

Paul: And would you suggest just running one app, like a Next.js, a Remix, or whatever?

Matteo: That's exactly what it is. Yes, it runs a Next.js application next to a generic Node application. [00:33:00]

Paul: So it benefits anybody writing Node.js applications, even though the business is really funded to target that enterprise market. You always get great things out of companies with that bifurcated marketing, because the people who don't need enterprise features still get great engineering they can benefit from. So I'm definitely excited to go check it out. Thank you for your time coming in and really breaking down the value. It was really fascinating to hear about your experience with Node.js and how you're letting it show its colors in this product.

Luca: Absolutely. And the future is bright. I think the path ahead of us will also excite all the people listening to this podcast and the people following us. This is just the beginning.

Paul: If people want to keep up, since this is just the beginning, where should they pay attention for updates? I'm sure GitHub is one.

Luca: Twitter, GitHub, our blog, our newsletter. With each major launch we record a set of videos that we release along [00:34:00] with the product. There are many venues to follow us. Matteo is speaking at many conferences; I'm also speaking at conferences.
So following us directly is the best way to stay up to date.

Paul: On following you really quick: what's your handle, Luca?

Luca: Very simple: Luca Maraschi, just my first name and last name. I wasn't very imaginative the day I created it.

Paul: What about you, Matteo? Matteo...

Matteo: It's over there.

Paul: Collina. I just have to say it so the people listening know. All right, we'll also include those, since you mentioned it's a good way to keep up: you can go follow Luca and Matteo. Guys, thank you so much for coming on again. Thanks for your time, and I hope to have you on again, because we will be talking about more Node.js. Fascinating time for Node.

Luca: Happy to be here.

Matteo: Bye! Bye! Bye!