Stephan Ewen: One of the joys of these early days is, like every day, you're talking to a different person, and you're learning something new about how they look at things, what they like about it, and so on. This is evolving almost every week. Eric Anderson: This is Contributor, a podcast telling the stories behind the best open-source projects and the communities that make them. I'm Eric Anderson. I'm excited to have Stephan Ewen on the show today. Stephan, thanks for coming. You're one of the co-founders of Restate. We should also note you're one of the co-founders of Flink and Data Artisans. Stephan Ewen: That's right. Flink and Data Artisans were my first real job after university, if you wish. Eric Anderson: Very good. And what a job. We'll get into Flink and Data Artisans. Well, actually, maybe we'll just start there, and we'll kind of evolve your story to Restate. What happened after school? And how did you get going on Flink? Stephan Ewen: When we worked at university on this research project, it was the days of the Hadoop-versus-databases wars, and we were working on a project that tried to marry some of the database and Hadoop concepts. Stratosphere was the name of the research project. After I was done at university, we thought, the students working on the project and the postdoc, Kostas, who was my co-founder at Data Artisans ... We thought it was actually too nice to just not keep working on this, and we founded a company, Data Artisans, around it and donated the project to the Apache Software Foundation. It became Flink. It was very much focused, actually, on batch and graph analytics and so on. Nothing stream processing there yet. We did find out, though, that there were a bunch of other companies, predominantly Spark and Databricks, very entrenched in this space, and that we had a very hard time getting any of those use cases adopted with Flink, even though it had a few interesting properties at that point in time. But about nine months to a year into the journey, we actually discovered that the foundation of Flink was really good for stream processing. And with a few additions, most notably the distributed snapshots and event-time mechanisms and so on, it was actually a really stellar system for stream processing. And then we pivoted our focus to that. And that's what I think Flink is known for today: its stream processing capabilities. Maybe we should even say the unification of batch and streaming is probably the tagline of Flink today, but the real, real, as in not micro-batch, stream processing was the selling point during the first years after we pivoted to stream processing. Eric Anderson: And you made quite an impression on people at that time. I was at Google Cloud, and we were working on this unified batch and stream processing thing as well. And I remember my manager at the time saying, "Apache Flink. This could be as big as Spark. Maybe this is going to be the Spark replacement." There was a lot of enthusiasm about the future of streaming at the time. Stephan Ewen: Thank you for saying that. That's very kind to hear. I think, technology-wise, there were a bunch of very good ideas in Flink. The Spark people also had good ideas. Let's not discount that. But I think there were a few interesting things in Flink, most notably the idea to put streaming in the core, basically make batch a special case of streaming, have an engine that actually in many ways treats batch as a special case of streaming.
Starting from scheduling, to fault tolerance, where batch just has to add more fine-grained intermediate checkpoints, to even the semantics of the operators and so on. I think over time we figured out a fairly nice model there, and I'm happy people got excited about it. I think Flink is still, to this day, a pretty cool technology, but I would say, in hindsight, the problem of stream processing is probably not the technology. It's more the accessibility of the mental model that proved to be the biggest challenge in getting mainstream adoption. I think we're really only seeing over the last years, with streaming SQL, with the better end-to-end integration of streaming pipelines with data lakes and so on, all of this coming together so that it's really unlocking its potential in the mainstream. This has taken quite a while and ... Eric Anderson: I agree with you. To this day, it feels underutilized, and I don't think the technology is holding it back, as you point out. Tell us what happened ... You were at Data Artisans. Flink had its early moment of attention. What happened to Data Artisans? And at some point, you kind of split off from the team. Some of the folks went to Immerok, and now you're working on Restate. Maybe you can catch us up through there. Stephan Ewen: The Flink story is, I think, even for an open-source project and its commercial trajectory, very interesting. Almost unique in a few ways. Data Artisans exited after a little over four years. Alibaba acquired it at the end of 2018, turning into 2019, and the reason was partially that Flink was at the point where it was actually starting to look really good, kicking off, but it had been, admittedly, a very tough four years. I mentioned this in the beginning. We started out with similar use cases as Spark, who were very strong, very dominant, very entrenched in their adoption. We pivoted to streaming. On the streaming side, streaming was still very nascent back then. We not only had to convince people to use Flink for streaming. We actually had to help tell people what's possible with streaming. Flink was maybe one of the first systems that did stateful, exactly-once streaming in a way that actually worked at scale, actually worked for complex jobs, and so on. And it was a crunch getting it to the point where it was on a trajectory to kick off. It had taken a toll on some of the people in the team. When Alibaba came with a, I would say for that point in time, very sweet acquisition offer and a full commitment to continue the open-source project, keep basically the entire location, even invest into the engineering team, and into the local sales team even, it sounded like a very attractive offer, and we took it. The interesting thing is, though, that shortly after the acquisition, Trump was elected. US-China relations took a turn for the worse. A lot of our business focus was in the US. We couldn't really act on that for several reasons. We actually had to even partially wind it down. And so Flink then continued very strongly on the open-source side but not as much on the commercial side, because of those reasons. For geopolitical reasons, as weird as it sounds. And that didn't actually change throughout the three years that I stayed at Alibaba. I left in early 2022. By the time I left, there was commercial uptake. Commercial adoption. We were working on all of this. But compared to the sheer size and activity of the open source, it was weirdly small.
That actually did create the opportunity for a bunch of folks to go out, found Immerok around the open-source project, independent of Alibaba, and start saying, "Okay, we're actually going to go all in and push that. Finally get Flink the commercial backing and adoption, everything that it should have." And Immerok exited very quickly to Confluent, because Confluent jumped on that train as well. This is actually interesting. Flink had two exits in a row, a few years apart, because the first exit was held back not by the success of the project but more by geopolitics, in a way. Eric Anderson: There was almost an eastern hemisphere exit and a western hemisphere one. Stephan Ewen: You could think of it like that. Eric Anderson: Now, you're working on something new. It's certainly not a Flink derivative, but it takes some learnings from your stream processing days. Stephan Ewen: It does take some learnings. Restate, if you look at it superficially, has nothing to do with Flink. It actually solves, I would say, almost the exact opposite type of use cases and problems. Where Flink is analytics on real-time events, Restate is transactional processing of real-time events or requests. Transactional processing as in ... Think any time you have a piece of code that updates multiple databases, or a database and a queue, and then makes a call to a service, and you want to make sure these things happen all or nothing. In a workflow-style fashion. Restate is basically a tool for that. You can think of it as an extremely lightweight, very low latency, very generalized workflow-as-code engine. Basically turning any request handler, any event handler, into workflow-as-code that can interact with any other event handler in an exactly-once communication fashion. It can be stateful. It can maintain state. It can run as an asynchronous task that you kick off and reconnect to later, and so on. It solves real-time transactional problems, where Flink solved real-time analytical problems. This is how we thought about it. We've worked on that side of the problem sphere. Let's work on the other side of the problem sphere. It was, funny enough, inspired by some of the usage of Flink we saw, where folks were actually using a real-time analytical system to try and approach real-time transactional problems, just because the tooling for real-time transactional problems isn't actually great. If you're in the sphere of using a database, a single database, it's of course fine. A SQL database with its transactions. Amazing tool. But once you leave the boundary of a single database, it's actually not that easy to do something. You can go to one of those heavyweight workflow engines to help you. But if that's not your thing, you're kind of back to wiring together message queues and queue consumers and all this manually and then managing all of it. You're usually almost implementing a poor man's workflow engine yourself, or a poor man's stream processing ecosystem. Something like this. And this is where we saw folks using Flink, even though it was not built for this, just because they didn't really find too many other tools. And that's where we thought, this sounds like a space you want to go into. Real-time transactional problems sound interesting. Event-driven. We've worked with and learned a lot about event-driven architectures. We think they're a very good foundation to solve this. They just need a different way to be programmed and presented to the user. Building manual event-driven pipelines is not something users tend to ... I don't want to say do well.
Some do it well. Some ... It's a very hard thing to get right. It's much harder than most people initially think. Building something manually with a Kafka consumer and a database and across multiple services and getting all the corner cases right is very hard. We thought, hey, what's an easier way to make this approachable? And we actually arrived at workflows as code, durable execution, but generalized from the workflow sphere to RPC and event handlers. All of that is a really easy, very approachable model to cover all those use cases in an efficient way. That's what Restate is at its core. Eric Anderson: And part of the reason this hasn't been addressed historically, I think, is that there's always been this, as you pointed out, this confusion around the boundary of the database. There was a time when people would rely more on the transactions of the database, even do stored procedures and other things to accomplish some of these tasks, but with increasingly distributed systems, and if you wanted to rely on multiple steps or anything outside of the database, that kind of abstraction is no longer helpful. Stephan Ewen: That is true. I think there are a few reasons why maybe it hasn't been addressed before, or why now is a good time to address it. I would say one of the things that works in Restate's favor, why it's a good idea to do this now, is ... If you look at the way networks and storage have developed, networks have become incredibly fast. The increase in network bandwidth over the years, and also in the next proposed versions of Ethernet, is just mind-blowing. The network is not a bottleneck in the foreseeable future. The IOPS you can get out of modern disks, also. It's absolutely insane. It's very hard to saturate this. This has actually made it feasible to add much, much more fine-grained durability at intermediate steps without incurring a cost, in throughput, in latency, or in money, given the increased network and storage capabilities. Also, the prices of storage went down. You can see this. If you look at the chart of gigabyte-per-month prices in S3, it goes down all the time. Storage is actually cheap and efficient. You can do fine-grained durability. You can do it way more aggressively than you could afford to do it five years ago. And doing exactly that, this very fine-grained durability, is what gets so many problems out of the way. Because without a durable, well-established intermediate point to fall back to, you're saying, "Okay, I've kicked off a bunch of things, and half of them have failed, half of them have come back, but my application really needs to figure out itself which have come back and which have failed," and that's a very hard thing to do. If you have fine-grained durability, and at every point in time you know exactly: this has definitely committed, maybe this one is in an unknown window, none of this has started yet, it's a much easier problem to solve as a programmer. But it really requires a very efficient way to establish this fine-grained intermediate durability. Hardware trends have made it possible. Now it's time to build a software architecture that can exploit that. And that's what we're trying to do with Restate. Eric Anderson: And there's been an evolution of workflow-ish solutions of late, right? I mean, there was ... The cloud providers provided some. The folks at Temporal and others have iterated. It feels like we're headed to something. Stephan Ewen: I think so. You can say that about these new modern workflow engines.
Workflows as code. Durable execution. Call it what you want. It's all related. This is a space that is shaping up. Temporal has definitely been one of the first to establish this durable execution paradigm. Maybe even before Temporal there was Azure Durable Functions, and Microsoft Orleans or so, kind of going in a similar direction. We now see a plethora of individual projects that try to do durability for compute, that allow you to capture intermediate progress efficiently. What we're trying to do in Restate is not really just build the library that captures that durability in your program, the SDK, the hooks that your program goes through to persist things or to connect to durable execution channels and so on, but the architecture. The storage system that basically, in an extremely lightweight and low-latency way, can handle invocations, invocation progress, acknowledgments, service-to-service communication, all of this with the lowest possible overhead in terms of storage cost and latency. And basically the connecting piece between all of this. You can think of it as taking the idea of durable execution from a workflow paradigm and making it really a more general-purpose compute paradigm that you can use in any RPC handler, in any event handler, in something like a classical workflow. But then you need a system that actually connects those RPC and event handlers and workflows and maintains the fast journal, a journal that actually understands: this journal entry is both progress in my workflow, but it also represents an invocation of another service. I can atomically record this, and I can immediately dispatch this. You can almost think of it as: what Kafka is to a low-level event-driven application, Restate is one level higher. Sort of the broker that connects all the services, but doesn't connect them purely in a "we give you an event, and you figure out what to do with it" way. We give you the high-level abstraction of a durable execution triggered by an event or an RPC, which actually turns into an event internally and all that. Eric Anderson: For me, that's a helpful analogy. In the case of Kafka, I give it an event. And behind the scenes, they're doing partitioning and various persistence things to ensure that event is going to be there when I need it. And in your case, I hand it a transaction, and you can ensure that it gets executed, and I can know the state of the execution at any point. Stephan Ewen: Yeah. In Restate's case, you hand it an invocation. In Restate's case ... You can always think of it as: you define a service. It looks very similar to what you'd write in TypeScript or JavaScript with Express, a service with a bunch of handlers. Or in Java, you go with Spring Boot and define your handlers. And then basically you make this a Restate service by saying: you're not invoking this directly. I'm making Restate the reverse proxy for that. Restate sort of re-exports the API surface. Any invocation that goes to the service actually goes through Restate first, and then Restate manages that invocation in a durable way. It actually dispatches it. It invokes the service. It maintains the lifeline to the invocation. It uses this to stream back events for partial progress. Whenever the handler goes through a certain step, it's creating a promise, it's making a call to another handler, it's just running a block and saying, "Okay, I want durability to capture the result of that code block."
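To make that concrete, here is a minimal sketch of what such a service can look like, following the shape of Restate's TypeScript SDK; the service name, handler name, and the payment helpers are illustrative, not taken from the conversation:

```typescript
import * as restate from "@restatedev/restate-sdk";

// Hypothetical side-effecting helpers, stubbed out for the example.
async function chargeCard(orderId: string, amount: number): Promise<string> {
  return `payment-${orderId}`;
}
async function emailReceipt(orderId: string, paymentId: string): Promise<void> {}

// An ordinary request/response handler, made durable by Restate.
const checkout = restate.service({
  name: "checkout",
  handlers: {
    process: async (ctx: restate.Context, order: { id: string; amount: number }) => {
      // Each ctx.run block is journaled: if the process crashes and the
      // invocation is retried, steps that already completed are replayed
      // from the journal instead of being executed again.
      const paymentId = await ctx.run("charge card", () =>
        chargeCard(order.id, order.amount)
      );
      await ctx.run("send receipt", () => emailReceipt(order.id, paymentId));
      return { paymentId };
    },
  },
});

// Serve the handlers over HTTP so the Restate server can sit in front
// of them as the reverse proxy described above.
restate.endpoint().bind(checkout).listen(9080);
```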
These events are then piped back through this lifeline to Restate, which internally has a durable log through which it funnels these events, and it then sends the acknowledgments back or also acts on these events. If an event represents a state update, or if it represents a call to another handler, or a durable sleep or so, then it acts on it. It schedules the corresponding actions. You can think of it as taking the programming level from transporting events to managing durable invocations. The invocation as a whole is durable, but also the execution steps of the invocation are durable. It's basically taking this one level up. Sounds academic, but it actually makes a world of difference, especially because, if you define these handlers and make Restate the proxy, it doesn't really matter how the invocation comes to be. Whether it's connected to Kafka and becomes the processing of a Kafka event, whether you call it directly from another handler, whether you call it in an API-gateway fashion from the outside. That's one of the nice pieces. The other nice piece is, once you've defined it as a set of durable handlers, it also doesn't really matter where it runs. You can run it in a container in a long-running fashion, or, the beauty of durable execution, you can also take something that looks like a long-running invocation and put it on something like AWS Lambda, on FaaS, and just let it run until it hits the first point where it, for example, did an outbound RPC and is now waiting for the response. And you know that it's waiting for a response, and you don't really want to wait on Lambda, so you kill it, but you're not losing anything, because you have the progress through the durable execution. And then, when the response to the RPC comes, you bring back the Lambda, recover it through the durable execution, give it the result of the RPC, and continue from there. You kind of pretend it's long-running, but you really chop it into stages implicitly during the execution. So you have a few very interesting things. You just write durable event and RPC handlers. It doesn't matter if they run in a long-running container or in a Lambda function. Doesn't matter where they get invoked from. From Kafka, RPC, or anything. And honestly, these functions can be anything from simple RPC-style functions that just talk to one or two APIs, or even just to one, or to a database, to really very, very long-running workflows. It can be anything that runs from five milliseconds to five days. Doesn't really matter. You program it all the same way. You kind of blur the boundary between what is a workflow, what is an RPC handler, what is an event handler, what is an asynchronous task, what is a deferrable function. Interestingly, it all becomes almost the same thing, just by virtue of decoupling the actual execution and runtime from the conceptual runtime through this mechanism. Eric Anderson: I wanted to get into that. That's been my understanding as well: as the ease of doing this and the abstraction improve, the areas in which we're going to employ this seem to grow. Historically, I feel the workflows targeted certain use cases. You described long-running ones or human-in-the-loop ones. And my expectation is that, with Restate and other kind of easier-to-use models, maybe we just use them all the time. Every event. Why not just pass it through Restate? Stephan Ewen: I would say that's the end game of everything.
As a user, it should be very easy to make the decision to just use it and say, "Yeah, I want to use durable execution here," because that allows me to think in an extremely easy way about the problem. I'm making calls to a different set of APIs here, and I could try to think through: is this whole sequence really idempotent? Is it idempotent under concurrent requests and everything? You know what? Let me not worry about it. Let me just use durable execution. It's going to solve the problem for me. And that should be a very easy decision, because it should add so little overhead that you feel it's not going to get in the way. It's not going to add so much latency that I can't use this on the synchronous path of user interaction anymore. It should be something that's cheap and fast enough that I just want to use it, in the same way people said, "Yeah, if I don't know what to do with my data, just put it in Redis," or something like this. It should be a similarly easy decision. I think there has been a continuous trend of workflow engines getting more lightweight, and I would say Restate is maybe the next step of that. You can think of starting from the old IBM workflow engines, extremely heavy in how they spawned their tasks and the whole protocol around that, to maybe the more modern workflow engines, like Camunda, and then, you mentioned, Temporal. Temporal is also based on the workflow-as-code paradigm, which is, in its way, a lot more lightweight anyway, because you're not defining these complex graphs in a separate language. You're not saying, "Okay, before I turn this into a workflow, I have to go from code to something like BPMN markup or so." You're actually staying in code. In the Restate case, you're going one step further. You're just keeping it an RPC handler, even. You don't even have to think about it as a workflow and factor it into tasks and activities and then workflows. You just keep it as an RPC handler and say it's going to get durably invoked. You can actually say, "This call should be persisted before the next step happens. This call should be persisted." Just connect the individual calls to the context, and I'm done. This is all I need to do. Plus, it's also going to be really fast and really easy to deploy, so that the barrier of making the decision to use durable execution is as low as possible. Eric Anderson: And what exactly is the packaging, the developer experience, for using this? I add it as a library to my code, and then I do, as you mentioned, some annotation here or there about where persistence needs to happen. And then there's a cloud service somewhere that I sign up for? Stephan Ewen: Restate has two parts. It's very similar to Kafka or a database, if you think about it. There's a server component that does the heavy lifting, which is about the storage, the durability: the database or the Kafka broker. And then there is the SDK, which you can maybe think about like your JDBC driver or your Kafka consumer. Those exist in many languages. The server is just a single binary written in Rust. It's openly available. You can just run it yourself. It's actually fairly convenient to run yourself. We've put a lot of attention into making it easy to run. I think the developer experience and the usability around this is actually really nice. We've learned so much from the Flink days, of things that we regretted there.
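As a rough sketch of that two-part setup: you run the single server binary, register your service endpoint with it, and from then on invocations go through Restate's HTTP ingress rather than to your service directly. The port and the service/handler names below are illustrative defaults, continuing the earlier example:

```typescript
// Call the durable handler through the Restate server's HTTP ingress
// (commonly on port 8080), not the service process itself. Restate
// records the invocation durably before dispatching it to the handler.
const response = await fetch("http://localhost:8080/checkout/process", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ id: "order-42", amount: 1999 }),
});

const result = await response.json(); // e.g. { paymentId: "payment-order-42" }
```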
Plus, we've hired a few people who, I would say, are borderline pathologically obsessed with developer experience, which shows in the system in a good way. It's one of the things that people like about Restate. It has a really nicely thought-through developer experience, and the adoption is really easy. It's literally: download a single binary, get started, and you're done. You can run it. And it's not a dev server. It's actually the real thing. You can just take that thing as it is, even with its storage and database, move it to the cloud, and continue from there. There's also a cloud service available, with a free tier for you to try it out and run simple applications. So it actually has both options, managed cloud and self-hosted open source, or source available, depending on how picky you are about the licenses. It's the BSL. A very permissive grant. You can run it with pretty much anything you do, except build a competing managed service with it. I think that's the new standard in most open-source projects these days. It's very common there. Eric Anderson: Given the broad applicability, how have you communicated to the market? Are there certain communities you target? Or what are the examples you like to put on the website that help get people understanding what you're doing? Stephan Ewen: It's actually, I would say, the hardest of all the work that we're faced with. It's a very broadly applicable, horizontal product. It's not one use case that you clearly associate it with. That being said, I think most users that use it really look at it as a lightweight, fast workflow engine. That's how we're usually talking about it. There's a second angle. Whenever I'm talking to folks coming more from the stream processing, from the Kafka side, and so on, we also use the analogy: you can think of it as the OLTP counterpart to Flink and Kafka Streams being OLAP. Anything that you'd usually not use Flink and Kafka Streams for, but that you'd do manually with a Kafka consumer, is a Restate use case. Honestly, every time you're using a message queue for something that triggers anything in workflow-style or transactional-style logic, I think you have a great Restate use case. This is really what we're building this for. But if you go to the website these days, the first thing it really shows is lightweight workflows as code. Asynchronous tasks ... I mentioned before that it kind of blurs the boundaries a bit between event-driven applications, async tasks, workflows, microservice orchestration. All very similar, but we're still putting these as individual use cases on the homepage, because I think most people think about them as distinct things. Although, like I said, our end game is that they should all become just the same thing. I think the idea that you can make the workflow guarantees so cheap that they become almost pervasively usable is not an idea that everybody immediately, intuitively connects to. It's really about taking people from: think about it as a workflow. Now, think about it as a really lightweight workflow. A really fast one. Really easy to run and deploy. Really something that doesn't change the shape of your application anymore. If you did write an RPC application, it stays that way. It just keeps looking like an RPC-style application and so on. And take it from there.
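On the message-queue point, hooking a handler up to a topic is a configuration step rather than hand-written consumer code. A sketch, assuming the server's admin API exposes a subscription endpoint (commonly on port 9070) and a Kafka cluster named "my-cluster" is configured in the server; both names are illustrative:

```typescript
// Ask the Restate server to pull events off a Kafka topic and durably
// invoke the checkout handler from the earlier example for each one.
await fetch("http://localhost:9070/subscriptions", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    source: "kafka://my-cluster/orders",
    sink: "service://checkout/process",
  }),
});
```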
But honestly, it might be different in half a year, because one of the joys of these early days is that every day you're talking to a different person, and you're learning something new about how they look at things, what they like about it, and so on. This is very much in flux and evolving almost every week. Eric Anderson: That's helpful, to think that anytime I'm reaching for a message queue, I'm probably in a situation where I would benefit from Restate. Stephan Ewen: Yes. And if I can clarify that ... If you look at the use cases that people have with Kafka, I think there are two. There is the log, the classical log, that you use for long-term retention, that you feed into analytical systems, that you compute materialized views of with systems like Flink and Materialize and Kafka Streams and so on. And there you really care about throughput, probably more than latency, in the sense that a latency of a few seconds is good enough for most of these analytical-style use cases. But if you don't have a log that feeds into analytics or materialized views, if you really think about it as a message queue where you put an event because you want the thing that this event represents to happen ... Somebody clicked the checkout button in your shop, and you want that checkout to happen, because otherwise it would be really bad for the user. And that's why you're writing it to a message queue, because you absolutely want this to happen downstream, but then there's got to be something that acts on this event and makes the complex process represented by that event happen. If this is why you're putting a message queue somewhere, because you want a downstream flow of things to happen, you have a Restate use case. Eric Anderson: And in that world, as a user, I don't interact with Kafka as much? I just kind of interact with Restate? Or have I kind of jumped the shark a bit? Stephan Ewen: They play very well together at this point, right? Like I said, if you go to the website: every handler that you define and connect to Restate, you can invoke over HTTP, or with clients from other programs, or by creating a subscription to a Kafka topic. Restate can drive them directly by pulling events off Kafka. There are cases where you can just say, "Okay, I'm in many ways not really interested in this being a long log of events. I'm just interested in this being a durable channel between an invoker and a thing happening." Then, yes, go directly to Restate. Don't worry about the rest. If this is a sequence of events where you say, "They represent this action, but they're also something I want at the same time for analytics, or something that I want to go back to and maybe rebuild some other state from later," then you really want the log semantics that Kafka gives you. Then put it in Kafka and just connect Restate to Kafka and let it pull events from there. It's really up to you. Whatever fits your architecture better. Eric Anderson: Stephan, at the beginning, you talked about how the road with Flink was long. Some people, maybe yourself included, felt a little burned out at some point or another. Is it fun to be working on something new? How are you feeling? Stephan Ewen: It is fun. Something I learned about myself is that I'm much more of a startup, small-company person than I am a corporate person. Navigating the machinery of a company as big as Alibaba is not my best talent, I would say.
But working on these early technologies, thinking about the problems, the users, the use cases, how you fit those together, this is exciting. I like it a lot. We're very early in our journey. We're way earlier than Flink was when we were thinking about these things. Like I said, we actually launched Restate 1.0 a month ago. We have been working on this for a little longer. It's a complex system to build. We open sourced our repositories in December, I think, and we wrote our first blog post that said, "We're working on this," maybe last October or something like that, but now it's really available, for real, to use. Eric Anderson: You mentioned that you repositioned things recently with the 1.0 launch. If folks had looked at Restate in October, they should probably look again to get the new picture. Stephan Ewen: Yeah. I think when we started out, we were taking much more of a closed-world view, and we were really optimizing for the things that happen once you have many, many services on Restate and for the way they interact. That was the main focus. But I think one of the biggest things that we learned is ... Well, most people, when they start out, really have one service that they play with. Maybe two. It's really all about that one service being interacted with by the whole rest of the world. This is what really has to be the stellar experience. Everything else comes later. Everything else still works. But I think we've made big, big headway in making it really easy to take an existing service ... Let's say you have a Spring Boot application in Java, or an Express application in JavaScript. Taking this and converting it to a Restate service is now actually very, very easy. It's one of the biggest things that evolved based on the early user feedback. Because that's how everything starts. It starts with one small insertion point or one small use case. One first service that you convert. And ... Eric Anderson: Stephan, how do people get involved? You mentioned there's a lot of folks poking at Restate. Is there a social channel? What's the way for them to take a poke themselves? Stephan Ewen: Restate is open source on GitHub. You go to GitHub. You find all the links from our homepage, restate.dev. The best way to get involved is ... If you want to drop an idea, open an issue on GitHub, but maybe, even better, join the Discord channel. Say hello. Just leave your idea. There are lots of folks happy to talk about ideas. Experiences. "Hey, we're missing this." "Hey, I would like to write a test in that shape. What's the best way to model this?" "I have this specific use case with these primitives. Am I using them the right way?" And so on. It's very active. It's very friendly. I would say go start there. Eric Anderson: Thank you so much for joining. Yours is a great story, because I feel like I've seen the touch points of progress from the early Flink days, the workflow kind of evolution, and what you're doing now is fantastic. Thank you for sharing it with all of us. Stephan Ewen: Thank you for the kind words, and thank you for the chat. It was a lot of fun. Eric Anderson: You can subscribe to the podcast and check out our community Slack and newsletter at contributor.fyi. If you like the show, please leave a rating and review on Apple Podcasts, Spotify, or wherever you get your podcasts. Until next time, I'm Eric Anderson, and this has been Contributor.