Willy Lulciuc: So we were pretty much in integration hell for a very long time, and we realized that there needed to be a better solution. But not only that, we needed to get key players looking at the spec and helping drive and build out some of these integrations. Eric Anderson: This is Contributor, a podcast telling the stories behind the best open-source projects and the communities that make them. I'm Eric Anderson, and I'm excited to be talking with Willy Lulciuc today, one of the creators of OpenLineage and various other open-source projects we'll get into. Willy, great to meet you. This is the first time we're meeting. Willy Lulciuc: Thanks for having me. Eric Anderson: I feel like, Willy, we're long-lost friends or something. I spent some time in data engineering land and crossed paths with Julien in the past, but we haven't met. Normally, I like getting into the project first, but in this case, I think it would be helpful to hear your story. You've been doing data engineering for a long time. You've been doing this lineage thing for a long time, if I recall. Willy Lulciuc: Yeah, I've cornered myself into data processing. I'm very specialized, and I do data lineage. It was something that started about seven, eight years ago when I joined WeWork, working alongside Julien Le Dem. But it's kind of crazy, I've built this career out of it and been doing it for a long time. Super exciting as well. So yeah, happy to get deeper into it. Eric Anderson: So I didn't realize, I've heard of OpenLineage, but I didn't have the lineage on OpenLineage. It sounds like it all started at WeWork, or was it earlier than that? Willy Lulciuc: The idea started earlier. I just happened to join WeWork, and Julien Le Dem was the lead data architect. Laurent Paris was the CTO of WeWork at the time, and the two of them had worked together in the past at Yahoo. So the idea of solving this data lineage problem was really formed by the two of them. And it wasn't until they both joined forces at WeWork that they said, "We're going to do the data platform." That idea formed into a data model on a whiteboard in a room at WeWork, where we mapped out jobs and data sets and how we were going to version metadata. And I took that and started writing code, and that's how the story happened. Eric Anderson: Yeah, timestamp that for us. It sounded like the other two folks had aspirations here. Were you similarly like, "I want to solve lineage"? Willy Lulciuc: That's a great question. I was out in New York. I worked for a few startups. One was Canary, which was IoT. So I was building out streaming platforms to ingest IoT data, sensor data, and doing monitoring on all the different sensors that were coming from the device. But it wasn't until I joined BounceX that I saw similar problems in stream processing. They were building behavioral marketing software. And the problem we had internally was one I couldn't articulate at the time, because I didn't have the right vocabulary, but it was data governance: understanding how different scripts randomly impact your dashboards, because sometimes a script doesn't work and you're like, "Okay, why don't we have this observability?" So it wasn't really until I went out to SF, thinking about moving out there, that I got connected with Julien Le Dem. And the way he spoke about how to solve these problems, the vocabulary he was using, and this new concept called data lineage is where my interest was really piqued.
Eric Anderson: Yeah, there was a time as an investor we would follow some of these big scale-ups, like Uber, Lyft, Stripe. They would churn out open-source projects and really interesting infrastructure. I don't know that I have WeWork at the top of my list there, but I think it is kind of in that universe. Have I misunderstood, or is this kind of unique, that WeWork was pioneering some open-source? Willy Lulciuc: They were. I think our team was fairly fortunate because of how much we wanted to do with our data platform, like ingesting sensor data from WeWork spaces; that was really what we were doing internally. There was a huge migration from Redshift to Snowflake, and we were early adopters of dbt. So there was a lot of core tech, and the tech that was built at WeWork could be a separate discussion or podcast altogether. And if you follow the lineage of the engineers who were there, some of them have actually gone on to build some great startups. But internally, we had really big visions for the type of tech platform we wanted to build. And we were just in an area where we knew engineers at LinkedIn, we knew engineers at Slack, and we asked them, "Hey, this data lineage problem, do you have it solved?" And they were like, "No, but there needs to be a solution." And really, that's when I wrote the internal doc that described the metadata layer, which eventually became Marquez. So we eventually named the open-source project Marquez, and some background there: it was actually named by Stitch Fix. There was a data team internally at Stitch Fix that named the project after Gabriel García Márquez, the author of One Hundred Years of Solitude. Very similar to Kafka, which was also named after an author. And that's how the name came about. But yeah, we crowdsourced our solution and our data model, and there were a lot of key players who said, "You're on the right track." I'm happy to go deep into how some of those discussions went, or into the model, but there was a lot of traction that we were seeing. Eric Anderson: Just to finish out the history: so Marquez was happening inside WeWork. And then when you left WeWork, or at some point, OpenLineage forked off? Willy Lulciuc: Oh yeah, yeah, it's a great question. So, we prototyped, or proved out, the concept of Marquez internally for about two years. And around that two-year mark is when WeWork was trying to IPO, and we realized, "Hey, that might not happen," and there was just a lot of internal stuff going on. Engineers started looking for different roles, and that's when we started Datakin, which was really the startup around Marquez, an operational data lineage startup. We donated the open-source project to the Linux Foundation, specifically LF AI & Data. And shortly after that, we started building a POC, which eventually became Datakin. And really, if you look at the OpenLineage spec, all it is is a JSON spec with entities like runs, jobs, inputs, and outputs. If you compare the Marquez model with the OpenLineage spec, the spec is actually just a subset of that model. With Marquez, you have jobs, data sets, and runs, but you also have data set versions and job versions, and we collect a lot of telemetry around your general runtime. So we looked at Marquez, and we had this issue with Datakin where we wanted to integrate with open-source frameworks like Airflow and Spark. All of them needed their own custom integration, and that really became difficult.
You always had the problem where we were able to collect all this lineage metadata from Airflow, but then they'd bump the version and it would break. So, we were pretty much in integration hell for a very long time, and we realized that there needed to be a better solution. But not only that, we needed to get key players looking at the spec and helping drive and build out some of these integrations. Eric Anderson: Presumably, you'd want the Spark team to own or inherit or claim the integration and maintain it. Willy Lulciuc: That's correct. So, Ryan Blue, we got early feedback from him; he's also a TSC member of OpenLineage. So very early days, he believed in what we were doing. And then I spent some time at Apple. I was there for about two years, building out the Spark integration for the AI/ML infrastructure team, which was building the data sets the Siri team used to train their model. So, that integration was built out over quite some time. But ideally, yes, if you think about databases, if you think about integrations with schedulers, they all should be emitting lineage metadata, and hopefully using OpenLineage. But over the years, we've seen some really, really great adoption. Eric Anderson: Okay, so I think I'm understanding the project more now. In some ways, you build a spec, and then you also, in some cases, build these integrations or SDKs that emit telemetry in the form of the spec. And then, presumably, maybe you build some managed collectors, some services that can be a receiver for the spec. Is that right? Willy Lulciuc: That's exactly right. If you take a step back and look at what's going on, the events really represent a snapshot of what is happening as your pipelines are processing data. So, there's a start event that says, "Hey, I'm reading from this table." There are a few run states in OpenLineage: you have start, running, abort, and complete. All those things are very, very common across different frameworks, as well as for streaming platforms. And we capture the SQL, do some SQL parsing, and look at what the input and output tables are. So all that information is sent to a backend, which needs to handle these events. Sometimes they're ordered; sometimes they're out of order; sometimes they come at really high scale. So, you do need some sort of consumer, and Marquez was the reference implementation for OpenLineage. And now Google has support for OpenLineage, Microsoft does, and a number of others have built out backends as well.
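To make the shape of those events concrete, here is a minimal sketch of the kind of START event described above, built as a plain JSON payload and posted to a Marquez-style backend. The job and dataset names are hypothetical, the endpoint assumes a local Marquez instance, and real integrations attach much richer metadata (facets) than this.

```python
# A minimal OpenLineage START event: "this run of this job is reading from
# one table and writing to another." Names and URLs are illustrative only.
import uuid
from datetime import datetime, timezone

import requests  # assumes `pip install requests`

event = {
    "eventType": "START",  # other run states include RUNNING, COMPLETE, ABORT, FAIL
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "run": {"runId": str(uuid.uuid4())},  # ties START/COMPLETE events to one run
    "job": {"namespace": "my-team", "name": "daily_orders_etl"},  # hypothetical job
    "inputs": [{"namespace": "snowflake://acme", "name": "raw.orders"}],
    "outputs": [{"namespace": "snowflake://acme", "name": "analytics.daily_orders"}],
    "producer": "https://example.com/my-custom-integration",
    "schemaURL": "https://openlineage.io/spec/1-0-5/OpenLineage.json",
}

# Marquez, the reference implementation, exposes a lineage ingestion endpoint;
# the URL below assumes Marquez is running locally on its default port.
requests.post("http://localhost:5000/api/v1/lineage", json=event, timeout=10)
```

A matching COMPLETE (or failure) event with the same runId closes out the run, which is what lets a backend reconstruct run history and the lineage graph over time.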
Eric Anderson: Okay, so back to the history: you're at Datakin with the core team out of WeWork for Marquez, and you're building a company to commercialize Marquez, now as OpenLineage. Is that right? Willy Lulciuc: That was the thing. Really, Marquez has a Postgres instance and a REST API for ingesting the lineage metadata. And at the same time, you can access the run history and look at the data sets. So, it's a lightweight data catalog. But with the productionization of OpenLineage, I would say we realized success could only actually happen if people bought into our vision. Which, again, is an odd thing: you're building a startup, but you need these integrations that are really, really important. So, we ended up funding a lot of the integrations that eventually got built out. Eric Anderson: Got it. And what would you like to say about Datakin? So Datakin eventually is acquired, is that right, into yet another data infrastructure company? Willy Lulciuc: Yeah, that's right. That's right. So, I was a founding engineer. We scaled to about six or eight engineers, and around the two-year mark, we got acquired by Astronomer, which was interesting. So that was a fun ride. They looked at OpenLineage and saw a strategic advantage around Airflow deployments. You have customers that are using Astronomer, deploying their infrastructure on Astronomer, but there are some customers who are just running on MWAA. So how do you start building out an observability product specifically for Airflow? We folded our product in, and it became Astro Observe, which collects all these OpenLineage events across deployments and shows a full end-to-end lineage graph from the different operators that can be used within Airflow. So we productionized it and got it to a point where, now, I think Astronomer sees some pretty high usage of the product. Eric Anderson: And bring us to today. So OpenLineage continues, and you're at it again? Willy Lulciuc: Yeah, I'm at it again, though I wouldn't say it's Datakin 2.0. There's a lot that we're building on top of OpenLineage. Now, I'm a co-founder and CEO of Oleander, and we're building your always-on, on-call data engineer. The way we see it, the data platform is a graph. You have nodes and edges, and those edges represent relationships. A job run will produce a dataset version, but it will also have an input dataset version. So there's this rich set of relationships that we're able to build on the backend, using an LLM to give the full context and understand why your pipeline is failing, or why the runtime is slowly creeping up. We're building an entire product around doing automatic root cause analysis, hopefully getting down to the core of why your pipeline is failing. Eric Anderson: So, when you were describing a little bit of OpenLineage earlier, you talked about emitting events and then capturing those. And it sounded a little bit to me like a data-specific form of OpenTelemetry, and I believe that you're utilizing some OpenTelemetry now. Is that right? Willy Lulciuc: Spot on. Yeah, absolutely. You've done your research. I would say OpenLineage is analogous to OpenTelemetry, but for data flow: understanding how your data enters your data platform, how it's being processed, what derived data sets are coming from those raw data sets, and being able to trace that. But one of the key things is we're now joining it with OpenTelemetry data. To be specific, at the moment, if you're running Spark, we're analyzing your Spark plan and understanding the lineage metadata, but at the same time joining it with the traces and spans of your Spark job. That becomes a very, very powerful way to understand, one, cost optimizations. But if you look at the Spark plan, you're also going to start seeing that you might have a skewed join: maybe one part of your Spark job is processing way too much data in one partition, and then you have the other processes waiting. So there are a lot of key things we're able to start doing, as we also have code-aware context in your Spark job.
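The joining of lineage with traces described here hinges on a shared correlation ID. Below is a rough sketch of the general idea, assuming the OpenTelemetry Python SDK: stamp each span with the OpenLineage runId so a backend can join span data against lineage events. The attribute key is invented for illustration; this is not a description of Oleander's actual mechanism.

```python
# Sketch: correlate OpenTelemetry spans with OpenLineage events by stamping
# spans with the run's OpenLineage runId. Assumes `pip install opentelemetry-sdk`;
# the attribute key "openlineage.run_id" is a hypothetical convention.
import uuid

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("spark.pipeline")

# The same runId that goes into the OpenLineage START/COMPLETE events.
run_id = str(uuid.uuid4())

with tracer.start_as_current_span("daily_orders_etl") as span:
    span.set_attribute("openlineage.run_id", run_id)  # the join key
    # ... execute the job stage here; any timing or error recorded on this
    # span can now be joined with the run's lineage graph on the backend.
```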
Eric Anderson: I can imagine a scenario where a Spark job runs unusually long, and that has downstream implications on other jobs. So OpenLineage is detecting the failures associated with your long-running Spark job, and then OpenTelemetry is telling you why your Spark job is running too long. And together, you get this root cause and full impact diagnosis. Willy Lulciuc: That's exactly correct. And sometimes it's not that your pipeline is failing; sometimes it's just taking too long. And with runtimes, there are seasonal runtimes. If you look at your pipeline running on a Tuesday, it might take a certain average time, but on a Wednesday it might look different. So you do have to look at the previous week, and that's a lot of what we're building into the product. But to your point, if your pipeline is taking too long, that can impact billing, and that's a really, really big one. Working with incomplete data, you might be overcharging or undercharging. So you want to be able to short-circuit that pipeline, or at least notify the team that they might be working with late data. Eric Anderson: Switching gears a little bit. So, Willy, you've got, I don't know, a decade or something in lineage and data processing. AI emerges, and you've got an opportunity to start a company. And I guess you could look within the data pipeline and ask, "Does AI play a role in the processing?" And I'm guessing the answer is maybe not, but you see an opportunity for AI to play a role in the interaction layer between people and the processing. Or how do you see it? Willy Lulciuc: Yeah, I see it in two ways. If you look at any data observability company, they're just slapping AI on top, which is great. I think LLMs are very good at finding patterns. So, if you give them the right context, they can look at your logs. From OTel, we get these events and these logs, and we also provide a correlation ID. One thing we built out is that all the OTel data coming from Spark can be correlated with OpenLineage events, and LLMs are very, very good at doing a grep and seeing patterns. Where it becomes a bit more difficult is the chatbot approach. You can ask it questions, and we're not necessarily building our own LLM, because the general ones are good enough. But the more interesting part is: how can lineage play a role in training your model? And that's really what we were looking at when starting Oleander. One of the key discussions we had was with Laurent Paris, who's now at Datadog, really high up in the exec hierarchy. He looked at what we were trying to do and asked, "Can we extend OpenLineage to also support training your models?" And that's exactly what we're doing. We recently opened an issue in OpenLineage to extend the spec to support capturing hyperparameters: how do hyperparameters impact your model training? We also look at the environment you trained your model on and what the model version is. The way we look at it, an ML or AI pipeline is really just a data pipeline, but with additional context. Being able to do model artifact data lineage is what we call it, but really it's the same thing. You have the data engineering team on the left and the ML/AI engineers training the model on the right, and the bridge we see is going to be lineage: being able to track the data sets that eventually produce your model version. And that's a key part of the vision of what we're looking to do at Oleander.
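For a sense of what such a spec extension could carry, here is a purely hypothetical sketch of a run facet with hyperparameters, training environment, and model version. OpenLineage facets are namespaced JSON objects attached to runs, jobs, or datasets; the facet name and fields below are invented, since the extension mentioned here is still an open proposal.

```python
# Hypothetical run facet for model training. Facets in OpenLineage carry
# `_producer` and `_schemaURL` fields; everything else here is illustrative.
training_run_facet = {
    "trainingRun": {  # invented facet name
        "_producer": "https://example.com/ml-trainer",
        "_schemaURL": "https://example.com/schemas/TrainingRunFacet.json",
        "hyperparameters": {"learning_rate": 3e-4, "batch_size": 64, "epochs": 10},
        "environment": {"framework": "pytorch", "accelerator": "A100"},
        "modelVersion": "orders-forecast:v12",
    }
}

# Attached under a run, a facet like this would let lineage tie a model
# version back to both the datasets that trained it and the settings used.
```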
Eric Anderson: And can I call it OLIN? I've got OTel and OLIN. Has that caught on yet? Willy Lulciuc: No, absolutely. I'm just so used to saying OpenLineage, but OLIN is the de facto shortened version of that. That's correct. Eric Anderson: Oh, okay. I didn't come up with that. Somebody claimed OLIN before me, huh? Willy Lulciuc: I heard it in passing, but now we can make it a bit more official. But yeah, it's OLIN. Eric Anderson: If it's not on the record, I think it happened here, Willy. Willy Lulciuc: No, it happened here right now, timestamp it. But yeah, that's the shortened version. Eric Anderson: Who gets really excited about Oleander today? Is there a good fit? For people listening to this show, what sort of attributes would make them say, "This is the project for me"? Willy Lulciuc: Yeah, I would say if you're looking at optimizing your Spark jobs and you're also using Iceberg. One of the key things we're doing is joining the Spark job with all the telemetry data we capture from your Iceberg queries and tables. We want to be able to do cost optimization. That's a key thing we're looking at at the moment, and we're in discussion with Databricks; there are early discussions happening there. So, if you're running a Spark job and paying too much, we can help drive down a bit of that cost and help you understand a breakdown of your Spark applications. A lot of our early design partners have been in the computer vision space. I think the LLMs that are out there are very good at text and very good at summarizing, but when it comes to video, it's very unstructured. How do you start understanding all those little frames and what they're doing? So the computer vision space is really where we've seen a lot of design partnerships, which has also driven the extension of OpenLineage to support model training. Eric Anderson: I spent some time working with your counterpart, Julien, on some Apache Foundation work; my group at Google had donated Apache Beam, the Dataflow SDK, to the Apache Foundation. They're a unique organization. I didn't appreciate just how idiosyncratic they are. They have this thing called the Apache Way, and they defend it religiously, which I think is the right term for it. I don't know if you've spent much time in Apache and have an appreciation for it, but you've spent a lot of time in the Linux Foundation. You're going to have two projects there. What's the Linux Foundation like? Willy Lulciuc: That's a great question. The story of how Marquez actually got into the Linux Foundation, or LF AI & Data: we thought we were going to donate it to Apache, but Julien suggested, "Hey, there's a new branch of the Linux Foundation. They're doing a lot of really cool stuff around supporting data open-source projects." And it ended up being very, very frictionless. One of the things with Apache is that you do have to move your code over. It becomes incubating, and there's this whole voting process, which absolutely makes sense, and then eventually you have to move your code over to become a top-level project. With the Linux Foundation, it was: you give this little pitch, "Hey, these are the metrics of our open-source project, this is the growth, here's the adoption, here's how many committers we have." It's a presentation of about 10 or 15 minutes in front of the board. And if the growth numbers look great, then they give you a stamp of approval. But one of the key things was you didn't have to move your code over. They really provided support in every way you could imagine, Slack and so forth. But a lot of it was very much hands-off. It's like, "Hey, you're doing things well, let us not add any friction."
And at the time, I think we were project four or five in LF AI & Data, and it made a lot of sense six or eight months later to introduce OpenLineage into that foundation as well. And now, one of the key things is that Databricks Unity Catalog is part of LF AI & Data, so you start seeing this snowball effect. Apache is so foundational that it became this badge of honor: "Oh, I'm an open-source engineer, and now we've made it as a top-level project." And by design, you want to make that process a little difficult. With LF AI & Data, it's still a similar process, but you don't have to move your code around, which was kind of nice. Eric Anderson: I don't think I appreciated that. Yeah, you're right. You usually get the "Apache" in Apache Kafka or something, and I guess, in a sense, you have your own mini foundation for your project that's a subset of the larger foundation, and you get your own branding and all that. Willy Lulciuc: The cool thing is, once you become a top-level project in LF AI & Data... Well, there are different steps. Before you graduate, there are different stages. In order to reach graduation, you have to show metrics of growth. If the numbers are there, you usually get approval in front of the board. But once that graduation happens, you have voting rights on any other project that wants to make its way into the foundation, which is great. Sometimes they're a bit too early, and I don't make them all, but you can also have someone substitute for you and do the voting. But it's been great. It's been great. Eric Anderson: Cool. There are other metadata projects out there. DataHub is one I've heard of, which sounds like it had a similar backstory; it came out of LinkedIn or something. Is that relevant to OLIN and the work you're doing? How is it similar or different? Willy Lulciuc: Yeah, it's similar in a way. DataHub supports ingesting OpenLineage events, and you're able to see the lineage graph, but it's static. In our case, we start collecting the runtime information at the time the pipeline is running. So we're focused a lot on the data engineer, while DataHub and more of those data catalogs are focused more on analytics. We're looking to say, "Okay, you're using Sentry, you're using Datadog, but there's really no strong vertical solution that provides a data engineering-focused observability platform." And that's really where we come in. We're going to look at your code base. We're going to start being more code-aware. We're going to start looking at your data infra, really understanding the runtime information and correlating that with OLIN. And at the same time, we provide monitoring that's very much driven by the LLMs we're using to analyze your events. So we're a layer or two below. We do have a lightweight catalog, because you're going to have to catalog your data sets in Snowflake and Redshift, but we're looking very much at the low-level runtime of your pipeline. Eric Anderson: I get the impression that a lot of people aspire to use Iceberg, but they have a managed data warehouse at the moment. And it's expensive, but it works. And I suspect there's some fear that if I go to Iceberg, now I'm owning all this mess. Does Oleander make that move a little bit more comfortable, like there's less stuff to manage? Willy Lulciuc: No, not really.
There's a REST catalog, obviously, with Iceberg, and AWS has their own; I'm sure Google and the others do as well, and you can host your own. But Iceberg is still fairly new. It's become an industry standard, but really only as of six or eight months ago, because of a huge acquisition that happened: Databricks acquiring Tabular. Oleander won't make it any easier to adopt Iceberg, but it will give you scan and commit metrics. So if you want to understand missing pushdown projections, or just the amount of data you're processing, we're able to surface that: rows written, bytes written, correlating that to a Spark run, analyzing the Spark plan, and seeing how that impacts how you're processing data in general. I don't know of another solution that can provide such a clear understanding of how you're processing your data, but we're not going to be a catalog. We're not going to provide a REST endpoint, at least not at the moment, where you can onboard and start using Iceberg. This is sort of a hot take, but if you're not working with terabytes and terabytes worth of data, I don't know if you necessarily need Iceberg. Sometimes just a Postgres database, or storing your data in ClickHouse or something else, can be enough. But it has become this industry standard, and people don't want to feel like they're not adopting the new tech. I don't know. I think Iceberg is great in what it's able to do, but at the same time, there's a lot of lift, and the spec itself is evolving. And you have Databricks backing it, so naturally, I'm sure it's going to be one of those open table formats that will be around for a very long time, but there are also a few others. Eric Anderson: Yeah, so I hear you on the small data thing. You could just put it in Postgres or a DuckDB, and then on the big end, you go to Iceberg. Is there something in between? Do you scale out of Postgres and into something before Iceberg, or what would you say about the middle area? Willy Lulciuc: Yeah, so DuckLake has come out, and that's actually something we're looking at very, very heavily. You're able to use DuckLake, which you can just point at an S3 bucket, with the tables in an Iceberg-compatible format. But you were asking if there's anything in between. Eric Anderson: Or does there even need to be? Or once you scale out of Postgres, are you probably in Iceberg territory? Is that what you'd say? Willy Lulciuc: Yeah, that could be the case. DuckDB can do quite a bit, and obviously with DuckLake, if you do store... And you start seeing this as well: a lot of these warehouses are just saying, "Oh, bring your own bucket." And that's really the same concept we're taking with our telemetry lake. One of the things our team is working on is providing a SQL console that allows you to query telemetry data. You can bring as much as you'd like, and it will scale very well. But at some point, we're looking at using Spark to handle some of the more expensive queries. Just like with AI, there's a lot of evolving tech happening in the data space. I think it was a lot slower before. You had all these open-source products that came out of Uber and Lyft, where there were just some tough problems at scale. And then startups formed around them. Mark Grover from Amundsen started his own startup around data catalogs, and then you start seeing DataHub. And now the data industry has so many options that you don't know what to use anymore.
I'm in New York now, and I was at a meetup for Iceberg and Spark. There are now, I think, three open table formats, and soon there's going to be an open table format for the open table formats. So in the end, you have to look at what your scale is and how big your team is. If you start seeing exponential growth in how you're processing and storing data, you can start looking at these tools. But I think everyone's just looking for guidance; there are just too many tools to adopt. Eric Anderson: How AI-maximalist are you, Willy? You've been spending some time bringing AI to people, addressing needs. Are we in the early innings of full automation, where there won't be data engineers when our kids grow up? Or are we seeing most of what to expect, and Oleander is the future, where you'll have this assistant to a team of data engineers? Willy Lulciuc: That's a really great question, and one that I've thought a lot about. The core thing required to train these models is data. Data engineers are always going to be there as practitioners. What it comes down to is this: you can code fairly well and spit out code with Cursor or whatever you're using for APIs. But when it comes to data, it's a bit more critical. One line can process 10x more data than before, so it's less forgiving than just generating endpoints for APIs, where you can say, "Okay, that query's not performant. Let me understand why." When you start generating code for data pipelines, there are a lot more guardrails that need to be in place. So with Oleander, we want to be able to say, "Here's the SQL that executed within your pipeline. These are the impacts it had downstream." Maybe one team downstream that's consuming your data is processing more than it should be because you forgot a filter. That's really where Oleander comes in: it surfaces that and does the root cause analysis, but you also start interacting with our AI agents. Eric Anderson: I spoke recently to a data scientist at Google, and I showed him this foundation model for data science that would automate the work he normally does of generating scripts. He's like, "I don't even have that... Writing the notebook script is not even the hard part anymore. Claude Code already automated that. The hard part is negotiating with my counterparts on what data we need, what data we're not looking at, and what's working or not working in our data set. Generating the script is easy now." Willy Lulciuc: Yeah, I think a lot of the scripting that engineers, or data engineers, did can now be generated. I remember doing a lot of batch scripting; now, most of it is generated. And not to say I was an expert, but when you look at it at a high level, when you're going to start processing data or consuming it, you still need humans in the loop, and you hear that a lot. You still need people to communicate: what data am I looking at? I could ask about it, but there's this whole context that someone might have in their head, like, "Oh yeah, that data refreshes every hour," which you could take a look at by just running a query. But there is still that understanding that needs to happen. And data catalogs are looking to solve that, allowing engineers or analysts to annotate their data. Just like with software engineering, you need to define what the spec looks like and what the outcomes are. It's very similar in the data engineering space. What is the data I'm looking at? What is the schema? Who owns it?
Are there any data quality checks in place? One thing we had at WeWork was bronze, silver, and gold data sets. Gold meant, "Okay, you have a complete understanding of ownership, and you understand the schema." And as you go lower down through the metals, there's lacking documentation, or, "We don't know how often this data set gets refreshed, so be careful, go in at your own risk," that type of thing. But communication, when I talk to different customers or just people in the space, those are still the things that are critical to understanding how to consume data, and its quality as well. Eric Anderson: Good. Willy, that's everything I wanted to discuss. For anyone interested in Oleander or OpenLineage or OLIN, where do they go from here? Willy Lulciuc: Yeah. For Oleander, just go to Oleander.dev. Check us out. Obviously, book a demo with me; I'm happy to walk through what we're building. More specifically, for the OLIN community, we have a Slack channel, so check us out there. It's a really great community, and we have a lot of investment in different integrations. So we're always accepting contributions and growing our community. Eric Anderson: You can subscribe to the podcast and check out our community Slack and newsletter at contributor.fyi. If you like the show, please leave a rating and review on Apple Podcasts, Spotify, or wherever you get your podcasts. Until next time, I'm Eric Anderson, and this has been Contributor.