Gideon Mendels: Almost 99% of the time when it comes to dev tools, open source wins, right? Yes, you have some proprietary software companies that are doing really well, but in most of the categories out there, at least in our space, it all comes from open source.

Eric Anderson: This is Contributor, a podcast telling the stories behind the best open source projects and the communities that make them. I'm Eric Anderson. We have Gideon Mendels of Opik on the show today. Gideon, thanks for joining us.

Gideon Mendels: Yeah, thanks for having me. Super excited to chat more today.

Eric Anderson: So full disclosure, Gideon is one of our investments. He's a portfolio founder, and an excellent one. Maybe just to jump into it, how did Opik come about?

Gideon Mendels: Yeah. So perhaps it's helpful if I give a little bit of background, because we've been on this journey for seven years as a company, and as an individual for ... I don't know, 12. So I'm the CEO and co-founder of Comet, the company behind Opik. And when we started the company, our focus was on machine learning experiment tracking, and I can share a little bit more about my background, but that's essentially based on what I'd seen as a data scientist, actually training language models before they were cool. And over the years the platform expanded, and then the industry was rebranded as MLOps, but we had been doing that for a couple of years before that. So we were very heavily focused on teams building machine learning models, both language ones but also vision, financial services and so on. We power Netflix, Uber, Etsy, Shopify, Zappos, Stability AI, and a bunch of great AI teams. And about a year and a half ago, roughly the same time ChatGPT came out, we started getting requests from our customers. They were saying, "Hey, we love Comet. We use this for all these things. And now we have this new use case. We're no longer training a model for it. We're trying to use the OpenAI and Anthropic APIs, and we need help around this." Both general guidance, how do other people do this, but also asking for things that are very similar to experiment tracking, except they're no longer iterating on the hyperparameters, the network configuration, the dataset. They're iterating on other pieces, right? Like the prompt and the RAG parameters, chunking, all that stuff. So we started building some of that functionality into the core product, the experiment tracking product: prompt versioning, prompt tracking, that kind of stuff. And transparently, this was a nice surprise. I wasn't a big believer. We built it, I thought it might be cool, but it just blew up, right? So many of our customers started using it, and transparently, it wasn't very impressive functionality back then, a year and a half ago. And through that, we just got so much closer to these new use cases, and we realized, A), that there's a huge opportunity to help our customers, and B), that it's actually really hard to build a GenAI app successfully. You can get a POC running in three hours, even faster these days, but it's really hard to get it to the level where you can put it in production. So with that in mind, we had the discussion about whether we should just keep building it into the platform. One thing that was interesting is that we started seeing a lot more software engineers using this versus our core users, data scientists and MLOps engineers, and eventually we decided, hey, let's build this as a separate product.
It's still available as part of the platform, but let's build it as a standalone product so we're not bound by technical debt and we can move much faster. We really wanted to do it open source so people can easily run it locally. And then, yeah, we launched Opik about three months ago. So that's the origin story.

Eric Anderson: There are lots of words, nouns, people use to describe these tools. Do you like LLMOps? What do you like to use?

Gideon Mendels: We call it an open source, end-to-end LLM evaluation platform. It's similar to the MLOps world, where the terms are a little bit confusing. So as long as people understand what we do or not ... People talk about LLM observability, LLM evaluation. So I think it's all over the place, which is normal for an industry that basically came out of nowhere a year and a half ago. So yeah, we can call it LLMOps, I guess.

Eric Anderson: I'm very familiar with the tracing concept. I think it's more popular now to have a waterfall of LLM calls, right? And the prompts and the context accumulate over time. I can see using this for development. Is there a production monitoring use case as well?

Gideon Mendels: Yeah, yeah, absolutely. It's truly end-to-end, right? So I think it actually has three pieces, and there's one piece that we haven't seen anyone else cover, but speaking to our customers that actually made it to production, it was clear to us how important it is: the CI/CD phase, the testing phase. Right? So at a high level, we can talk about the steps. You talked about development, but the best thing we have as software engineers, or as companies building software, is testing. Right? It's so important. As a software engineer, you push a piece of code, and that's what gives you confidence that you didn't break a million things. And I might be dating myself, but I used to write code prior to that time. We would just push files to FTP servers with no unit tests. But that notion doesn't work for these GenAI LLM apps. You can't write a standard unit test, because if you do string matching, you might get a different response on the string level that is semantically exactly the same. So as a builder you're fine with that, but your test will fail. So Opik actually handles that. We built a really cool extension to pytest, so you can run these tests on the semantic level versus just string matching. And then yes, it also does everything in production. It's really about closing that loop, right? So observability and evaluations in production, and really taking what you're seeing in production, usually doing some annotation of your data, and going back to the development phase.

Eric Anderson: So I rushed us through your introduction. You said you did models before it was cool. Tell us what that was like. What were you doing exactly, and are you surprised where we've ended up?

Gideon Mendels: That's a great question. So originally I started as a software engineer, but then I shifted to working on ML about 12 years ago, first as a grad student. And it's funny, because I worked on language models. ML was cool back then; language models were considered definitely not the cool part. So I don't know if you know this, but language models started as a subcomponent of speech recognition systems. In speech recognition, we used to have ... it's now all end-to-end, but it used to be an acoustic model that converts the acoustic waveform to phonemes, a list of phonemes, and then a language model, which converts it to a string of words.
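For readers who want that classic pipeline in symbols: a standard way to write the role of the language model in old-school speech recognition is the noisy-channel factorization, where the acoustic model and the (then small) language model are separate factors. This is textbook background rather than anything Gideon states explicitly here.

```latex
% Classic speech-recognition decoding: choose the word sequence W that best
% explains the audio X; the language model P(W) is its own component.
W^{*} = \arg\max_{W} P(W \mid X)
      = \arg\max_{W} \underbrace{P(X \mid W)}_{\text{acoustic model}}\,
                     \underbrace{P(W)}_{\text{language model}}
```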
I was part of a speech recognition lab, and I worked on that. The language models were not ... It wasn't LLM, it was just LM back then. They weren't large. And then I went to Google and I worked on hate speech detection in YouTube comments. So that was also very much language model driven. There was a big paper at the time, the one billion ... that was the dataset, not the parameters ... language model. And we used essentially the embeddings from that language model to do classification. So it was the wild west, right? Similar to what we're seeing now, a little bit, with GenAI development. In software engineering, we have these amazing practices, both processes like agile and scrum, and tools: databases, testing, monitoring, evaluations, all that kind of stuff. But then I joined this team at Google, and when I was trying to onboard onto the project, the first thing I got was an email with a Jupyter Notebook and an email with some test results from three months ago. No way to reproduce it, no way to go back to it. So it's really cool to see, at least on the ML side, how fast the world evolved. Per your question, am I surprised? So our thesis starting Comet was that every software engineer is going to do AI or ML one day. Right? I think we were right about that, but we were essentially wrong on how it plays out. So instead of software engineers training models these days ... Some do, right? But instead of them training models, they're now building on top of LLMs. So I definitely did not foresee that, but it's much better. It's more powerful, and I truly don't think everyone should be training models these days.

Eric Anderson: Yeah. I mean, the old world was we would train a model for a specific use case with specific data, and now we just have these general purpose models that can do 90% of everything pretty well.

Gideon Mendels: Vision and other areas, maybe not as much, but in the NLP space, yeah, absolutely. I'll be careful here, but I'll try to make a prediction. We can go back to this in five years and see. But I think if you look at the world of model training, on one end of the spectrum you're definitely going to still see the simple, really good linear models like logistic regression, linear regression, that kind of stuff. Very good for simple tasks, easy to train, fast to train, easy to productionize, explainable in most cases. On the other extreme of the spectrum, you're definitely going to see foundation model builders training models, autonomous vehicle companies, call it the hardcore ML training. But I think everything in the middle is going to get eaten up by ... I'm simplifying it, but prompt engineering, essentially.

Eric Anderson: And then you're new to open source, right? I mean, you were consuming a ton of it, I'm sure, as we all are. But Comet started out as kind of a normal SaaS proprietary application. What's been the journey to Opik, and tell us about the decision to go open.

Gideon Mendels: So we released, call it, a data visualization library for ML engineers called Kangas about two years ago, open source. But you're right that the core experiment tracking MLOps platform is free for individuals, and there are 100,000 data scientists on it, but it's not open source. When it came to Opik, there were a few variables in that decision, and it was definitely not an easy one. There are a few things that we've seen in the last seven years of not being open source. So first of all, almost 99% of the time when it comes to dev tools, open source wins, right?
Yes, you have some proprietary software companies that are doing really well, but in most of the categories out there, at least in our space, it all comes from open source. So that's one component of the decision. The second one is, I think, the challenge for a lot of open source companies: okay, but if this succeeds, I still have to figure out how to monetize it. And with Opik, having been running Comet for so long and having a strong customer base and user base, we're not looking to monetize Opik as a core revenue driver. Yes, customers are already paying for it, those that don't want to manage their own clusters and want to use the managed version, and that's great, we love that of course. But it solves that other piece of the puzzle. There are some tools in this space that did really well on the open source side, but unfortunately the companies never managed to make revenue out of them, which is not good, because then they stopped maintaining them. So that was it, at a high level. There's also a part of wanting to give back to the community. Like you said, we've consumed so much open source in many different ways. We integrate with so many open source tools. So it was a mix of all of these things. It wasn't a trivial decision, definitely something we debated at length, but I'm very, very happy with where we landed.

Eric Anderson: You've said you're a little surprised at the reaction, the response, the growth of the project. What do you attribute that to? Why do you think people are so excited about Opik?

Gideon Mendels: Yeah, so there are definitely other solutions out there trying to solve similar or even the exact same problem. Right? I think anything in the GenAI space is relatively crowded. But I think the main reason ... So obviously you've got the proprietary, non-open source ones. Let's put those aside, per our previous discussion. When it comes to open source, there are a few things. First of all, and this is the thing I have a severe allergy to, is the fake open source offerings where yes, you can see the code, yes, the license is permissive, but there's no freaking way you're going to be able to run it. And then they call themselves open source. That's really abusing the spirit of open source, in my opinion. And for those that are, let's call it, truly open source, we identified a few things, and I think this is why people are so excited about Opik. One big piece is just scalability. If you want to get these things to production, it needs to support pretty impressive scale. And because we've done model production monitoring, not for LLMs, for so long, we just had so much experience building these systems that we really designed this thing for scale, no pun intended, from the get-go. We just ran a benchmark yesterday against two of the main open source competitors, and we're an order of magnitude faster at ingestion and processing of the data than one of them, and even further ahead of the other. So I think people recognize that when they look at the code. They review it and see a very serious project, designed not just for playing around in a cool GenAI repo, but as something you can truly use. And this is not just true for us: it solves the biggest problem when it comes to getting these GenAI products to production, or to a level that's good enough.

Eric Anderson: And then the product covers a lot of ground. You've got evaluation, monitoring, and I guess tracing is maybe related. Do you see the scope increasing from here or decreasing from here?
How do you think the category shapes up?

Gideon Mendels: Great question. When we started with experiment tracking, we were laser focused: just do one thing, do it better than anyone else. And I still believe in that for many categories. But here, when it comes to LLM evaluations, or LLMOps if you like, the only way I've seen customers succeed in getting to production is by really successfully closing that feedback loop. And it's hard to do when you're assembling a stack out of three or four different tools, because it's the same functionality. Transparently, it's the same functionality. Yes, you need to support production use cases and such, but the first level, like you mentioned, the tracing, the observability: you just want everything in a system of record where you can see it. That's true for development. It's true for CI/CD. It's true for production, obviously at different scales, but that's the basic. And then you want to do human feedback. Just by looking at a couple dozen examples, you'd typically find things that are broken, and in some cases very easy to fix, but it's hard to do when you can't see these things. Then you annotate some of your data, and truly, 20, 30, 50, even 100 examples is great, but even fewer than that helps. So you annotate this stuff. Once you have annotated data, you can finally run experiments, which is something we've been doing for so long: you want to be able to change the prompt and not just vibe check, but actually see what's going on using hard metrics. Not necessarily accuracy or something like that, but LLM-as-a-judge, distance metrics against your dataset, and so on. And then you do it in development. You push it to production. You get new data, new sessions and so on, and you keep going back. So look, it might play out that way, but I don't think this is multiple categories. I definitely think it's the same one.

Eric Anderson: And then maybe we can talk a bit about how you've found some success, in part because of the quality of the product, but how do you mobilize an effort to grow an open source project? Is that a new muscle for your team? What are some of the learnings?

Gideon Mendels: Yeah, I mean, there are definitely newer things for us, mostly around building in the open. The Opik roadmap is fully public on the repo. We're getting contributions, a bunch of feedback. Everything is public. And that was a muscle that we needed to learn how to use, because our engineering team is not necessarily ... Some of them have worked on open source projects, but they're not necessarily used to it. And on, call it, the go-to-market side, growing the project, there are definitely things that are different, but the majority of what we do on that front is similar to what we've developed and built over the years on the non-open-source side of things. So it's having best-in-class integrations and the best documentation. That's not very different. The events changed a little bit, but we're still doing a lot of those, and then just great content. So yes, there are differences, but the high-level programs we're running are not very different, mostly because even though we weren't open source, we were always a developer tool. It's not like you put an ad on a landing page and then a developer pays you $50. It just doesn't work that way with developers. You always have to show the value in a non-markety, non-salesy way. So I would say there are more similarities than differences.

Eric Anderson: So Gideon, what about agents?
That word gets tossed around a lot, and the tool you've built, Opik, is useful prior to agents, if you're just stringing together prompts, prompt engineering more generally. But it seems also particularly useful if you're building agents. Is there anything within your abstraction layer that's specific to agents? Could there be? Is that even a thing?

Gideon Mendels: Yeah, yeah, absolutely. So we already support agentic workflows, and we have some customers building them. Look, it's such a, let's call it, abused term that I'm not even sure what it means these days, but if you're referring to a DAG that has multiple steps, some of them LLM calls, some of them tools, then we have full support for that already, and we have users and customers building such things. We are adding additional functionality, which I think should be released in a few weeks, and again, everything's public: an agent replay capability. It's one thing to see all the steps that happened, but to really debug these things, it's sometimes helpful to just replay the entire run and everything that happened, because it's a DAG, right? It's not necessarily sequential. It can go back to earlier steps and such. So there is an opportunity to do more there. But in reality, despite all the hype around the term, I don't think it's fundamentally different than chains, if that makes sense.

Eric Anderson: Makes sense. Yeah, we had workflows and chains, and now we have agentic workflows and chains, and we call them agents, but it's more or less the same thing. But what about the tooling that developers need to build them? For one, they need something like Opik, but also maybe there are other parts of software development that feel different now.

Gideon Mendels: Yeah, I mean, yes. So as you probably know, there are a bunch of open source libraries trying to target that, LangGraph and others. So there are definitely people trying to do that. The concept of DAGs as execution mechanisms is also not new, Airflow and that kind of thing. It's not that new, I think ... I don't know exactly. I think there's definitely space for these kinds of libraries and tools, and I'm talking about the code-first ones, not the point-and-click ones. There's definitely space for that. But I'm still not sure how much of it ... does it truly do any heavy lifting, or is it just a nice abstraction? Which is still super valuable, a nice abstraction to easily wire these chains or DAGs together. But look, the space is moving so fast, right? Three months from today, we might be talking about something completely different, which is exciting and cool, but it's hard to tell.

Eric Anderson: How prevalent is this? When you're talking to prospects and customers, do you have to qualify people, or is everybody doing some level of prompt engineering?

Gideon Mendels: Yeah, I mean, we definitely have to qualify. I think there's still a large number of teams and organizations out there that are still trying to figure out how they can build with GenAI. That's a huge segment, and I think that's why the GenAI consultants are doing really well these days. But once you go past that stage, assuming you have the right technical team to be able to do this stuff, it goes very quickly. Look, spinning up, let's call it, a RAG using something like LangChain or LlamaIndex: we have less than 30 minutes left in this conversation, and we could probably do it in a quarter of that time. They've made it really easy.
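As a rough illustration of how small that kind of POC really is, here is a minimal RAG sketch using the OpenAI Python client and a naive in-memory cosine-similarity lookup. The documents, model names, and retrieval logic are placeholder assumptions for illustration, not the LangChain or LlamaIndex APIs Gideon mentions.

```python
# Minimal RAG proof-of-concept sketch: embed a few documents, retrieve the
# closest one to the question, and stuff it into the prompt as context.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

DOCS = [
    "Opik is an open source, end-to-end LLM evaluation platform.",
    "Comet started as a machine learning experiment tracking product.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(DOCS)

def answer(question, k=1):
    q = embed([question])[0]
    # Cosine similarity between the question and every document.
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(DOCS[i] for i in np.argsort(scores)[::-1][:k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("What is Opik?"))
```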
So lots of teams get to this phase, the POC phase, but then you ask it a question and it returns the wrong answer. And what do you do next? That's typically where we meet customers, which is awesome. That's what we focus on, and that's where we help. But there's still a lot of noise in the space. Per your previous question about agents, everyone's talking about it, but while we have a handful of customers actually with agents in production, I would say that's a very small percentage of the use cases out there.

Eric Anderson: And then I want to go back to ... you said this interesting prediction. I guess that there will be people that build small models for forecasting or specific use cases, the traditional stuff we've been doing for years. And then there's going to be the foundation model builders, and everyone else is just going to use one or the other. Say more. It's odd that in the era of machine learning, the number of machine learning engineers would go down, if that's what you're suggesting.

Gideon Mendels: Generally speaking, yes. I think the paradigms are converging. Look, even starting three, four years ago, every CS curriculum in the country had a bunch of ML courses in it. So I think, call it, the future software engineers, maybe with a different title, will know how to build these things, because this is not pure software engineering, even if you're using an LLM. Yes, there are software components, but the methodology of creating a dataset and testing against it, that's from the machine learning world. But you don't need to know how to optimize backpropagation to do that. It's completely abstracted away from you. So yes, I think the researchers, the people training these foundation models and self-driving cars, maybe that number won't go down, but I don't think we're going to see a huge increase in those.

Eric Anderson: Which is good for companies who want to do more of this, because there are only so many of them. So data science, if we want to call that the kind of basic ML, continues to exist. And then we have this new large ML AI, I guess AI researchers, that continue to exist.

Gideon Mendels: I saw a paper today, and this is how fast this is moving, so maybe I'm already pushing back on my own prediction, where they tested, I think it was an Anthropic model, with a prompt to do time series forecasting, and there's no model training. It's like, here's the time series, give me point N plus one, or whatever, N plus 10, and it beat every single benchmark. So look, it's one paper, and TBD. But even if it does work that well, doing this kind of stuff with a logistic regression model is so much simpler and faster than making an LLM call. So I do think we'll continue to see those.

Eric Anderson: So you help manage the prompts, but the RAG pipeline, I think, is elsewhere. Also memory. I don't know if agent memory is the right thing to say, but these little agentic functions could persist memory over time. Opik wouldn't be the store for that data, presumably.

Gideon Mendels: So let's say the majority of the use cases on Opik are RAG today. So yes, the pipeline runs somewhere else. We're not trying to replace vector databases and that kind of stuff. And you're right that we manage the prompt, but there's so much more to it. Successful RAG systems have so much more to them. First of all, when building them, there's so much tuning that goes into the vector database parameters. In many, many cases, it's a combination of both semantic search in a vector database and classic BM25, Elasticsearch-style retrieval.
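For the hybrid setup Gideon describes, keyword search plus vector search, one common way to merge the two result lists is reciprocal rank fusion. This small sketch assumes you already have two ranked lists of document IDs from whatever search backends you use; nothing here is specific to Opik or any particular database.

```python
# Reciprocal rank fusion: combine rankings from a BM25/keyword search and a
# vector/semantic search into a single fused ordering.
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Each ranked list is an ordered list of document ids, best first."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)  # higher rank -> larger share
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc3", "doc1", "doc7"]    # e.g. from an Elasticsearch/BM25 query
vector_hits = ["doc1", "doc9", "doc3"]  # e.g. from a vector database query
print(reciprocal_rank_fusion([bm25_hits, vector_hits]))  # fused ordering
```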
So there are so many moving pieces in those systems that actually made it to production. And then often there's a combination of more than one LLM, especially when it comes to evaluation. Opik adds value and integrates with every piece of this. When it comes to the agent side, all I can say is we're staying very close, trying to understand and see successful use cases and what people are building with it. And then, based on that, we'll see if we can add value, and whether it makes sense for us from a product and business perspective. But like I said in the beginning, if you're on Twitter and LinkedIn, it seems like everyone has agentic systems in production. For those listening who feel like they're left behind: the number is very, very small.

Eric Anderson: And I'm also curious, I think there's this pipe dream we have that these agents will learn, that they might get better at what they do over time. You talked about how you want this feedback loop. Is that feedback loop for the purposes of self-improvement, or is it really just about knowing you're getting the outcomes you want?

Gideon Mendels: Yeah, yeah, so absolutely. So this is an area, again, on the public roadmap, that we're very interested in. When you say self-improvement, generally speaking there are a few ways you can get it. You can get it in pre-training. You can get it in RLHF post-training. You can get it through prompt engineering. And then there are some in-between options, but those are the main three. Pre-training, I don't think it makes sense, except in very rare use cases, for companies to do their own. Fine-tuning, I'm going to say something that might surprise a lot of people: I have yet to meet the team that fine-tuned an LLM and got better results. Again, that might change, and I might not have met the right teams. But when it comes to, call it, prompt engineering, right now that process is manual. So even with Opik today, you can run your automatic evaluation. You see how you do on answer relevance. You see how you do on accuracy, if you have it. Hallucination detection, all that kind of stuff. But you still have to go and manually change your prompt or the configuration, the context, all of that. But if we have the ability to systematically and automatically score how well we're doing, and we know what space we're searching in, RAG parameters, prompt, all those kinds of things, can we start bringing in techniques we're familiar with from the pure ML world, like Bayesian hyperparameter optimization and such, to automatically improve these things? A simple example: you run your evaluation. Let's say you have one metric and you get 0.75. You take the five responses that performed the worst with that prompt, you give them to an LLM and say, "Hey, this was the test score. These are the five worst responses. How would you change this prompt?" Then you run it again. So there's a risk of overfitting and all the same stuff you have in ML, but that's something we're very excited about. There's some research work going on from our research team. But yeah, I don't think humans need to type these things or tune these things exclusively.

Eric Anderson: Along these lines, I think there's this ideal scenario where, if you have really good evals, a new model comes out and you just slot it in and know whether it's better and whether you should run with it. Are people there? Are they just hot-swapping models and choosing the best one pretty easily?

Gideon Mendels: Yeah. Most people stick to their LLM provider.
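To make the prompt-improvement loop Gideon sketches above concrete, here is a rough Python sketch: score a prompt against an annotated dataset with an LLM-as-a-judge metric, collect the worst responses, and ask a model to propose a revised prompt. The judge prompt, model names, and dataset shape are illustrative assumptions, not Opik's actual API.

```python
# Sketch of an automatic prompt-improvement loop: evaluate, find the worst
# cases, ask an LLM to rewrite the prompt, then re-run the evaluation.
from openai import OpenAI

client = OpenAI()

def generate(prompt, question):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder "production" model
        messages=[{"role": "system", "content": prompt},
                  {"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def judge_score(question, answer, reference):
    """LLM-as-a-judge: assumes the judge replies with a bare 0-1 number."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder "premium" judge model
        messages=[{"role": "user", "content":
            f"Question: {question}\nReference: {reference}\nAnswer: {answer}\n"
            "Reply with only a score between 0 and 1 for answer quality."}],
    )
    return float(resp.choices[0].message.content.strip())

def improve_prompt(prompt, dataset, n_worst=5):
    # dataset: list of {"question": ..., "reference": ...} annotated examples
    scored = []
    for item in dataset:
        ans = generate(prompt, item["question"])
        scored.append((judge_score(item["question"], ans, item["reference"]),
                       item["question"], ans))
    scored.sort()  # lowest-scoring responses first
    worst = "\n\n".join(f"Q: {q}\nA: {a} (score {s:.2f})"
                        for s, q, a in scored[:n_worst])
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
            f"Current prompt:\n{prompt}\n\nWorst responses:\n{worst}\n\n"
            "Rewrite the prompt to fix these failures. Return only the new prompt."}],
    )
    return resp.choices[0].message.content
```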
Now obviously these LLM providers release new versions and such. And yes, when a new version comes out, they do test it and verify. And while it's usually better, it typically requires you to change a bunch of stuff. The prompt that worked for the old version might not be the optimal prompt for the new version. Another thing, and this is starting to disappear as companies realize that inference costs are going down 90% year over year, which is ... I was just saying, I hope the COGS there make sense, because otherwise we're all in trouble. But I've seen companies say, "Okay, I'll do production inference for my LLM app or GenAI app on a cheaper model, because it's cheaper, it's faster, all that kind of stuff. But I'll do the evaluation with the premium model." I had one customer that did that, and I checked in with them two months ago and asked how that was going. And they were like, "Oh yeah, we just shifted everything to the premium model. It's now cheaper than what the cheaper model was when we started." So when I meet a team and they're talking about cost optimization, assuming they don't have massive, massive scale, I always tell them, "This problem is going to solve itself if you follow the trend." And I think this year we had two of those drops. OpenAI dropped their prices 90% earlier this year, and then Nova, which came out from Amazon at re:Invent, is another 90% cut. So again, I hope the COGS make sense, but it's very exciting. It just unlocks a bunch of use cases.

Eric Anderson: The Amazon announcement is what got me thinking. I mean, Amazon is famous for driving down margin competitively, and if they're willing to just drop the price, there could be a race to the bottom. And maybe that benefits everybody. If they have good evals, they can just keep choosing the next best model. In which case, the model providers aren't very sticky. But up until now, you've seen that people generally stick with their model providers.

Gideon Mendels: I do think it's a commodity already. Every week there's a new one on the leaderboard, and so on. You're a VC, so just a side note here, but a lot of times I meet founders and they're like, "Our competitive advantage is our IP, and we can build this stuff." And then you look at OpenAI, which built the hardest thing, with the most difficult talent to hire, the most difficult compute and infrastructure to set up, and probably one of the most expensive too. And then it took, let's say, Mistral, which is a 30- or 40-person team, 18 months to reach parity. So in terms of competitive advantage, when it comes to your tech or your product, and I'm not talking about network effects and other stuff, I think that's a good lesson. But yeah, one of the areas we haven't talked about is these AI gateways that a lot of companies are now looking into and starting to introduce. It's still early days, and I do think that once these gateways are more established and widespread, we'll see a lot more switching, because then it's truly trivial. Right? You need evaluations to know if it's still good for you, and having a gateway makes it just a matter of: hey, Anthropic released a cheaper model, let's switch to them. But also dynamic switching depending on the session, depending on the use case. Some requests go to this LLM, some go to that LLM. We're not there yet, at least not as widespread, but I do think it's coming.
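A toy sketch of the "dynamic switching" idea behind those gateways: route each request to a model based on a simple per-session policy. The model names, the MODEL_TABLE, and the high_stakes flag are placeholder assumptions; real gateways also factor in cost, latency, and evaluation results.

```python
# Minimal model-routing sketch: pick a cheap model by default and a premium
# model for flagged sessions, then call it through one client interface.
from openai import OpenAI

client = OpenAI()

MODEL_TABLE = {
    "cheap": "gpt-4o-mini",  # placeholder inexpensive model
    "premium": "gpt-4o",     # placeholder premium model
}

def route(session):
    """Very simple routing policy based on session metadata."""
    return MODEL_TABLE["premium" if session.get("high_stakes") else "cheap"]

def complete(session, messages):
    model = route(session)
    resp = client.chat.completions.create(model=model, messages=messages)
    return model, resp.choices[0].message.content

model, text = complete(
    {"high_stakes": False},
    [{"role": "user", "content": "Summarize our refund policy."}],
)
print(model, text)
```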
Eric Anderson: And that would be good for Opik, I assume, given you need heavy reliance on your evals in order to do that well.

Gideon Mendels: I don't think anyone can succeed in building a GenAI app without good evals. Obviously there are multiple ways to solve this, and this is precisely the focus of Opik, but it's just the only way. It's so hard to get these things working well, much harder than it may seem. And we're trying to take some of these methodologies around experimentation and overfitting, how you test that you're doing well, from the ML world, and introduce them in a super trivial way to the people building these apps, which these days is a lot more software engineers.

Eric Anderson: Gideon, I'm running out of questions. Anything you wanted to cover that we haven't covered?

Gideon Mendels: Well, I guess I'm curious. You've obviously been in venture for a long time. I feel like this space is essentially on steroids. It's not just that there's a lot of money and excitement about it; everything is happening at an extremely fast speed. We talked about commoditization of LLMs. How do you guys think about it, and what does it mean for venture to invest in this thing, where it just behaves so much differently than, I think, every other software category?

Eric Anderson: Yeah. One interesting thing is that it means we have expectations around growth rates, benchmarks. I think we've focused on the fact that things can grow a whole lot. You can get commoditized quickly, and maybe that means your growth goes away, but it also means you can grow incredibly fast. Bolt.new, I think, went to 20 million in a couple months or something, and Together reached a hundred million pretty quickly. So you almost can't use the rubrics you used before that said, "Oh yeah, the triple, triple, double, double." That's out the window. You need to find something that could go from zero to the moon pretty quick, because the alternative ... by the time you reach 10 million, it feels like it's been three years and the world's changed. And so I think maybe the easiest way to play it is just to try and catch fire. The safest way is to just grow incredibly fast. But the other thing I'm thinking a lot about ... We used to focus on, and you mentioned this already, a good investment having intellectual property or moats, a great team, and a strong brand, or some combination of those. And today, I don't feel like investors talk about moats anymore. It's just, "They're a great team and it's a big market." And it's hard to say what makes a persistent business anymore, because we've seen the holding period is a lot longer. IPOs aren't happening. Companies now need to get to 300 million in ARR to go public. That can be 10 years or more. And people go through a whole technology lifecycle: they become the thing, they scale to hundreds of millions of revenue, and then they start tapering, and then you're like, "Wow." If the holding period is longer than the technology cycles, that's a little scary.

Gideon Mendels: Yeah, I mean, potentially some of that macro stuff will change. I don't know. But on the moat thing, obviously that's true in general, but it just comes down to go-to-market execution, because people can build the same product so fast now, because of, you know, Cursor AI and all that kind of stuff, and you're seeing commoditization in a sense. It's just that ...
We talked about OpenAI and commoditization, yet they're still, I think from a run-rate perspective, miles ahead. Maybe Anthropic is coming closer, but I think OpenAI is at something like a $4 billion run rate. So they're still miles ahead. So I don't know. Part of it is obviously the first-mover advantage, but they clearly executed extremely well on go-to-market. So it's interesting that the tech and the product are no longer ... it's no longer just who has the best product. It's who can execute very, very well on the other fronts.

Eric Anderson: I think, Gideon, that's as good a spot to end as any. Really appreciate, one, Opik, the gift you've given the world, you and your team. Two, it's been great to collaborate with you over the years in the portfolio. And three, thanks for your time today.

Gideon Mendels: Of course, thanks so much. And yeah, if you haven't had a chance, please check out Opik, O-P-I-K, on GitHub. We are actually the number two trending repo on GitHub today. This is December 19th, so hopefully by the time you watch this, we'll be there again. But yeah, Eric, I really enjoyed it. Awesome conversation. And yeah, we'll catch up soon again.

Eric Anderson: You can subscribe to the podcast and check out our community Slack and newsletter at contributor.FYI. If you like the show, please leave a rating and review on Apple Podcasts, Spotify, or wherever you get your podcasts. Until next time, I'm Eric Anderson, and this has been Contributor.