[00:00:00] em: Welcome to LaunchPod AI, the show from LogRocket, where we sit down with top product and digital [00:00:15] leaders to talk real, practical ways they're using AI on their teams to move faster and be smarter. On this week's episode, we're bringing you our top four AI workflows directly from the product leaders who created them. First up, we have Sierra Han Ventrell, Director of Product [00:00:30] Management at Apartment List, sharing how she rethought her entire product strategy because of how AI brought her new customer insights. Jeff: You guys had a really cool story there about using some of these LLMs and other AI tools to synthesize a lot of call data and interview [00:00:45] data and kind of all this stuff. And it actually helped you avoid making what may have been a big misstep, or kind of a wrong directional step, product-wise. And you were able to kind of find it fast and move forward in a different way. Sierra: So we looked at bringing a new product to [00:01:00] market earlier this year. We went through prototyping, we'd done all the research. We sat in front of a bunch of partners, brought the prototype up, got feedback, and gathered insights. And we went through all of that in a matter of a couple months. It was pretty quick. And then we went to synthesize all of our findings. [00:01:15] And the goal was: all right, we're gonna come up with a long-term product strategy, we're gonna identify product-market fit, and then we're gonna go to leadership and say, hey, we really wanna invest, go big on this product. Well, we used AI to go through our research synthesis. It looked at all of our notes, it recorded all of our calls, it analyzed the [00:01:30] prototype. And as we were going through, we were getting these results that were kind of interesting. It would be like: partner X asked for this, which is very similar to this company that exists; they do this. And it would keep coming up with that.
And I was like, this is not a good sign. And I think our whole team was [00:01:45] kind of like, okay, why does it keep saying these things? And we kind of felt like there was a little bit of a gap as we were going through interviews, like we weren't able to validate this one piece. This product-market fit piece just felt wrong. It felt like it kept comparing us to competitors that we didn't wanna be [00:02:00] compared to, and it was a really saturated space. And so when we went through, we synthesized all of our findings in AI, and we obviously played with it a bunch to really understand what was under there. And then we used that to build our product-market fit documentation. And at that point, when we were kind of [00:02:15] translating over and having these conversations, we realized there was a bigger gap than we were expecting. And I feel like AI just shined a giant flashlight on it. We probably would've gotten there eventually, but yeah, to your point, it was way faster. But it also really forced us to [00:02:30] have some tough conversations, because I think everybody had an idea in their mind of what this product should be. And we were like, yeah, we're gonna go validate it really quick and then we'll move on. And that really was not the case. It was much more like, we need to take a step back and we need to rethink this. And that's where we're at. We're [00:02:45] completely changing our product strategy, and we've kind of scrapped the entire original idea. Jeff: It's amazing that you found that, because even going through by hand, kind of trying to draw out findings and synthesize it the old-fashioned way... you know, one of my things [00:03:00] has been: you can't edit your own work. You always need two sets of eyes. The reason for that is, I think, that at some level we are bad at checking our own assumptions.
What comes out is, if you can have a kind of impartial third party [00:03:15] just start to draw synthesis of findings out, you'll find sometimes it tells you something really different than you thought, or maybe a little bit different, but enough that it matters. Sierra: Yeah. And that's exactly where we are right now. When we went through our research, this is when we started to poke at it, and we're like, how [00:03:30] does this compare? What's unique about our offering? And we were right, there was a need for the product in the market, but it already existed in way too many places, and we had nothing different. And so we were like, okay, why are we doing this again? So it was there, but I think the more you can poke at it and, [00:03:45] right, let it be unbiased and objective... like, all of us wanna build this product. All of us have an idea already. The designer's already got screens mocked up in his head. I've already got a strategy of mine that we wanna go with. Having it be completely independent... it [00:04:00] can't hear things that didn't exist, right? I've been in interviews with people where you ask them to synthesize the findings afterwards, and they say something that the user may never have said, but it's just in there, it's in their head. And AI doesn't have that bias. And so for us, [00:04:15] hearing all of that and seeing all of that and really pushing it... I think that was also a really good move for us: really pushing the AI to continue to try and make it better, and try and poke holes, and ask these tough questions that we were getting from our team and our leadership. It really allowed us to shine, like I said, a [00:04:30] big spotlight on the gap. And yeah, so we're halfway through that process that you just talked about. We're completely reshifting, but super excited to get those light bulb moments with the team when we rework it.
Jeff: Like, I do think this goes back to a non-AI, just basic truth about companies, which, and I've said this a million times, like I firmly believe [00:04:45] the best companies are not necessarily the ones that succeed more often. They're the ones who pick the very few things that they're gonna succeed at incredibly, amazingly well. They pick a few things to just be world class at, and succeed really, really well. [00:05:00] And part of that is saying no to a lot of things, being picky, and being right about where you're picky. And this can help you do that. Everything you say yes to is a million things you say no to, and now you can say yes to potentially the next big thing for Apartment List. Let's dig in here from a tooling perspective: was this [00:05:15] a thing where you were just feeding into a ChatGPT instance? What did that look like, to actually have the tools to give that feedback? Sierra: This one was really simple. This was just ChatGPT, and we were feeding it all of our transcripts, all of our notes, all of our conversations. And then we were just [00:05:30] continuing, like I said, to push it. It wasn't just: synthesize, give us the findings. It was: okay, now compare this to products in the market. It was: okay, now that we've taken this synthesis, we're gonna go create a product strategy, we're gonna create a product-market fit. And then we gave that back to [00:05:45] ChatGPT and said, hey, assess this. How does this compare to what we heard? Is there willingness to buy? Those kinds of things. Look out in the market and tell us what people are saying about similar products. Like you said, doing that sentiment analysis. So I think it was the follow-ups, as we kept pushing and fed it more of our [00:06:00] perspective and strategy and said, how does this compare, that really allowed us to shine a light on the gap. Jeff: Did you find that
you had to correct for any kind of maybe wrong or slightly inaccurate beliefs that the AI had? I've seen a couple of times where, when we've asked it to do competitive analysis [00:06:15] or compare this to what's on the market, it will either, A, take competitor positioning or something that is maybe not a hundred percent literal and take it literally, or it will be outdated in a couple areas, and we have to go [00:06:30] in and correct it, or build up its knowledge a little bit, to make sure it has the full view of the current world. Sierra: Yeah, you definitely have to guide it. We had to do a lot of guiding: focus on these competitors, don't talk about these kinds of things. It would take things at face value, and marketing website [00:06:45] positioning always sounds a lot flashier than the product may actually be. And so we had a lot of those where it was like, hey, this company says they can do this, and we dig in and do research, and we're like, yeah, that's not actually what they're doing. And so we'd say, hey, ignore this, it's not actually a competitor. But it was a good flag for us to go through those exercises. [00:07:00] So yeah, I think for us it was more of a check, like I said, a checks-and-balances kind of thing. Not so much that it was flat-out wrong in some instances, but more like a, hey, we need to look into this. em: Next up we have Neha Manga, former CPO at Lattice, and how she built her own [00:07:15] AI Slack intern. Jeff: You guys launched, I think, one of the cooler things I've ever heard of in Slack. One of the big problems in general with AI, I feel like, has been: there are all these tools, but you have to change all your workflows, and there's so much inertia to how people operate. If you can get the productivity and the capability [00:07:30] where people already are, you can create magic. And you and the team built basically a PM intern into your Slack.
Let's talk about that, 'cause I am still... my hair is still blown back by this. Neha: Yeah, no, this is actually one of the most powerful and most fun things ever. I love custom [00:07:45] GPTs, right? I write custom GPTs for everything, and I was like, hey, we have this amazing data. Our sales team captures this amazing data in Salesforce. We have the customer success team, we have these Gong calls, and all these different things. Can we just throw them in and create a custom GPT to just [00:08:00] give us data about, oh, what are the biggest requests in analytics from our... things like that. Being able to answer some very simple questions. So I created a [00:08:15] custom GPT that was like a PM intern. Super, super, super useful. Then my head of product operations came up with an even better idea. He was like, Neha, could we... and I think he used Zapier to hook this up in a custom Slack channel, where you can just go in like, hey, you know, PM intern, [00:08:30] what are the biggest requests from X, Y, and Z? So-and-so customer just said this; are other customers also asking for the same feature? Or, you know, we have a product called Engagement. For example, in Engagement, what were the top customer requests that we get from SMB, small and medium business, customers? That has been game-changing. It's such [00:08:45] a powerful way of democratizing data, putting it at your fingertips, and making really good decisions. Especially when it comes to roadmap planning, or even in between, right? Like, you get some customer escalation, they're like, hey, we want this feature. Oh, who else wants this feature? Let's learn more [00:09:00] about this. And instead of wading through five different data sources, it's all in one place, super accessible to anybody in the organization. I think that is very, very powerful. Obviously, you know, we still have to use a lot of spreadsheets, for example.
It doesn't connect directly to Salesforce, and it doesn't connect to some of [00:09:15] the systems we use. So we are doing some kind of automated exports out to a folder, which the GPT reads from. And this is where I love MCP, connectors, all those things coming true, so that all this content can be pulled in from multiple sources and create this very [00:09:30] useful picture for everything: roadmap planning, big bets, small fixes. And making it super accessible to everybody. Jeff: OpenAI just launched a ChatGPT agent, right? Maybe very soon, the things that you can't get through some native [00:09:45] API, you'll just ask the agent to go fetch: the specific data you want. It adds 30 seconds, but it's fine. It's way faster than what the old world was. You had, like you said, five, six tools that you were manually going through, pulling data out, correlating it, figuring out: is it relevant, is it [00:10:00] not? Does this support this thesis? Do these match up? And, you know, God help you if you found a difference between data across a few tools. Neha: That is true. Jeff: Never again. Neha: One of the companies I'm advising right now, they're also very big in Latin America, and some of their customer feedback [00:10:15] comes in a different language, sometimes Portuguese, sometimes Spanish. It's like, I don't understand this, and I wanna see some of these things. And just having, oh, this can be automatically translated... I don't have to worry about pasting data into Google Translate or some other tool and kind of figuring it out. I think it's just so [00:10:30] fast. Jeff: Yeah. Neha: That's the way it's emerging. Jeff: That is, I think, a drastically underrated advantage that AI has just quietly solved, and no one talks about it. There are things that would've been revolutionary on their own that get completely buried under the [00:10:45] bigger things these tools do. It's nuts.
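The export-a-folder setup described here boils down to a retrieval step: pull the feedback snippets most relevant to a question into the model's context. A minimal sketch in Python, assuming the snippets have already been exported; the snippet text is invented, and the naive keyword-overlap scoring stands in for whatever a real custom GPT does with uploaded files:

```python
# Rank exported feedback snippets against a question by keyword overlap,
# so only the most relevant ones go into the model's context window.
STOPWORDS = {"the", "a", "an", "for", "to", "in", "of", "and", "what", "are"}

def tokenize(text: str) -> set:
    # Lowercase, strip trailing punctuation, drop common filler words.
    return {w.strip(".,?!").lower() for w in text.split()} - STOPWORDS

def top_snippets(question: str, snippets: list, k: int = 2) -> list:
    q = tokenize(question)
    # Sort by number of shared keywords, highest first (stable for ties).
    scored = sorted(snippets, key=lambda s: len(q & tokenize(s)), reverse=True)
    return scored[:k]

snippets = [
    "SMB customer asked for custom dashboards in analytics.",
    "Enterprise customer reported a billing bug.",
    "Another SMB customer wants analytics export to CSV.",
]
print(top_snippets("What are the top analytics requests from SMB customers?", snippets))
```

The two analytics snippets outrank the billing one; in practice the winners would be pasted (or piped) into the GPT's context as its "source of truth."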
So looking at the team using this agent, how did that play out? Were there surprising areas where that showed value? Neha: They weren't surprises, because the product team has been really good at, like, just looking at 15 data sources. But I also know, if I asked [00:11:00] my principal PM, hey, I'm looking at this, can you give me a report out? The amount of time she used to take, for all the right reasons, to pull everything together was just massive. And now it's just at our fingertips, within seconds. That has been just a game changer. And the best part is it's not just the product [00:11:15] team, right? Anyone can use it. So it's like a marketer, or a salesperson trying to go into a customer meeting who wants to pull out a few things. I think that is, you know, just democratizing access to product feedback for everyone. My goal is [00:11:30] just making sure everybody in the organization thinks very product-minded and customer-minded, so I think this definitely helps achieve that. Jeff: We had a woman named Sierra on, who is over at Apartment List, and she was talking about this exact use case, where they had hooked all these kinds of feedback sources together. And they were [00:11:45] running interviews through, you know, parsing them in GPT, and using that to make sure they were lining it up with all the feedback and all the details they had. And one output they had from that was they realized that what they were really gung-ho on building... it wasn't that there wasn't product-market fit. [00:12:00] Everyone really liked the feature capability. It was just that it existed, like, everywhere, and there wasn't really a big demand for it, 'cause it was really strongly done elsewhere, and it didn't fit in the model of what they wanted to be as a company.
And they were able to, not just speed up going through this [00:12:15] data, but actually check the humans and provide a second set of eyes to sanity-check and rationalize their thinking. And they saved a lot of time on potentially working on something that ultimately would've probably been a bad road to go down. Neha: Yeah, that is true, right? Because I think humans, we suffer from [00:12:30] emotions, which is a strength, but also sometimes a weakness. And having a more data-driven approach, a second set of eyes like you said, providing that perspective, can be... Jeff: Yeah. So what went into building the intern here? Was it just uploading a ton of your [00:12:45] customer context from your end, or was there more to it, where you were training it on what the output looked like? How complex was it to actually build what sounds like a really, really useful tool? Neha: It's actually super simple, and I'm surprised, first of all, that it's very simple, and secondly, that [00:13:00] people still don't build enough of their personal GPTs to make themselves more productive. It's just giving the GPT instructions: hey, here are the inputs I'm giving you, here's the output I want, this is what I want. And funny enough, adding things like "don't [00:13:15] hallucinate," "don't give made-up answers," "don't make up stuff" works really well. And testing it. Because it's almost like using a RAG model: it's not training, training; it's retrieving from the source of truth, which is what I want, actually. And actually turning off, like, hey, don't search the [00:13:30] internet, search only this data, this is the truth. The search function is also very useful if you're doing deeper market research, as some folks do, but not in conjunction with this. It was less than half an hour to set this up.
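The "retrieve from the source of truth, don't make up stuff" behavior Neha describes can be mimicked in a few lines: answer only from a fixed corpus, refuse otherwise. A toy Python sketch; the topics and request lists are invented, and a real custom GPT enforces this through instructions rather than code:

```python
# A grounded answerer: it only responds from a known corpus (the "source
# of truth") and explicitly declines when nothing matches, rather than
# inventing an answer.
CORPUS = {
    "engagement": "Top Engagement requests: pulse survey templates, Slack nudges.",
    "analytics": "Top analytics requests: CSV export, custom dashboards.",
}

def grounded_answer(question: str) -> str:
    q = question.lower()
    hits = [text for topic, text in CORPUS.items() if topic in q]
    if not hits:
        # The "don't make up stuff" instruction, enforced in code.
        return "No matching data in the source of truth."
    return " ".join(hits)

print(grounded_answer("What are the top customer requests in Engagement?"))
print(grounded_answer("What's the weather?"))
```

The second call refuses instead of guessing, which is the whole point of turning off web search and pinning the model to uploaded data.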
And I don't know how much time he spent using Zapier to hook this up in Slack. I remember [00:13:45] him asking, like, hey, Neha, can you share access with me? I'm gonna hook it up. I was like, okay. That was in the morning, and by the afternoon he's like, oh, by the way, here's a Slack channel I created. I said, okay, great. Jeff: The one inquiry you were talking about earlier, that your director did, into how something [00:14:00] went, having to go across five or six different tools... you probably saved enough time in just that one research mission to pay back all the time it took to build it, it sounds like. Neha: Yeah, or not even that. Before, it would take us, like, a whole day just to look through five different [00:14:15] data sources and... Jeff: Right. Neha: Yeah. Jeff: But that's the thing: it's quick to do that, but you've done all the previous work we've talked about to understand how this stuff works. I think this came up once when I was talking to people before, and someone asked, well, how do you get going? How do you get [00:14:30] started? And it's literally just: pick a problem you wanna solve that sounds good, or even doesn't sound good. Just try. Just try and solve things. With everything you do, take a little bit of time and go: could I, how could I solve this using AI? How do I approach this with that? And try. And it's gonna suck the first couple times. [00:14:45] You're not gonna have good outputs the first couple times, but keep at it and you'll just get better at it, right? em: Hate PRDs? Roman Gunn, VP of Product at Zeta Global, taught us how he was able to automate PRDs in his own org. Jeff: I have never met a PM [00:15:00] who enjoys writing PRDs. And every time I've ever brought it up, people laugh and cringe and all those things. But this is one area you've already
made a ton of progress on, 'cause, like every normal, sane product person on earth, you don't like writing [00:15:15] PRDs. Roman: If you find a product person that likes it, run. There's something not right. Jeff: They definitely have a hand in their freezer if they liked it. Roman: Bundy vibes there. Just don't do it. Jeff: So how'd you get out of it? [00:15:30] Walk us through this process. How's it work, and what are the unlocks you found that really made it possible? Because I think there are a lot of people who say, oh, it can't do that, it can't write that well, it sounds like trash. And I think there are a lot of ways you can make this stuff work well. Roman: Well, I think a lot of it is just going back to the core of it: [00:15:45] what is the purpose of an epic, or a one-pager, or a PRD? And at the core of it, it's communication. It's really a human soft skill. Jeff: Mm-hmm. Roman: I find that teams who are really focused on the [00:16:00] artifact, of "I have this PRD" or "I have this epic" or "I have this one-pager," do it for one of primarily two reasons. One, they want a security blanket, Jeff: Mm-hmm. Roman: due to the process having always been done that way, [00:16:15] or two, they just wanna check a box, right? And in some organizations that works, but to me that feels very waterfall. Having all this documentation in place before you collaborate and really express the why [00:16:30] and how we're gonna get there always felt a little disingenuous to me. Jeff: Mm-hmm. Roman: So what I always did is I had conversations with people, and then we would build these things dynamically based on what we decided is going to work and not going to work, so this thing doesn't have to get revised [00:16:45] 13,000 times, but, like, 9,000 times. That's actually the basis of how I build PRDs. I have an agent for epics, one-pagers, PRDs, weekly updates, what have you, all that good stuff.
And essentially I take examples [00:17:00] of PRDs or epics that are really good, or in the format that the company likes, as well as ones that I believe make sense, Jeff: Mm-hmm. Roman: and I feed it those examples. [00:17:15] From there, I indicate what I want to get across in a, Jeff: Mm-hmm. Roman: a PRD. And the way I express that via the agent, for myself, is a TL;DR. Jeff: Mm-hmm. Roman: A lot of the time, when you're speaking with a higher-level person, you wanna just dive in and say, hey, what are we really achieving here? Jeff: Mm-hmm. Roman: So I start there, [00:17:30] even when I'm doing this. Jeff: Right. Roman: I never actually type this. I'm always communicating back and forth. So now the agent knows what kind of content it expects from me, and it can indicate right back to me, hey, I don't have this piece of information, [00:17:45] and then I can go ahead and provide that. So now I'm having a conversation the same way I would with an engineer or a QA manager or a designer. And by the way, anytime you do this, you still have to specify: this is for design, this is for QA, this is for [00:18:00] engineering. And a back-end engineer and a front-end engineer might also want these things differently. So it's right back to communication. Once you do that, you make sure it's output in the same format. So if you're using, say, Confluence or Jira, and [00:18:15] things come in a table form or broken down into specific sections, you already have that formatting, so you can copy and paste it where the people are going to be reading it. Bonus points for using an action to push that right to your specific [00:18:30] instance, so you can automate that directly. And then from there, you communicate with people the good old-fashioned way. Say, hey, this is what I have, this is what we're trying to achieve. Do we have any questions? And from there, you have that transcript running while you're having this conversation.
[00:18:45] Feed that right back into it and say, hey, these are the conversations we had; how do we update this based on what we discussed? And now you have a living, breathing document that's constantly being iterated on. That, to me, is how you evolve the [00:19:00] PRD. Jeff: Yeah. Roman: It's a good artifact to checklist against, but it needs to be built organically. We're building some synthetic and organic synergy here. Jeff: I mean, I do think that's an important part. We didn't all start doing PRDs for no reason. They serve a purpose. You want them [00:19:15] done well. You want the right information so people can all be on the same page about what we're building, why we're building it, what great outcomes are, et cetera. But I think there are a couple pieces in here that you went through that I just wanna dive in on and clarify how you're doing them. Like, A, are you actually [00:19:30] using the voice interface? Is that what you mean when you say you're not typing? Or is it more like you'll have the conversation with peers, record it, and upload the transcript or the recording? What does that actually look like? Roman: Step one is I use the advanced voice functionality. Jeff: Yeah, it's [00:19:45] so good. Roman: And I can just keep going, and it synthesizes it. It's wonderful. So step one is that. Then, when you have that baseline, you go and have, say, a team meeting that you record and have a transcript for. And after that, you feed that transcript into that same [00:20:00] agent where you had the conversation that created the PRD, and you say, hey, based on these, what are the main takeaways? What are the things that need to be updated, revised, et cetera? And that's when you do go to text, when you're doing the copy and paste.
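The transcript-update step Roman walks through, if scripted rather than done in the ChatGPT UI, amounts to sending the current PRD plus the meeting transcript back to the same agent and asking for deltas. A sketch that only builds the request payload; the prompt wording and model name are illustrative assumptions, not his exact setup, and no API call is made:

```python
# Build the "update the living PRD from this transcript" request payload.
SYSTEM = (
    "You maintain a living PRD. Update it only from the PRD and transcript "
    "provided. Don't make up stuff; flag anything you're missing."
)

def build_update_request(prd: str, transcript: str) -> dict:
    user = (
        f"Current PRD:\n{prd}\n\nMeeting transcript:\n{transcript}\n\n"
        "What are the main takeaways, and which PRD sections need updating?"
    )
    return {
        "model": "gpt-4o",  # assumption: any chat-capable model works here
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": user},
        ],
    }

req = build_update_request("TL;DR: ship search v2.", "We agreed to cut filters.")
print(req["messages"][1]["content"][:40])
```

The same payload shape works whether you hit the API yourself or let an "action" push the result into Confluence or Jira; the human-synthetic loop is just this request run after every meeting.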
So then it spits that out, and then you can go right into conversational mode with it after that, [00:20:15] to say, okay, this is how we should tweak it, this is something that we can phase out, et cetera. So you go from talking to the AI, to talking to people, to talking to the AI again, and then you ship that right back to the people. It's a cycle Jeff: Yeah. Roman: of human, [00:20:30] synthetic, human, synthetic, human, synthetic. Jeff: You talked about it a little bit earlier in the process: you fed in what you wanted, right? Like, these are examples of great output, maybe these are examples of bad output. I don't know about you, but I've found in that instance that training a GPT to write the way I want it to [00:20:45] is almost, at times, like working with a six-year-old, where early on it's a lot of: you put in what you want, you put in what you don't want, you label that, and then you do some test outputs, and you have to go back and go, no, no, [00:21:00] no, not like that, I said this. Here's what was good, here's what was bad. And you just go through that cycle, like, 20, 30 times. The more you can do it, the more it dials in. Just each time be like: that was really good; this was bad, don't do that again. Roman: My [00:21:15] GPTs and my three-year-old: very similar in that way. It's Jeff: Yeah. Roman: like a constant loop, constant reinforcement, and then cycling through it. So, a hundred percent. Jeff: And then once in a while it forgets, and you have to be like, no, no, no, remember, we don't do that. Roman: The AI doesn't give you a cheeky [00:21:30] smile, though. The three-year-old will be like, oh, I actually knew; I was just testing my boundaries. Jeff: So through this, right, you train the GPT on what you want and what's not right, and then you have this conversation, and it is this cyclical thing. But you're able to build this now [00:21:45] living, breathing
output of a PRD that updates live, right? You talked about some integrations with Jira, I think that's what you said you guys use. But this is something almost anyone could do. It just takes a little bit of time and focus, right? Roman: [00:22:00] Again, a little bit of time, but it saves you so much time. Jeff: Yeah. Roman: I think for a lot of these tasks it's, hey, invest... I don't even wanna say a day, Jeff: No. Roman: maybe a day, and all of a sudden you're saving hours and hours and hours [00:22:15] every week. It's very much worthwhile. And when you do that, you actually learn things. Like, when I did this PRD GPT, it actually helped me then build a great bartender GPT, which we actually had for a company event. We had a station where you would talk to it and figure [00:22:30] out the best cocktail to make for you, Jeff: Oh, that's awesome. Roman: and you would go over to the bartender and be like, hey, the AI recommended this is what I should drink right now based on my mood. Jeff: Right. You might take a day to do this, to train it. I think that's true if this is one of the first things you're doing with AI, trying to use it in this way; it [00:22:45] could take you a day. But I read a post recently talking about, like, the... I forget, five steps to get started with AI, or something. Fifty, I dunno, some number of steps. But the first one was, and I loved this bit, the first one was: just start using it. Because [00:23:00] everything you do builds on itself, and it's going to get easier, right? The bartender one was a lot easier to do 'cause you'd done the PRD one, and there are a lot of these learnings and practices that you'll pick up and carry forward. So each time you're just gonna get faster and faster and better and better. It [00:23:15] just... if you haven't done something yet and you want to, and you're a PM: automate your PRDs.
It's not gonna be perfect at first, but, you know, like you said, a day of work is going to save you exponential time down the road. Roman: Again: harder, better, faster, stronger. Cue the Daft Punk. Jeff: Exactly. em: And finally, [00:23:30] we have David Farr, former CPO at Sparkle, and how he was able to prototype in a single afternoon. Jeff: You also had a great story around rapid prototyping, right? I mean, that's one of the huge use cases I've seen for this: [00:23:45] there's Lovable and Bolt and v0 and a million tools kind of popping up every day, it seems, to do AI coding. I know I've built... I think I've talked on the show about how I've built a couple things that we use internally now. They're never, ever going to be [00:24:00] production-ready for anything more than five people here, but it gets us a lot of value, and it's something an engineer never would've built. But at the same time, you were also able to test out some new ideas and quickly validate: you know what, this is a bad rabbit hole to go down, this is not gonna yield fruit, let's avoid it. It got you the answer fast. Derek: Yes. Yeah. I [00:24:15] mean, so years ago, like a decade or more, we had this game idea, something we wanted to do on the site. And, you know, for something like that, I have to spec it, it goes through design, it goes through engineering and QA. And we had something [00:24:30] that went all the way to QA, and we're testing it, and we're like, oh, this kind of sucks. We don't like this. And so all of this, weeks and weeks of development and work, went into this thing that we scrapped, and it just went out the window. So flash [00:24:45] forward to a couple months ago: I had a different idea for some kind of cool wordplay trivia thing. But, you know, we're a resource-constrained team. We're busy all the time, as is everybody.
And so, just [00:25:00] one afternoon, I popped into... I think I was using Replit at the time, in tandem with Claude Code. I took screenshots from an old book that had something kind of similar to what I wanted to do, and then I sketched some stuff out on my iPad. [00:25:15] And then I brain-dumped all these things and said, look, just build me a prototype. And I didn't spend that much time, I mean, under an hour, and went back and forth with it: okay, now try this; what about that? Feeding in all these kinds of answers. And then I play-tested it, and [00:25:30] it wasn't good, right? It wasn't a good experience. And, Jeff, I was so excited, 'cause I was like, man, in an hour I took this from idea to proof of concept and ruled it out. And I didn't waste [00:25:45] anybody's time, including my own, because I learned a lot through the whole process, and we're not gonna do that thing, and that feels really good. The other thing I'll add is that just building some of these things on your own, whether they're good or not, makes [00:26:00] you a better product manager. It puts you a little closer into the shoes of an engineer or a designer who has to build what's in your head, and then you start to realize: oh, when I'm giving them [00:26:15] requirements, I actually need to make sure I'm accounting for this piece that I didn't really think of before, that they just handle because they're good engineers, or they know me. You get more empathy for other people in those types of jobs, and you really start to appreciate some of those things, like different UI [00:26:30] constructs, or data schemas, that you didn't have perspective on before, because you weren't the one building it. You were just the one coming up with it.
So there are so many tools out there to prototype and build with. Just jump in and start building things. [00:26:45]