HH84 mix.mp3 Harpreet: [00:00:06] What's going on, everyone? Yes, for those of you that were hanging out in the room, this song should be Sidhu Moose Wala, rest in peace. He was killed earlier this week in Punjab. We definitely lost a great member of the community. He was a bit of a revolutionary, he fought for the rights of Punjabis in Punjab, because we are a bit mistreated out there. Nevertheless, great music. Definitely check out his stuff, he's dope. One of my favorite rappers. That being said, y'all, thank you so much for being here. It is Friday, June 3, 2022. We're back with The Artists of Data Science Happy Hour, number 84. Episode 84, man, that's crazy. We've been doing this for quite some time. Thank you all for tuning in. I hope you get a chance to tune into the episode that was released on the podcast today. I did an episode with Jeremy Adamson, and we talked about his book, which was pretty much centered around the theme of how to build and lead data science teams. I definitely recommend that book, especially if you're looking to get into a leadership position, or maybe you've recently found yourself in a leadership position. It's an episode that you do not want to miss. Also, last week's episode was with none other than Nick Singh. We talked about his book, Ace the Data Science Interview. So if you are in the interview process, if you're currently interviewing, definitely check that out.

Harpreet: [00:01:38] I gave him my pitch, my "tell me about yourself," in that interview, and he broke it down, deconstructed it, and told me what I was doing wrong. It was great. So definitely check out that interview; I think you guys will enjoy it. A couple weeks prior to that, I did an episode with David Spiegelhalter talking about the art of statistics, so definitely check that out as well. Huge [00:02:00] shout out to our sponsor for this week's episode. It is happening right around the corner, we've got a few days left: June 8th, 9th and 10th in Toronto, it's the 2022 MLOps World Conference on machine learning in production. Definitely check this out. If you're in Toronto, I'll be there as well. Come say hi at the Pachyderm booth. I'll also be walking around the floor and checking out as many talks as I can. I'm looking forward to meeting a lot of cool people and a lot of members from the community. Shout out to Dmitri, he'll be there, and I think Mikiko will be there as well, so I'm looking forward to hanging out with you again, Mikiko. If you're interested in production benchmarks and lessons learned from the field, this conference could be right up your alley. You are definitely going to enjoy these talks. They've got talks from people at a lot of awesome companies: Meta, Shopify, Hugging Face, Spotify, TikTok, eBay, Lyft, DoorDash, Vanguard, Pinterest, and of course the company I work at.

Harpreet: [00:03:09] They're going to be talking about a whole bunch of different topics in these talks: engineering foundational models for production, batch stuff on Kubernetes, how to build an MLOps platform from scratch as well. Definitely check that out. There's also a free aspect of the conference now. The free aspect is the demo day and career fair in the exhibition hall, happening June 9th at 4:30 p.m. Eastern Time. So this is an opportunity for you to just check out the products these companies are building.
And if you're anything like me, I'm a product head, I absolutely love playing with products. So it's always a lot of fun to go to these demo days and see what people are doing in the space. It's a good way to top [00:04:00] up your general knowledge, so definitely check that out, the MLOps World Conference. They support the podcast as well, so I appreciate them. Shout out to everybody in the room, in the building. What's going on? Russell, Smith, Tom Hines, good to see you all there. Vin is asking the question.

Harpreet: [00:04:26] "Wait, we have to put our stuff into production?" Yeah, so let's talk about that. You made a post earlier today, or maybe yesterday, where you were talking about how there's a lot of people telling you how to build a model, but not a lot of people telling you why you should build a model. And that's kind of tangential to this question, so we'll circle back to having to put stuff in production. But let's talk about that first. Let's kick that off: why is it that we need to build models? If you guys have questions, whether you're watching on LinkedIn, on YouTube, or right here in the Zoom, please do let me know. I will add you to the queue and we'll get to your questions. I know Avery has a question as well, so I'm happy to get to Avery's question after this, if you don't mind. "If you're in a rush, we can get to your question first." "Oh, no, no, I'm not in a rush. I'm just hanging out." Okay, cool. Perfect. So then let's just go for it. Let's talk about, first of all, why are there so many people telling us how to build models and not enough telling us why we should build models?

Vin: [00:05:41] I think the first one's easier, and I guess that's crazy given how hard what we do is. I think it's easier to explain how to build, because you don't have to have this concept of an end product if you're not talking about why. And I think that's our biggest [00:06:00] gap in the field: we talk about what to build. And if you really have that kind of mindset, the academic mindset, which is good, because if we didn't have academics we wouldn't have a job, so academia is really important. But at the same time, once you leave academia, you come into the business world. Now we have to develop something with value. We have customers, we have users, we have stakeholders. We have people that actually want money. I mean, I'm sorry, we have greedy people that enjoy making cash, and they pay our salaries. So we kind of have to deliver some value to them. All of that changes the way data science works, because now it's like, okay, don't just throw data at every problem. You start whittling down what kinds of problems you actually have to use data on, and it's a pretty small list. Then you go one step further down the ladder and you say, how many problems do you really need to use models for? It's even smaller.

Vin: [00:07:00] And then you get into deep learning and the really complex stuff, and there's, I don't know, 2%, 5% of all business problems that might need deep learning, if that.
But I think we spend an exorbitant amount of time focused on building increasingly complex deep learning models, because there's a perception that we have actual problems that those can solve, when we really don't. We don't have a whole lot of society-sized problems that need deep learning at this point. So the use cases are pretty limited when it comes to value. That's why I think we talk more about how to build rather than what to build and why. Like I said, it's just because it's easier. And I remember somebody, I can't remember whose tweet it was, talking about how vanishingly small the overlap in the Venn diagram between strategists and technologists really is. I think it was Jim Breyer talking at Davos about how [00:08:00] he prefers to invest in technical strategists, because if you look at the heart of the largest, most successful tech companies, they all have a technical strategist at the very top of the company. And there are just so few of them.

Harpreet: [00:08:17] Vin, thank you very much. Let's go to Mikiko, and after Mikiko, let's go to Matt. Shout out to Joe Reis in the building. Joe, good to see you. Matt, you can see how things work here.

Joe: [00:08:27] Good to see you last week, by the way.

Harpreet: [00:08:29] Yeah, it was nice. Cool hanging out with you, man. Also, Mark Lange in the building, good to see you here as well. Mikiko, go for it.

Mikiko: [00:08:37] So this is a funny thing, because this was a question I actually posed to Joe and Matt earlier this morning, which was: of the problems that exist right now, for example in the MLOps world, how many of them are pure problems due to ML versus problems involving software engineering? Right? How many of them are problems that are unique to machine learning projects, versus problems that have already been solved, where it's almost more a question of whether that knowledge has actually been propagated and adopted, and whether it's relevant to different groups? So for example, at Intuit, they have a very specific enterprise mindset for how architecture is built there. A lot of times, when I was looking at, so we had a global hackathon, and we were seeing these presentations, and I was looking at all of that thinking: that is a different grade of problem than what we're trying to solve for in the Mailchimp unit, where we're almost not super concerned. We're not looking at the efficiency of auto scaling versus not auto scaling. We just assume that we should do it, because we just don't have the energy and headcount to spend our resources on that question. [00:10:00]

Mikiko: [00:10:00] But to me, it's kind of curious, because a lot of what I see in the MLOps world, and I'm not sure if this is true in the data engineering world or in the data strategy world, a lot of it I look at and think: okay, I don't know if this is actually a problem, or if this is just "let's try to make money back off of this really big deep learning model that we trained and see if we can monetize it," or "let's see if we can monetize this other thing" that maybe is only relevant to five or 10% of people.
And this is something that I think about a lot, because if you're trying to learn to become an ML engineer or an MLOps engineer, it's really crowded and noisy right now, and I can't tell what's actually important and relevant versus what is an extremely specialized, niche use case.

Harpreet: [00:11:00] Thank you so much. Mikiko also has a YouTube channel that just launched: Mikiko Bazeley, the MLOps engineer. Definitely check that out. Mikiko, if you grab a link, put it right here in the chat; I'll be sure to include it in the show notes as well as post it on LinkedIn. You guys, check out Mikiko's stuff. Go for it.

Mikiko: [00:11:24] Sorry, no, I was just trying to do the party emoji. It was the wrong one. Sorry.

Harpreet: [00:11:29] No worries. Well, yeah. So let's hear from you on this one, and then we'll go to Avery, who I have up next. Go for it.

Mikiko: [00:11:42] Sure. For the new folks at the table, and definitely not for me, could we repeat the question?

Harpreet: [00:11:47] Yeah, absolutely. So Vin made a post that I thought was interesting, and it was essentially: we have too many people teaching you how to build a model, and not [00:12:00] enough people teaching you why you should build a model. So I guess you can answer one of two questions. You could either answer why that is the case, or answer why do we build models, why should we build a model?

Mikiko: [00:12:20] I think that second one is too philosophical.

Harpreet: [00:12:23] It'd make a good philosophical discussion. Yeah. Go for it. Whichever.

Mikiko: [00:12:30] That's how we understand the world, right? By having models of how it works. Yeah. I feel like Vin and Mikiko were both giving answers at a level that I'm not going to be able to get to, but I'll say the following. What Mikiko was saying reminded me of something. I was listening to Joe's live earlier; I think Jordan was the name of the person who was on. He was saying something along the lines of: you don't have big data until you spend more time thinking about how to efficiently process that data than you would just storing everything, because storing is cheaper. I'm paraphrasing, but that was the gist of it. Which is what Mikiko was saying too, right? In the world of ML and MLOps, I think there is a big trend right now of adopting machine learning operations, and it's not always warranted. But I still think that's kind of a tangent. I think the reason no one tells you why you're building models is that it's not one specific problem. [00:14:00] It's not a problem that someone else can solve for you; then they'd be doing your job. And it's not something you can address generally and say, oh, this is the reason to do it, other than, yes, we build models so that we can understand how our users behave, or XYZ. So it's much easier to talk about: well, once you have a model that does XYZ, this is what you can do to make it more robust. Which is a totally valid thing, right? It's good to know the best practices around how you have a model serve its proposed value sustainably and in perpetuity, and so on and so forth.
But yeah, I think that no one can tell you in too much depth why you should build your model, because each case is so different.

Harpreet: [00:14:52] Thank you so much, I appreciate that. Joe, let's hear from you. By the way, if you're watching on YouTube or on LinkedIn, and I know there's nobody watching on Twitch, but you can stream there, if you have questions, please let me know. You can drop the question right there in the comment section, or in the live chat on YouTube. If you're in the room and you've got a question, go ahead, let me know, and I will be sure to add you to the queue. Joe? Matt?

Matt: [00:15:19] So I recently was looking at some of the reviews of Zhamak's book on Data Mesh, and one of the reviews said: yeah, this is interesting, there's some cool technical stuff in here, but the book kind of gets off track talking about organizations and how to work with people. And I feel like all of these data disciplines have this problem. First of all, we all really love the technical problem-solving stuff; we're the kind of people who played chess growing up, with very well-defined rules, the kind of people that did math. We like to program. Even things like MLOps, where we're given this roadmap that defines how to do something. And [00:16:00] the second problem is that teaching the organizational behavior and the business value of data is much, much harder. It's a really tough problem. And I feel like maybe in general we just need to emphasize that more: teaching people how to work in an organization, and what the business value of data is, without them actually having to get that first job to learn it. I feel like this is a huge chicken-and-egg problem for getting the first job. Even as a hiring manager, if I find a really talented candidate but they don't have any experience, I don't want to hire them, because I don't know how they're going to work with my team. So if you can crack that problem, that'd be a huge contribution to education for data scientists and data engineers.

Joe: [00:16:39] Well, I mean, I think there's a lot of data science LARPing, you know? You're going through the motions of building models and all this stuff, but at the end of the day it's kind of a pointless exercise if it's not doing anything. Actually, we were talking about LARPs on a walk just a bit ago; I don't know how we got onto this topic, it's a typical Joe and Matt rant. But it does remind me a lot of that, where everything is sort of a facsimile, and there's a lot of ceremony behind it. Like, oh, you have to be building models. But as a data scientist, you find out there's a lot of other things you could do too. So. Live action role playing, right? Yes, that.

Harpreet: [00:17:22] LARP, yeah. I feel like that's something we have talked about previously on the show as well with Joe. The Secret Life of Joe... does that exist?

Joe: [00:17:39] I'll do some medieval LARPing after this. So, yeah. Play dates. Yeah, exactly. It's awesome.
Harpreet: [00:17:46] Joe, it was awesome hanging out with you guys, had a good time. Mikiko, go for it.

Mikiko: [00:17:51] Yeah, it was interesting, because back when I was mentoring with you over at Data Science Dream Job, right, [00:18:00] whenever we would get questions, people were like, "Is there a book on sales analytics, or marketing analytics?" We'd get that question all the time. Or, "Could you help us find content geared towards data scientists for sales and marketing?" And it was kind of like, well, part of it is you do have to actually live in their domain a little bit. Or, you know, timeshare. I guess you don't have to live there, you just timeshare, like at a place like Lake Tahoe or something, right? And there's two things I'm kind of wondering. One is, is it sort of dependent on your influence and how much of a seat you have at the table, and how does that correspond with company maturity and size? Because when I was working for smaller startups, it was relatively easier to get a seat at the table and influence product decisions.

Mikiko: [00:19:01] But I feel like sometimes in bigger companies, especially a matrixed company where you're part of a squad or a pod, or whatever the other fancy terms for it are, a lot of times, what's the pushback if the incentives are to just deliver stuff, irrespective of the actual impact? I kind of wonder about that. And the other aspect, of getting domain knowledge: I feel like all the domain knowledge I had, working in real estate tech, working in solar, working on sales and marketing analytics, I had to get by working directly with the business partner, sitting and living in their meetings, and doing a lot of reading in, like, the revenue ops groups on LinkedIn, right? But it does feel like sometimes people just don't want to [00:20:00] do that. But I don't know.

Harpreet: [00:20:02] Mikiko, thank you very much. Anybody else want to chime in here? If not, we can go to a question.

Mikiko: [00:20:19] Quickly, yeah, I'll just quickly add: I'm not entirely sure you should do analytics on who the audience is for these happy hours, but if there are some really early career folks, and by that I mean folks still in college and such, the way you learn how to build models, I mean, what you're really learning, is how to solve problems. So you should major in physics.

Harpreet: [00:20:45] Yes, I absolutely agree with that. That's what I originally wanted to major in, but then I realized I was not smart enough for it. Tom, let's hear from you.

Tom: Real quick on that: my background is actually multiphysics engineering, though officially mechanical engineering, but that doesn't fully describe me. I love this topic, by the way, because I don't think we ask: why are we modeling? And actually, I think it's more about return on data. You've got to be in a very rigorous collaboration with your organizational counterparts, your stakeholders, your internal customers, your external customers, whatever. And you've got to figure out: how do we use the data we have on hand to improve the situation? That could be done with a dashboard or a one-off data story. And in fact, if we would do a better job at those things, we would probably be doing a better job in the data pipelines that lead to predictive modeling. But I think it's wise not to rush to predictive [00:22:00] modeling while you're developing your pipeline. In fact, and I've said this many times, even on this show, just getting a Pareto of feature weights for the problem you're looking at is probably more valuable to the business than having the actual predictions, because they can act proactively on understanding those feature weights. And we have to take real, strong responsibility for making what we do clearer to the business side. They don't have to learn how to do what we do, but we should help them understand the insights we gain from doing that work. So again, to summarize what I'm getting at: return on data. The next time I hear "how many machine learning models get into production," I'm going to just say: who effing cares? Would you stop asking that? How much return are we getting on our data assets? Now I will be quiet; you might have to mute me if I'm not.

Harpreet: Thank you very much, I appreciate it. If nobody else has input here, we can go to this question. By the way, those of you watching on YouTube or on LinkedIn, if you do have a question, go ahead and put it right there in the comment section or in the chat. If you'd like access to the room, send me a message and I'll send you a link. Avery, go for it.

Avery: Hey, sweet, thanks for letting me ask my question. My question is something that maybe doesn't get asked as much in these happy hours. It's a little bit more, I guess, business intelligence, less data science. And really it's more operations research, which is a field of data science that just doesn't get touched on very often, albeit there's actually a lot of jobs and a lot of business need for operations research. Doing [00:24:00] optimizations and simulations is, I think, a little underrated. Everyone likes to create models, right? But creating some sort of optimization framework can actually help you do more diagnostic analytics and solve business problems. So recently I've been tasked with this huge data set that is really just the results of a very large optimization. So picture, it's in manufacturing, so picture thousands of, not sensors, but thousands of different levers that you could maneuver. You could raise the temperature on one reactor, lower the pressure on another. There are literally thousands of variables, parameters, for this optimization problem. And they all have constraints: a minimum constraint and a maximum constraint. And then most of it is an LP, a linear program, so it solves to maximize some sort of value. Usually it's money, because we live in America, and that's what we like in America: maximizing money. So I get the results of this optimization, and it's like a thousand variables. Basically, for each of those thousand variables you have the min value, the max value, and where it actually landed. And one thing I've had trouble doing is: how do you visualize a thousand different data points effectively?
Because it's not really... I mean, it is multivariate, because there are lots of variables, but for one run of the optimization you don't have two different temperatures; each variable solves to one value. But how do you display a thousand different metrics to a user at once? I'll open it up from there. Right now what they have is just a table, and it's kind of messy. So I'm open to ideas.

Harpreet: My first [00:26:00] instinct would ask: why would you put that much data, that many different metrics, in front of someone at once? That's a lot for a human to ingest. The first thing I would do is say, okay, if we're interested in using this one particular lever, do we know what other levers move the most when we move this one? And if so, can we display those together? Right: I'm interested in lever A, and I know that lever A is really correlated, using that word loosely, with levers B, C and D, so visualize those together. Or you could just put it into an autoencoder and reduce the dimensionality down to a few dimensions. Let's go to Joe for this one, and if anybody else has insight here, we'd love to hear from you.

Joe: [00:26:59] I guess I need to ask: what action are you trying to drive with this visualization?

Avery: [00:27:09] That's a very good question. Most of the time, they use this table to check the thousand different values; the users have things they care about that I might not necessarily know about. But at the end of the day, it's basically used to try to run your manufacturing plant at optimized conditions. So, for instance, "the temperature of this reactor should be around 100 degrees" would be one of the results that comes back from this optimization.

Joe: [00:27:41] So is this like a "what" or a "when" type problem? Like, what happens at a certain time?

Avery: [00:27:50] Yeah.

Joe: [00:27:51] It exceeds a certain boundary or condition, and I need to take an action on that. Is that sort of what's going on?

Avery: [00:27:57] I mean, I'll just be very explicit: it's really [00:28:00] for deciding how to run a refinery. My background is in oil and gas. So it's: what crude oil should we buy, what products should we make it into, and what temperature should we run our reactors at? It's much more complex than that, but at the end of the day, the people looking at those solutions are mainly answering those three questions.

Joe: [00:28:17] Yeah, I don't know. I mean, I'll just say, in situations like that, and I actually have experience in factories and that kind of stuff, I would always just make a really simple control chart. That's it. Just: what are my upper and lower bounds? If it exceeds them, take an action. Ideally, automate it so you don't even have to intervene, right? That's the other piece of it. I'm a huge fan of: if it's a "what" or a "when" type question, reports are great; automating a response to it is better.

Vin: [00:28:50] Yes, and to what extent? I mean, it sounds like people want visualizations so a human can make the decision. The first question is, can you come up with a model that just gives you an answer? And then the next question is, do you need explainability with that, which has become a big issue in data science and machine learning. Then maybe the visualization is focused just on explainability, rather than on making the decision.

Joe: [00:29:11] But even with this data set, I mean, it's a thousand variables you said, or something like that. I'd really just try to focus on the ones that are useful to you, and reduce the problem that way. You may not even need to do anything like PCA. It might just be: these are what's driving everything, so we'll just look at that. The problem with this kind of thing, too, is that you get overcomplicated and then you get false signal.

Avery: [00:29:38] The interesting thing, if you look at dimensionality reduction, something like PCA, and going back to what Harpreet said at the beginning: this isn't a time series. It's not like a control chart, where you have multiple values inside of a range. It's: here's the minimum and the maximum [00:30:00] value that we can possibly operate at; where should we operate? It's almost univariate analysis; it's one data point for every dimension we have. And right now I'm not thinking about running it multiple times, although what Harpreet said I think is really important. I think where it actually gets interesting is when you run it multiple times. But initially they just run it once, and it says: the temperature for this reactor is this, the pressure for that reactor is this. The problem is there's just a lot of "this"es, you know what I'm saying? Yeah. Sorry, Mikiko.

Mikiko: [00:30:32] I remember we were doing a problem like this in the supply chain class I was taking. The way they were describing it, it's a mixed-integer linear programming problem, where you have certain inputs, and based off some kind of cost function and constraints, you're trying to find the optimal combination of the different variables. I don't know if you'd still want to visualize that, though, to be honest, unless the visualization is there to help justify the calculation. But in terms of actually solving it, there are definitely some examples out there that I can go search for and link; I remember because the sample case we were doing was literally oil refineries, which is the only reason I even remember this. In terms of visualizing and communicating it, I still feel like you're not going to be able to visualize it the way you would in Tableau, unless it's literally the table of inputs, the optimization function, and what the actual result ends up being. But this is coming from someone who is terrible at statistics, and I only remember this because I thought it was really fun, and you could do it in Excel, which I thought was pretty cool. So yeah, just agreeing with Joe: I don't know if you'd really [00:32:00] want to put that many up at once.
I'm just wondering, like, what are you trying to what are you trying to derive with this? The end of the day? Mikiko: [00:32:14] Yeah. And I think like calculating like I think calculating the solution is different, right, from communicating the information. Like I think the calculating the solution is straightforward. I don't I don't necessarily know the information. Right, Avery, that you would want to convey or the story you'd want to tell. Besides, this is like how it works. Harpreet: [00:32:39] Let's go to. Let's go. Sorry. No, I think I think you're exactly right that I can. Basically, what they have right now is just a table and it's a really long table. Right. And it shows the lower boundary, the upper boundary, and then where it solved to as well as kind of a it's called it's called a marginal value. And that is like if it's at a boundary, how much benefit would you get theoretically by expanding that boundary? But I kind of agree that it's kind of a factor of what Joe talked about. It, trying to decide what's actually important. Right. And telling the story of that individual part of the story. And then also maybe giving them access to all the variables like they have now in the table. So anyways, I appreciate your guys's feedback. Then go for it. Go to thin. And then, Russell, anything to add that? I think you have some comment on that for us to hear from Tom and Chuck about this. Let me know or if anybody has anything to add. Go ahead and raise your hand and I will get to you. But then after every session with the market question, again, if you have a question, please do let me know in the comments and the chats you're at. I'll be sure to. I'll get to you, then go for it. Vin: [00:33:57] Yeah. I saw something. It just kind of [00:34:00] triggered a memory in me a few minutes ago. It was designed for hypersonic flight, where it was a really similar problem where there was, like, a ton and I can't remember where I saw this at. I'm like remembering an old video or something like that, but they had a it was basically just like a target in the middle of the screen. And instead of showing you all the variables behind the scenes, you could pick the variables that mattered to you and it would just move crosshairs. It was like the simplest thing I've ever seen. It would just move crosshairs and then it would tell you why it moved. You know, it basically explained and this is why if you move these few things, this is why if you change these few things, here's where you are from optimal or from whatever you care about as optimal. And here's why. It was like this dummy simple. It just kind of triggered in my head. It might be a way, or at least give you some place to start at for a visualization where it totally obscured everything. Vin: [00:34:59] It basically assumed you knew variable wise, because it sounds like your group knows what variables they care about and what they care about from an optimization standpoint. So if they can put those two things in, really all you need to do is kind of like what Joe is saying to the to the explainability side of it and to the how close to optimal. What would this change? How close to your optimum would it be? And that's basically the target in the middle. You've just got cross here moving around. If you change this to this, what does it do? And people's heuristics tend to take over from there. 
Harpreet: [00:36:01] Vin, thank you very much. Tom, let's go to you. If anyone has anything to add after Tom, let me know, and then we'll go to Mark.

Tom: [00:36:09] So, Avery, I would enjoy doing this with you offline, but I think the thousand variables, for a refinery-type thing or a big chemical process, right, it kind of makes sense now. It would be a brutal process, but it's necessary. Start looking at what you can reduce. First you scale, then you look for collinearity. That doesn't mean you take away all the collinear features, but it's instructive, because if you can keep the strongest variable out of a group of features that are collinear, you've now done some feature reduction. And some people were spot on: PCA is a great way to reduce the complexity. But a lot of people don't remember that just because I can take away some PCA features, that doesn't mean I've reduced the number of features in the original space. I still have the responsibility to say: okay, we're now using these PCA features, but each of them relates back to a combination of variables in the original space. So you go through these steps. And then, even if you get very little feature reduction, you'll want to add a lot of feature engineering to those thousand variables. I am of course joking, but it could be that once you do the feature engineering, you find some amazingly rich features that allow you to do even more feature reduction, and now you can take some explainable AI to the group. But the thing I [00:38:00] most wanted to encourage you to do: map out what you're currently doing, and point out to them that this is going to be a multi-generational process to get to better and better models, and that you should pay me millions of dollars over the next few years to answer it for you, because it will be hard, and no one's quite as smart as me and able to do this with my domain knowledge and my machine learning skills.

Avery: Yes. So we'll pay you $1,000,000 over the next five years.

Tom: I was speaking as Avery there, not as myself. Just sort of.

Harpreet: Tom, thank you very much. I was doing a bit of quick research, and Avery, I sent you this paper. It's called OptMap, and it uses dense maps for visualizing multidimensional optimization problems. I'm hoping there's code in that paper somewhere; I didn't look closely, I just saw one page with three nice graphs that might be helpful. So check it out.
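Tom's PCA caveat in miniature: a reduced component only stays explainable if you carry its loadings back to the original levers. A sketch with random stand-in data for a stack of optimization runs, assuming scikit-learn:

```python
# Reduce 1,000 scaled levers to a few components, then map each component
# back to the original levers via its loadings. Random data as a stand-in.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(2).normal(size=(200, 1000))  # runs x levers
Z = StandardScaler().fit_transform(X)                   # scale first, per Tom

pca = PCA(n_components=10).fit(Z)
for k, comp in enumerate(pca.components_[:3], start=1):
    top = np.argsort(np.abs(comp))[::-1][:5]            # strongest loadings
    print(f"PC{k}: driven by levers {top.tolist()}, "
          f"loadings {np.round(comp[top], 2).tolist()}")
```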
Harpreet: Let's go to Mark, and then let's go to Russell. If anybody else has anything to add on this topic, do let me know. Otherwise, we'll go from Mark's comment to Russell, and then on to Mark's question.

Mark: So I think I've picked up enough context from listening to everyone, and this is more kind of brainstorming outside the box, applying my social sciences brain to a very engineering-focused thing. One of the key things is that you have a thousand variables, which is a lot, but it also seems like there's this component of a lot of interactions between these variables, hence why it's very important that you keep track of them.
Vin: [00:42:18] So, hey, I'm a bit like Max Headroom for anyone from the eighties. But yeah, so. So lie. Joe: [00:42:27] Down so you can give it to. Vin: [00:42:28] Anyone that's going to then digest the outputs or the data from the models so that they can understand if they change. Harpreet: [00:42:33] Variable A and variable B will naturally be. Vin: [00:42:37] Affected by it. Harpreet: [00:42:38] Or completely blocked out that they understand that that happens and are not surprised at the defect in the output. And I'm still there. Vin: [00:42:53] Yeah, I was just I was just going to say apologies, apologies to Joe, because I can see he's got some real good mixing equipment there and I've probably completely murdered mine. Joe: [00:43:02] So it's all good. I mean, if in the UK everyone there, deejays and stuff. Harpreet: [00:43:08] So it's actually actually a true fact. That's stereotype joke. Sorry for the aftershock. Then I will circle back to something. Mikiko: [00:43:25] Um. Yeah, I mean, I could ask a lot of different questions before I try to answer it, but I'll limit it to one. What is the point? And I think this is others have said this too, right? Especially Joe. I think like what are we achieving with the visualization in particular? Right. Are folks just interested in the values that the the various variables are taking for some process that they're running? In which case, like is like a dashboard or any sort of visualization, the right method, are [00:44:00] they just inputting it into another software service? And in which case you don't you don't need to go through that visualization aspect. So, like, I just I'm very skeptical of a 1000 dimensional visualization. Harpreet: [00:44:18] Yeah, I understand that. I think I think the problem is this is really what's used to determine how to run an entire refinery. And refineries are huge. So you're literally having like probably 70 reactors and there's a temperature, a pressure and a flow for each of those. Plus, there's all the stuff you're putting in, which is probably like 25 different things. And then there's all the stuff you're getting out, which is probably 25 different things. So right there there's probably 200 variables. But I agree, I think breaking down the actual problems because it's not like everyone cares about certain aspects of it. So I think breaking it down makes it better. I agree. Joe: [00:44:56] No, I'll tell you the types of reports that I see that make my eyes bleed. And I saw one today, actually. So I was crying. A river of tears of blood, actually. No, but it was a table. It was like too many columns. I was like, what are you going to do with this? You know, it's it's just a lot of data, not a lot of information. And so but some people like staring at tables. So I don't know. Some people like to watch the world burn. I guess so. What are you going to do when you look at a table? Right. What are you looking for? So I don't know. I think you're right. Just reduce it down. I mean, this doesn't seem that complicated. It just seems like presenting as it is right now, presenting it as it is. I wouldn't call it complicated. I would just call it, like, super annoying and probably, like, wouldn't yield any any actual results. Don't know. It's confusing, but I'll stop my rant. Thanks. Harpreet: [00:45:54] Thank you. Here's a final point to add on this. The other point to add to Avery's racial [00:46:00] question is a question. 
Mikiko: [00:46:04] Um, I don't know the original question, but I'm just kind of adding on to the, the reporting piece because my, my life does revolve around it quite a bit. I think the part that most people miss is what do you care about like. Like when I work with, I don't know, 40, 50 variables across different media channels, it adds up to a lot of variables. Can you hear me okay? Yeah. And when we. And the question to ask is, most of the stakeholders have their five or six KPIs that they really care about. So start with the end goal and then go backwards. Most of the times I think what ends up happening is you start with that big spread spreadsheet or database. You want to throw everything in there and then see what that layout looks like and then cut it versus the other approach. Say, Hey, I only care about these five things. And just a different strategy of thinking about how to build about your dashboard is important, and I think a lot of companies don't do that enough at the forefront versus at the at the end. And that's why you spend months and months saying, hey, I only care about four things, you know? Harpreet: [00:47:34] Debbie, thank you so much. Let's go to Mexico. Then after Mexico will go to go to Mark's question and then after Mark's question will go to Eric's question. Also a shout out to Toshi. Toshi. So good to see you, man. Let's go to Akiko for a final point on this. Then we'll go to my question. Mikiko: [00:47:54] Yeah, I mean, for better or worse, like if you have so like that's what I'm seeing. I'm like, could this be a dashboard [00:48:00] where it's you have the different questions like broken out and like the different actions. And I guess to support Tom's modernization suggestions sometimes, like when you build something nice for like one business partner or key stakeholder, the other key stakeholders want it, but in a slightly different format, answering a slightly different question, and that's how you get a retainer or multi project engagement. So I think it's also okay if you if you sum it down to like answering a core question, do like an absolute bang up solid job on it and then. Other people want it. So yeah. And also once you develop the domain expertize and the business partnership like with that first key stakeholder, then actually makes it easier to then deal with all the other key stakeholders. Harpreet: [00:48:49] Everybody. Hopefully that was helpful. If you've got any follow up questions, I could definitely add you to queue after we get to a question. But great discussion. Thank you so much for kicking that off. And Mark is next. And Mark. Go for it. By the way, if you're listening on YouTube or on LinkedIn and you've got a question, please. I'll go ahead and add you and your question to you. If you're watching on LinkedIn and you want to join our live session, send me a message to Mark Corporate. Hey, everyone. So I'm about to embark on a really fun project where I'm about to translate a whole bunch of business logic in one technology and move it to a new technology. And my key thing is, how can I assure there's parity between both approaches? What gotchas should I be looking out for? Just looking for some general advice of making that transition. I've done something similar before as a whole entire team and I played a small part, but now I'm leaving that part. And so if anyone's ever done like a migration [00:50:00] of some sorts of data, you know, what would you look out for? What are some kind of like landmines that you've seen? 
So I can avoid those. I see Joe and Matt in the corner there laughing because I'm sure this is something they've got experience with. So when you guys go for it and if anybody wants to chime in, go ahead. Just like that. Joe: [00:50:22] Bridging that general. Harpreet: [00:50:24] Sense. Vin: [00:50:26] Yeah, I mean, so so Andy Petrella has this notion. Joe: [00:50:30] Of. Vin: [00:50:31] Data observability driven development. And so I think the core idea, I mean, there's a lot more to it than this, but it's the idea that you want to start testing your data right away and so test the business logic in the source system and see if it leads corresponds to what you think the logic is and then start building out in the new system and then run tests as you do that development process. And so that allows you to proceed in a fairly agile manner and hopefully have something that make sense at the end versus like trying to do a gigantic lift in chips and then finding tons. Joe: [00:51:01] I guess we can talk about maybe anti patterns of migrations. How about that? That's well, I mean. Harpreet: [00:51:05] So useful because I'm about to actually do all my testing at the end. So thank you so much. Joe: [00:51:13] You should always do that at the beginning if you can. Yeah, for sure. It's like TD test it. A test driven development, right? Like you don't write your test after you write your code. The whole idea is do it before and you should, you know, kind of begin with the end in mind. So Andy, patterns for migrations. Let's talk about these for a bit. Right. So we see these we do migrations, right? We've seen a few of these and so anti Andy patterns, so definitely anti pattern number one, definitely lift and shift everything and one gigantic fall swoop. That's a sure recipe for disaster. Definitely don't understand your dependencies. That's number two don't understand anything relates to itself. Just just blindly lift it and port everything over don't understand the cost mechanisms [00:52:00] to the new thing that you're moving to. Just assume it's like the old one. Geez, what else? What else? What else would it be? An anti pattern. Don't, don't get anyone involved. Just blindly do it yourself or do it things after the fact and tell people you did it. What would the other things. Vin: [00:52:17] I think you've already talked. I think you've already covered some of these. But like the one about understanding the cost mechanisms when you lift and shift these exactly the same ingestion patterns in the system used and these exactly the same. Joe: [00:52:28] That's an anti pattern, by the way. Don't do that. But yeah, you're happy to answer questions offline, but those are some general things migrations are. I don't know what. They're always fun. I guess that's one way to put it. Yeah, they're fun. Yeah, you'll have a lot of fun. Vin: [00:52:48] It isn't just exciting to get into new technology. If you once you hit walls and you need it to do new, interesting things. Joe: [00:52:54] Yeah. Yeah. So again, hit walls. So have like the proverbial airbag on your journey as well. You will need it. It will deploy at some point. So. Harpreet: [00:53:03] Yeah. Joe: [00:53:05] But. Yeah. Migrations are one of the things I'm sure we we could talk about this for 4 hours. We're out of time. Harpreet: [00:53:12] So a question then, just for people like myself and anybody who's listening, we talk about a migration, right? Markets, business logic, migration. That I understand. What does that mean? What does that mean? 
What does that entail? I kind of just subsidize that for us. Yeah. So I can't I can't go into too many details, unfortunately. But the give give a good idea. Say, for instance, you have your data and you're running some type of process to get the data into a certain format. Let's just say, for instance, like data engineering, you have like a machine learning model, you do some data engineering, right? You have a process to create those features, shifting to a new technology that help you maybe scale better, doing that [00:54:00] same exact kind of transformations or data prep or whatever it may be into another kind of format. And so the potential downfall is like, I recreate all this logic, but I do it wrong or I miss something. And now we have two systems that don't match up and downstream it's just impacting a lot of things. And the company catches on fire. Worst case scenario. Joe: [00:54:25] Yeah, I mean, you don't want that to happen. So yeah, I think what Matt said is I think absolutely correct, like building tests upfront, you know, building data quality tests you could use maybe for data quality use great expectations and just make sure as you're bringing in data, that data is matching. Right. Or Andy Petrella, he owns Kensi. I mean, there's no shortage of data observability tools at this point. Big guy, Monte Carlo, whatever you want to use. I mean, they're all great, great, great and meta playing and so forth. I will say like, just make sure you're doing this stuff early and often. I think this is actually the stage when you want to bake in a lot of your business logic into the form of tests or some sort of reliability checks before you start moving stuff in. Because what will happen? Something will break, right? Something is not going to match. And as I said, you know, your airbag will deploy. And so making sure that you have these checks in place, or at least it's a difference between like I guess breaking a nail and going to the air, if you know what I'm saying. So I think I want to add. Yeah, that sounds great. Yeah, it's great. Yeah. Harpreet: [00:55:38] I was going to be so serious and that you all just saved me so much because I was legit, about to do the whole thing and test at the end. And now I know that is bad. Joe: [00:55:47] We're like the world's worst consultants. We should. We should tell you. Yeah, I'm doing a great job. Like, do that. Give us a call later. Now it's fine. Vin: [00:55:56] So one other thing I'd add is and you're probably already doing this, but going back to [00:56:00] the Andy pattern of lifting and shifting everything, prioritize and decide what actually should get moved. And you may find a lot of the other stuff that was in the old system, maybe you don't actually need or maybe you need something slightly different. Harpreet: [00:56:10] In the new system. Also a huge shout out to Abe from a Superconductor, which owns Great Expectations. I'll go to my YouTube channel. Just type in like Abe. I did a thing with Abe. Jimmy and Matt Laszlo. Talking about these type of issues. I think guys will enjoy that because back when I was that comet, I saw all my YouTube. So check that out. Anybody else have anything to add to Mark's question? If so, go ahead. Let me know. It does not look like it. So let's go to. Well, then what did you question by suggesting? Principal, Mark Quigley. Rephrase that question and get things perfected. Vin: [00:57:08] No, I heard the. I stick around for the question. I mean, come on. I can't be Joe. He's already got it covered. 
Harpreet: [00:57:16] There you go. Joe, thanks so much. So let's go ahead and let's jump to Eric's question. And then after Eric Patrice is in the building, Patrice asks a question on LinkedIn regarding a post made video about the mistakes that he made. I want to touch on that, too. That's great. Joe, thanks for hanging out and thanks for hanging out for being here. By the way, Marc Grillo, if you guys you guys want to jump in at any point, you guys just let me know. Everyone is welcome to participate. Just use the raise hand icon and I'll go ahead and put you guys into the queue. As I said, yes, you are totally still here. So. [00:58:00] Okay, so. So this is a question about greedy versus, I guess, dynamic algorithms. So I have a system that is currently currently greedy, right. So it just it's going to match. It's going to match people to I guess I'll just call them consumers. It's going to match a consumer to a partner if they're a good match. And let's just say I'll match them up to three. And so it's going to match a consumer to a partner up to three, up to three maximum. And there you may be potentially qualified to talk to four or five or six, but we're going to cap it at three. So the way the algorithm works right now is it says, well, which three are the most beneficial to me right now? And it will match to that right away. But what we find is. Those partners have capacities and they'll hit them at different times of the day. And not all of those filters are the same, like width, right? Some are pretty narrow. Harpreet: [00:59:05] And so they're going to catch a narrow slice of people and some are wide. And so if we get to the end of the day and this filter got this wide filter, it got filled up early in the day and there's only like a narrow filter left. We're going to have consumers who aren't going to be able to be matched to a partner because this wide filter that is redundant with this narrow filter was chosen just because it was more advantageous at the moment, rather than using some kind of other system, some kind of other algorithm to. I guess, I guess use probability to say, well, chances are pretty good given what we've seen in the past. We're going to fill this if we hang tight. We're just going to put this off to the side and and. And guess that it's going to be filled later. So I kind of thought about I'm trying to I guess what I'm trying to say is, okay, so what kind of like tool or algorithm [01:00:00] or something can I use for that? I've been thinking about like Monte Carlo simulation of like what different days look like in order to. Yeah. Just kind of like simulate what that would look like to see how much of that problem exists, trying to quantify it right now and then how to go about creating that more, more dynamically optimal algorithm rather than something that's more greedy. What I'm saying, whether that has any insight or clarifying questions. Nothing at the moment. Simply that's the insight or I'll. Vin: [01:00:45] I'll jump in ask a but nope never mind Mark you saved. Harpreet: [01:00:51] Let's have a clarifying question, because that that seems like a very tough problem. And there's a lot of moving parts here and I didn't really fully follow. And so could you repeat like I know there's this matching component of people, but where you lost me was like the the the wide versus narrow window with the filters. Sure. Yeah. Can you just go like one piece and hopefully maybe think of something or Venkman? Sure. Yeah, sure. So, I mean, so I work at LendingTree. 
So say somebody wants to get a mortgage. You might have one lender who says, I'm willing to talk to people who make between $50,000 and $300,000 a year, and another lender who says, I only want people who make between $50,000 and $100,000 a year. The second filter is going to catch fewer people than the first, just because its requirements are more stringent. That's what I mean by width. So if somebody comes along who makes between $50,000 and $100,000, [01:02:00] the algorithm picks one of those two filters. If it picks the wide one, that bucket is now full, and it comes off the table.

Eric: [01:02:08] Then somebody comes along later who makes more than $100,000, and the wide filter sits there unfilled. Both filters could have been filled, but instead only one was, and somebody leaves with a sad-face emoji.

Mark: So what I'm getting from this, and tell me if I'm correct, is that this is a ranking problem and an optimization problem at the same time: you have to rank what's the best fit for a specific consumer while also optimizing globally for the most matches. Does that seem right?

Eric: Yeah, I think so. That's actually a really helpful split, ranking and optimization.

Mark: I've done my part. Thanks.

Vin: Real quick, I'm still not understanding why the other buckets would not be filled. It sounded like there are overlaps in the conditions.

Eric: Let's see how I can explain it. This is oversimplifying, because I'm talking about just one tiny condition, but: we rank the buckets and say, these are the top three buckets we should give you. It may be that the person actually fits four buckets, and we give them the top three based on whatever internal ranking algorithm we have.

Eric: [01:03:59] But if we [01:04:00] changed that ranking algorithm to give more weight to something else, it might grab buckets one, two, and four and save three for later, because it knows that every day around 9 p.m., after all the other buckets are full, we tend to get people who fit that bucket. So yes, it could be filled earlier, but we hold it for the late crowd. How do we adjust the algorithm to think that way and look further ahead than the local optimum?

Harpreet: Mark, your hand's up. Do you want to go before or after Vin?

Mark: I just have another clarifying question that will make this a lot easier for me to understand. Does this need to happen in real time, or is it batch matching?

Eric: Real time.

Mark: That's tricky. Now I understand why you're talking about the Monte Carlo aspect, the probability of this bucket being filled later. Okay, I'm curious what others have to say, and I'll collect my thoughts in the meantime.

Harpreet: Vin, go for it.
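A minimal sketch of the Monte Carlo check Eric describes, before any new policy work: simulate many days under the current greedy rule and count how many consumers end up stranded. The lender windows, capacities, and income distribution below are all invented for illustration, not LendingTree's real numbers.

```python
import random

# Hypothetical lenders: an income window plus a daily capacity. The overlap
# between the wide and narrow filters mirrors Eric's example.
LENDERS = [
    {"name": "wide",   "lo": 50_000,  "hi": 300_000, "capacity": 40},
    {"name": "narrow", "lo": 50_000,  "hi": 100_000, "capacity": 20},
    {"name": "jumbo",  "lo": 100_000, "hi": 300_000, "capacity": 20},
]

def simulate_day(n_consumers=100, max_matches=3, seed=None):
    """One simulated day under the current greedy policy; returns the
    fraction of consumers who end the day with no match at all."""
    rng = random.Random(seed)
    capacity = {l["name"]: l["capacity"] for l in LENDERS}
    unmatched = 0
    for _ in range(n_consumers):
        income = rng.lognormvariate(11.4, 0.35)  # invented arrival distribution
        eligible = [l for l in LENDERS
                    if l["lo"] <= income <= l["hi"] and capacity[l["name"]] > 0]
        # Greedy stand-in for the internal ranking: take whatever looks best
        # right now, up to three, with no thought for later arrivals.
        eligible.sort(key=lambda l: l["hi"] - l["lo"], reverse=True)
        chosen = eligible[:max_matches]
        if not chosen:
            unmatched += 1
        for l in chosen:
            capacity[l["name"]] -= 1
    return unmatched / n_consumers

# Monte Carlo over many simulated days quantifies how often greediness
# strands consumers, before investing in a smarter policy.
rates = [simulate_day(seed=s) for s in range(1000)]
print(f"mean unmatched rate: {sum(rates) / len(rates):.1%}")
```

Swapping `simulate_day`'s greedy rule for a candidate policy, say one that reserves redundant wide capacity for later, would give a direct estimate of what the greediness actually costs.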
Vin: [01:05:16] Yeah, Mark's kind of onto it. You need multiple models. One of them is going to be a demand-forecasting model, where you look at likelihood. It sounds like income is a big deal; I'm sure credit score is a big deal; I'm sure there are other features. Pick out your biggest ones and segment your audience. The first thing you want to do, from the lender standpoint, is build segments of lenders and get rid of the windows. You're no longer worried about an individual lender having a window; you have a segment, and the lenders sit behind it. The windows [01:06:00] are the part that's most complicated to handle in real time, because you end up playing something like dominoes, matching a person to a window while worrying about whether a better window closes later. So segment customers, segment lenders, and hide both behind the most important features, so that you can forecast demand with respect to things like income. Then you're not worried about the individual windows anymore; you know how many slots you have for each segment, and you can begin matching that way. And if your forecast says that for some reason more of a certain kind of person shows up later in the day, which is kind of an oddball problem to have —

Vin: [01:06:55] like high-income people showing up at one time of day and low-income people at another, that's weird, I'd never thought about that — then segmentation is still what I would do, because the windows part mostly goes away. It becomes like parking spaces in a parking lot: you're not worried about which lender, you're worried about how many parking spots you have and how many cars arrive at 6 p.m. that fit them, compacts, semis, whatever. Hopefully that metaphor makes sense. So: create multiple models. One forecasts supply, one forecasts demand, you segment each end of it, and then you figure out the matching as an optimization where as many people as possible get matched. And as soon as the business sees that working, they're going to change your objective to maximizing revenue, so just [01:08:00] be ready for that. That's the next thing they'll ask: have you optimized for revenue, and would we be matching differently? Like I said, be ready for that.
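As a rough picture of the segmentation Vin is arguing for, the sketch below pools lender capacity into income-band "parking lots" and compares each lot against a forecast of daily demand for that segment. Every number, the segment boundaries, and the capacity-pooling rule are assumptions made up for illustration.

```python
# Hypothetical segmentation: each income band becomes one "parking lot",
# and individual lender windows disappear into pooled segment capacity.
SEGMENTS = [(50_000, 100_000), (100_000, 200_000), (200_000, 300_000)]

LENDERS = [
    {"name": "wide",   "lo": 50_000,  "hi": 300_000, "capacity": 40},
    {"name": "narrow", "lo": 50_000,  "hi": 100_000, "capacity": 20},
]

def segment_capacity(segments, lenders):
    """Pool lender capacity per segment. A lender whose window spans several
    segments counts toward each; a real version would split its capacity
    across segments instead of double-counting it."""
    return {
        (lo, hi): sum(l["capacity"] for l in lenders
                      if l["lo"] <= lo and hi <= l["hi"])
        for lo, hi in segments
    }

# Invented demand forecast: expected consumers per segment for the day,
# e.g. the output of a historical-arrivals model.
forecast = {(50_000, 100_000): 55, (100_000, 200_000): 30, (200_000, 300_000): 8}

for seg, cap in segment_capacity(SEGMENTS, LENDERS).items():
    slack = cap - forecast[seg]
    print(f"segment {seg}: capacity {cap}, forecast {forecast[seg]}, slack {slack}")
    # Negative slack flags the lot where scarce narrow capacity should be
    # reserved and early arrivals steered to the wide lenders first.
```

The point of this shape is that the matcher only ever reasons about a handful of segment capacities, never about every lender window individually.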
Eric: [01:08:13] That's really helpful. And yeah, the parking lot is a way better analogy, except it's not one person to one parking space; they can have more than one. But the idea is: at the end of the day, how many cars did we want to get in, how many parking spots did we have available at any given time, and were we able to please as many people on both sides of the equation as possible?

Vin: [01:08:36] But realize it's not one parking lot; there are several different parking lots. That's why I'm using the analogy. One parking lot fits a mini subcompact, the next fits a monster truck, because in Nevada you always have to have spots allowed for [01:08:55] the one-foot lift kits we seem to be fond of out here, and then you have semi parking. So it's not, the way you're thinking about it now, one lot with four slots for compact cars, two slots for midsize, and one slot for a semi. Separate that out: pretend there's basically one parking lot for semis and group everything in there. That's what I mean by segmenting it that way. It's not "this particular company has this capacity"; it's "we, over the course of the day, have this much capacity all together." One parking lot is one segment, and you match it up to a customer segment, where the car that can fit in the parking spot gets matched up with the parking spot. Instead of worrying about a certain number of spots per lender, break it out by segment, and you're hiding customers and lenders behind the segments. Because what you're really trying to do is make the segmentation on both sides equivalent, then figure out demand on one side and supply on the other, so at the beginning of the day you [01:10:00] have a picture of what you have available and what's most likely to happen. Hopefully this metaphor that I'm busy killing right now is making sense.

Harpreet: [01:10:29] Good parking lots. A parking lot of parking lots — is that a meta lot now?

Vin: [01:10:38] That's trademarked, so I can't use "meta lot." I'd have to go with a pseudo-lot. A pseudo-lot.

Harpreet: [01:10:48] All right, Mark, let's hear from you in the meantime.

Mark: While we were talking I found an article that might be helpful, from an engineering publication, on data and machine-learning marketplace optimization at Upwork. The issue they're dealing with sounds similar to what's been discussed: breaking the marketplace down into homogeneous segments with predictable behavior, understanding optimal pricing, and quantifying and predicting supply and demand. So hopefully there's something to abstract away from that.

Eric: Thank you, Mark.

Harpreet: Go for it, Mark.

Mark: So I have another clarifying question. How important is time in this? Do you care that a lender or a customer will leave if it takes too long to match? Is it more important to have a correct match, or a correct match within a certain time window? [01:12:00] Or is that even a problem at this point?

Eric: Time is definitely important, although if I had to choose between time and something that's suboptimal for the company but gets the customer what they need, I'd choose the latter. But time always matters; we're all impatient when we're waiting for a screen to load.

Mark: By time I don't mean the app loading quickly. I mean the time from the request for lending to actually signing and getting a loan through LendingTree.

Eric: [01:12:32] So LendingTree itself isn't a lender.
We connect people with lenders, and once they're connected, our job is for the most part done; from there they work with the lender to sign and get the loan. How long that takes differs by product. For some people it's super important, and for others, obviously you care, but it's not a deal-breaker.

Mark: Okay. I ask because I'm also thinking about the optimization component. I'm thinking back to my public-health modeling class, where we did health-care clinic queue optimization. You have a supply side and a demand side, and the analogy my professor gave is a bathtub: your demand is the water flowing into the tub, and your supply is the rate at which the water can drain out.

Mark: [01:13:52] With those two components you can figure out, one, how long it takes somebody to get through, and two, the probability of people jumping out of the queue, saying "this is taking too long for me," and thus reducing your throughput. The reason I think that matters here is that a key piece [01:14:00] of the mental model is: how long is someone willing to wait before they go to a competitor to get their loan? Especially if they're shopping around, they might try LendingTree and some other lending service and their home bank at the same time. How long before someone says, I don't want to do this through this platform anymore? I'm happy to send you references, because there's a whole calculation where you can legitimately compute the supply, the demand, and the jump-out rate. And since you're probably already collecting all of this, I imagine you have the variables to determine it. That can be another piece; I'll send it to you on the side.
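Mark's bathtub can be made concrete with a toy queue simulation: arrivals flow in at one rate, completed matches drain out at another, and anyone whose patience runs out jumps out of the tub. The rates, the patience distribution, and the single server standing in for matching capacity are all invented assumptions here.

```python
import random

def simulate_queue(hours=12.0, arrival_rate=8.0, service_rate=6.0,
                   mean_patience=0.5, seed=0):
    """Toy single-server queue with impatient customers -- Mark's bathtub.
    Water in = arrivals, drain = completed matches, overflow = people who
    give up and go to a competitor. All rates are invented, per hour."""
    rng = random.Random(seed)
    next_arrival = rng.expovariate(arrival_rate)
    service_end = float("inf")        # server starts idle
    queue = []                        # patience deadlines of waiting consumers
    served = gave_up = 0
    while min(next_arrival, service_end) < hours:
        t = min(next_arrival, service_end)
        if t == next_arrival:         # a consumer arrives
            queue.append(t + rng.expovariate(1.0 / mean_patience))
            next_arrival = t + rng.expovariate(arrival_rate)
        else:                         # a service completes
            service_end = float("inf")
        # Anyone whose patience expired while waiting jumps out of the tub.
        still_waiting = [d for d in queue if d > t]
        gave_up += len(queue) - len(still_waiting)
        queue = still_waiting
        # Pull the next waiting consumer if the server is free.
        if service_end == float("inf") and queue:
            queue.pop(0)
            served += 1
            service_end = t + rng.expovariate(service_rate)
    return served, gave_up

served, gave_up = simulate_queue()
print(f"served: {served}, gave up and left: {gave_up}")
```

With real arrival, service, and abandonment rates plugged in (or the closed-form queueing calculations in the references Mark offers to send), the same structure estimates how many consumers are lost purely to waiting.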
Mark: And another thing top of mind, going back to bins, bucketing things for the ranking side. The ranges probably vary widely by lender, but you could bucket them: $0 to $100,000, $100,000 to $200,000, and so on.

Mark: [01:15:12] And I'm curious whether you could build an individual model for each bucket that gives the probability of a user being accepted for a loan in that bucket. Then you'd have a range of probabilities, and those could be the features you use to determine your ranking.

Eric: That's interesting. That's a cool angle I hadn't considered at all; let me write that down. So the probability, for that user, for every single bucket that's relevant to them?

Mark: Yeah, and not even just the relevant ones; the irrelevant ones [01:16:00] would simply be zero. I don't know your underlying distribution, but as a deliberately simple model, think of fitting a logistic regression: zero or one, do they get the loan or not? I'm not saying you'd use logistic regression; the idea is that you get a probability out of it, and from there you have a whole set of features condensed down from all these different factors into sub-buckets. This is mainly how I interpreted what Vin was talking about, so Vin, please correct me if I'm wrong.

Vin: You nailed it, Mark.

Harpreet: Mikiko's got a question here as well. Go for it.

Mikiko: [01:16:50] Oh no, I was just asking: this is all first-party data, right? There's no third-party data you're using?

Eric: [01:17:07] No.

Mikiko: [01:17:10] Then what's your key KPI to optimize on? I still don't see it.

Eric: [01:17:19] In this case, going back to people and buckets: say that in a day we have 100 people come to our site and 100 potential buckets, but the buckets aren't all the same size. Say 90 people end up in a bucket, so 90 buckets are filled. That leaves ten people who didn't get a bucket and ten buckets that didn't get a person. We want it to be 100 people, 100 buckets, everybody matched up. So it's essentially that [01:18:00] ratio, the share of people who fall on the floor unmatched.

Mikiko: [01:18:05] Okay. Sorry, I don't have an answer; just curious.

Harpreet: [01:18:11] All right. Any other inputs or insights to share? Eric, were you at least able to pick up some vocabulary you can take away and search on?

Eric: Yeah, that's been the biggest thing. I had some ideas but only two or three words to describe them, so getting more ideas, and the new construct of Schrödinger's parking lot, is really helpful. Thanks for the ideas and the words.

Harpreet: Marco, I can't remember if your hand was up from before and I forgot to put it down; that was a complete mistake on my part.

Mark: Something else popped into my head: look at the matching algorithms used for propensity-score matching. It may be the wrong path, because it's probably a different problem, but the step before the analysis is a matching component: you have all these covariates and you're trying to reduce them to a single score to match on. That might be a keyword that leads you somewhere useful. The one I keep blanking on... Mahalanobis distance and things like that. Crazy stuff; I always wanted a reason to use it. I'm not saying it's the correct approach, but it's a keyword for how other people handle matching on many variables at once. I don't think it's the right solution for your problem, but it might give you clues about what's out there and what literature people are referencing. I'll send you that too.

Eric: Thank you. I [01:20:00] used Mahalanobis distance once on a clustering problem, trying to find segments. Of course, that was a long time ago.
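A minimal sketch of the per-bucket probability idea Mark floats above: one classifier per income bucket, each emitting P(accepted), with the resulting vector used as ranking features. The training data, synthetic labels, the two features, and the bucket boundaries are all invented stand-ins, and logistic regression is just the simple baseline Mark names, not a recommendation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Invented history: income, credit score, and whether a loan in a given
# bucket was approved. Real labels would come from past match outcomes.
n = 5_000
X = np.column_stack([
    rng.lognormal(11.4, 0.35, n),   # income
    rng.normal(690, 60, n),         # credit score
])
BUCKETS = [(0, 100_000), (100_000, 200_000), (200_000, 10**9)]

models = {}
for bucket in BUCKETS:
    # Synthetic labels: approval odds rise with credit score. One model
    # per bucket, as Mark suggests.
    p = 1.0 / (1.0 + np.exp(-(X[:, 1] - 680) / 40.0))
    y = rng.random(n) < p
    models[bucket] = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

def bucket_features(income, credit_score):
    """P(approval) for each bucket, zero where the consumer isn't eligible.
    The resulting vector feeds whatever ranking model sits on top."""
    return [
        m.predict_proba([[income, credit_score]])[0, 1] if lo <= income < hi else 0.0
        for (lo, hi), m in models.items()
    ]

print(bucket_features(85_000, 710))
```

Buckets a consumer can't fall into contribute a hard zero, which matches Mark's point that the irrelevant buckets simply drop out of the ranking.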
Harpreet: If there are no new questions or inputs on this one... Eric, is there any other direction that would be helpful? Great question, great discussion.

Vin: I've got a critical community question.

Harpreet: Yes, please.

Vin: [01:20:38] I'm feeling bad for my brother Harpreet, because he's been out of his man cave for quite a while now. What's the ETA on getting back to your main man cave?

Harpreet: Yeah, these guys are taking their sweet time. They finally got the floors in, but let me tell you how they went about it. Not only is the basement incapacitated and out of whack; when they came to put the flooring in, they dropped it right in the middle of our kitchen. There were boxes and boxes of luxury vinyl plank blocking an exit and entryway to the house, blocking my fridge, and we had to navigate around it this entire week. I'm messaging the project person like, what do we do? Our life is already inconvenienced and you're making it worse. I don't know when it will be done. They've put all the drywall up, the floors are in; they just have to paint and finish the bathroom, and then I get all my stuff back. Soon, I hope. But I will tell you this: there are some extra renovations I've done in my office that are going to look pretty cool on camera once I get to it. I'm excited for it. And as I've been [01:22:00] explaining to people in the chat, we're down to our last few chickens, and I built my chicken coop in hopes that when the family got burnt out on taking care of chickens, it could become my new man cave. So there it is — sorry for the shake. There it is. It's 200 square feet, but it's pretty quiet. It looks pretty bad, and hard to believe, but it's still got a ton of chickens in it, so I'd have to fix it up, insulate it, and clean it out. That could not be healthy as it stands, for sure. Actually, Patrice, let's get to your question; it was a really good one for Vin. Then I guess we can begin to wrap up after that. Go for it, Patrice.

Patrice: [01:22:45] Thanks, Harpreet. And thank you, Vin, for the Seven Career Mistakes video. I always appreciate those, because there's at least one thing in them general enough to remind me, even as a mid-career switcher, that there is something I know that's worth knowing as I go into something completely new. So my question comes off of that: are there any mistakes you wish you had made and didn't? Maybe an eighth story, or an example you cut, that might fit into a director's cut — more things that would be good for others to know and skip over, so they can make more interesting or educational mistakes than some of the ones you've encountered in your career?

Harpreet: [01:23:35] Real quick, for everyone listening: go to Vin's YouTube channel, The High ROI Data Scientist. He released the video just a couple of days ago, Seven Career Mistakes. Vin, if you can, rattle off those seven mistakes real quick before you get to the question, to give context for folks who haven't been able to watch it yet.

Vin: [01:24:02] I wish I had told some people off, in more blunt terms. That's a mistake I wish I'd made. You don't realize the power of pushback.
And I kind of touched on pushback in the video, on telling people, "do your job." But I wish I had told more leaders to go straight to hell. That is the mistake, the good kind of mistake, I wish I had made. It would have taught me a lot about how easily pushed off their point most mid-level managers are. They're not really that good at their jobs; they're just really good at sounding assertive, at sounding like they know what they're talking about. And I wish that much earlier in my career I had just said: okay, you're full of it. No, that's not what's happening. We're not going to do that. Here's what's going to happen instead. If I had done that, I would have had even more successful projects, because a lot of my early mistakes came from listening to people because they had a cool title, not because they knew what they were talking about. So that's one side: I would have learned a lot more through pushback. The other side is that, as a leader, there were a couple of moments I remember where I should have said no.

Vin: [01:25:31] That's the big takeaway for me. I remember laying one person off, and I thought I had won a big victory, because they had asked me to give them two names and I said, well, how about I just close this open position I have, and that's one name. I thought I was winning that battle. In reality, I should have just told them: no, I'm not picking anybody. I need everybody on my team. You want to fire somebody? Fire [01:26:00] me. If I had pushed back like that and said no, that would be the one thing I would not regret, because I look back at it now: the company was profitable, making money. They were not hurting for cash in any way, shape, or form. They were in no danger of going out of business, and they were asking us to lay people off so they could boost the share price. I should have just pushed back and said no. So for those two reasons, I wish that earlier in my career I'd had a little more... I don't know about backbone, but you know what I'm saying. I should have been more articulate, more willing to push back and say: look, no, that's not working.

Vin: [01:26:46] Because, like I said, I would be happier with myself as an early- and mid-career leader than I am looking back now. There was also a phone call I made to somebody on a weekend. He was being asked to come in, but he had family stuff. I remember my director dragging me into her office and saying, no, you need to call this person and get him to come in. And I should have said: hello? You want him to come in, you call him. Do it yourself. That should have been my response. Those are the moments I look back on as a leader and say: I was not a good leader. That was bad. I should have pushed back even when I was young. So if I had one more story, that would have been it: tell people to go to hell. You will be so surprised. The majority of people will just look at you for a few seconds, as if you're supposed to be upset and contrite and say you're sorry. And when you don't, when you double down, they back off, because they're wrong and they know it.

Patrice: [01:27:53] Thanks for that one more story. Can I throw in one more follow-up?

Harpreet: [01:28:00] Yeah.
Patrice: [01:28:01] I'm wondering... one of the things that's most compelling about the seven career mistakes you put together is how far past them you've landed. That's what makes them interesting, right? Anybody on LinkedIn can write about seven mistakes they made, and it's not everybody's seven mistakes that are interesting. I probably didn't phrase that the best way, but what I'm really asking is: is there a certain place you have to have reached, or is it about the environment you're in? When do you get to start talking about your mistakes and have them land alongside your confidence in your achievements, rather than coming across as, oh, this person hasn't achieved?

Vin: [01:29:11] That's really interesting, and I'm going to answer it. But I'm also looking at the chat, and yes, as a guy this is way easier. I have to 100% say that: as a guy, this is easier. And I would almost say, if you're a woman and you don't feel like you can push back on your boss like that, you're in the wrong place. If someone pushes back against me, I'm going to listen. I trust my people; I don't care who it is pushing back on me. If somebody tells me to go to hell, I'm going to think to myself, I must have said something out of line. That's the leadership style where you trust your people, and if one of your people says you're out of line, whoa, it's time to reset and reevaluate. But that's not everyone's leadership [01:30:00] style, and if you're not in that position, that's a big red flag that you're probably in a more toxic environment than you think you are. If you're a woman, or you're in one of the disadvantaged or underrepresented groups, and you feel like if you told your boss to go to hell they would fire you on the spot, but if a guy said it he'd be fine — you're in a more toxic environment than you think you are. Your boss should make it a comfortable thing to say: that's wrong, you're wrong, we're not going to do this. Your boss should be asking for your input before making blanket statements that might be wrong. Step one is your boss asking you for input, and then deciding. Every once in a while you can't do that as a leader, so it's not 100% of the time, but I'd say at least 80%, there should be a conversation first before "this is what must happen."

Vin: [01:31:04] So yeah, I want to acknowledge that dudes have this a whole lot easier than women do. But that being said, if you're worried about that, it's a red flag: you're in a toxic environment, and you shouldn't be. Now, to answer the question about when you can admit this: you don't realize how much of a power move it is to be in any spot and say, oh yeah, I did that wrong. That was so stupid. And here's what I did next. That is the power move. I watched somebody at Davos talking about an investment he had absolutely blundered.
And the person interviewing him called him on it, said this was one of his biggest positions and it's down this much. [01:32:00] And he just said, yeah, it is. Power move. "Yeah, you're right." There is so much strength in being wrong. When somebody tries to get at you by pointing out a time you were wrong, and you just stop and go, "yeah, you're right" — there is so much strength in being able to be wrong, in being able to say, oh yeah, I made that mistake. Because now everyone looks at you as human, and at the same time you're allowing people at a junior level to be human too. That's the power move of it all: everyone in that room has messed up. Everyone. If you're in a room with a CEO, that CEO has made some painful mistakes. And when you're the only one in the room who says, yeah, I did that, and you own it, you go from wherever you are now to possibly the strongest personality in the room, because you just opened yourself up and said: I've made mistakes. I'm imperfect. You've said the thing everybody else is afraid to say, and you've revealed the truth that everyone in that room has failed. And if they haven't, they haven't done anything very interesting. If all you're doing is making cheeseburgers, sure, you've probably never failed at the job. But if you're trying to make the best burger on earth, you've made some really bad ones in the attempt to make that great one. There's a difference there. So be wrong. Be the person who says, yeah, I failed that one time. Don't list all your failures; don't go so deep that you just look bad. But from time to time, acknowledge failure, especially if you see people in the room who are afraid [01:34:00] of it, who are worried about admitting that something went wrong, because that's another sign of a toxic culture, when people are afraid to say "I messed up." Be the one who does it first.

Patrice: [01:34:14] Thank you. I appreciate that.

Harpreet: [01:34:17] Absolutely love that. Interesting: I just started the third chapter of Adam Grant's book, which I'm listening to on Audible, and the third chapter is called "The Joy of Being Wrong: The Thrill of Not Believing Everything You Think." So I'm excited to listen to that chapter and tie it back to this discussion. We'll go to Mikiko next — go for it. And by the way, last call for questions: if you've got one, let me know and I'll add it to the queue; otherwise we'll wrap up after this.

Mikiko: [01:34:52] I think there are a few points here. The stories you see on LinkedIn that go viral, where someone talks about the mistakes they made — I think there are a couple of reasons they land. One is how they frame it. They're not framing it as "I totally screwed up and I'm a loser"; they're framing it as: this was the situation, this is what happened, and here are the learnings I'm providing you, the audience. So there is a way to frame that story, and I think that's going to be true no matter what.
Whether it's out in public on social media or in one-on-one interactions, that holds. And in the more local conversations: if you're handling a project and you screw something up but don't admit it, over time that erodes your relationship with your business partners. But once again, there's a way to frame it that does two things.

Mikiko: [01:35:55] One, it's positive: this is what I learned from what I experienced, and this is my [01:36:00] sort of gift to you. And two: I'm an earnest person who wants to work with you, who wants to be a partner to you. I think people are generally receptive to that. That said, as women we do have to have that conversation in a certain way; we do have to frame it. For example, one of my teammates is a Black woman, and she carries a lot of stress and pressure because she feels she has to perform at a much higher level than her peers, because the reality is that the same actions might be read differently. And to Vin's point, that's also a comment about the environment someone is in: if people don't acknowledge that kind of bias, it can lead to the same actions being received differently. Something she does really effectively is framing things as learning stories. [01:37:15] As far as I've seen she hasn't even really made mistakes, but she always turns something into a learning story, and I find that personally very inspiring. Whenever I've made a mistake and it's been received okay, it's because I found a way to talk about it and to say what I'm going to do differently as a result. So there are a lot of factors at play. I know that for me, I do feel nervous talking about my mistakes, because to a certain degree it does hit my credibility as a woman, and especially as a female content creator I feel I have to be very careful about what I say and do. But in terms of being authentic, [01:38:00] and being unafraid to assert, hey, I still have value — the insight I gained from this experience can help you, can help other people — I still feel very strongly about that. Thank you.

Harpreet: [01:38:22] Let's go to Navid, and then Mark.

Navid: [01:38:29] You know, this is a very touchy topic. Because although in theory I think what Vin is saying is great, in application some of us don't have the same liberties that some of you have. And I've done it quite a few times, the "go to hell" and "no, I can't do it" approach. There are places where I've done that and been successful, and places where I've done it and not been successful, and the number of times I've been successful is far less than the number of times I've failed. It has nothing to do with me operating any differently; it simply comes down to whether my boss, and his boss, had my back. That's why I've written a few posts about it. When we watched Judge Jackson's hearing — she's the smartest one in the room, hands down, right?
And it doesn't matter. She needed the Cory Bookers; she needed a couple more white guys to say, yeah, she's right, before it counted. You don't get to change that rule by yourself. So you can think, hey, I work at this company and my boss has my back — but maybe his boss doesn't [01:40:00] have your back, and maybe his hands are tied. I've been with clients where I've said, I'm sorry, I can't do it. It's unrealistic; you're asking me to do something I can't do. And there are times my boss will say, yeah, she's right — because I'm not saying it out of spite; I simply can't deliver at the pace they're asking. So in theory I would really love for that world to exist. But as a woman, as a brown woman, I don't get to say that a lot. Which is unfortunate, but I hope it's better in the next few decades. That's also another reason I post on LinkedIn a lot: I want people to know me before they know me. So yeah, I agree with you that if you're in a place where you can't say that, it's a toxic place, and I haven't lasted in those places, because I'm still going to say what I'm going to say. But it's not as black and white as it is for maybe some of you guys who are lucky.

Harpreet: [01:41:18] Thanks so much for sharing that truth. Mikiko, did you want to chime in before we move on?

Mikiko: [01:41:24] Oh, I was just saying thanks. Nothing else.

Harpreet: [01:41:28] Mark, you're up.

Mark: I was muted. I was actually going to talk about the whole manager-having-your-back-versus-not thing, but Navid said that way better than I could. One thing I do want to note: I can't fully identify with the struggles of being a woman in the workplace, especially in tech, but being a Black male, there are parts I do identify with; there are things I feel I can and cannot do. What I really want to [01:42:00] highlight is that working at my current job has been so transformational for me, because I carried a lot of that baggage: my parents instilling in me that you have to work twice as hard just to get enough, and growing up with a lot of racism in high school, being told I wasn't smart enough because of the color of my skin. Things like that really stick with you, and I operated in the workplace with that baggage. Coming into my current role, where psychological safety is so upheld, not only between me and my manager but across our team and the company as a whole —

Mark: [01:42:45] I've never experienced that before. This is the first job where I've been able to just focus on the work and not on the politics and optics of things, and the amount of freedom in that is absolutely wild. And maybe this is naive, but I don't want to go back. It feels like a hard line for me now: I need a manager who supports me and gets this. A big reason I stay where I am is exactly this, because it's so hard to find. My concern going forward in my career is whether I can replicate this at other jobs, and I honestly don't know, because it's so novel. But I can't imagine going back to a job where I'm scared to say what's really on my mind or to give feedback. So I really appreciate this conversation.
Mikiko: [01:43:49] Thanks for sharing those thoughts. I actually had kind of the opposite experience early in my career: I had the kind of environment you're having now, and it was much later that I realized, oh, [01:44:00] this is how many different kinds of work environments there are, and that's how rare that actually was. But it's not the only place; there are at least a few more out there.

Harpreet: [01:44:16] Excellent topic. Thanks so much for bringing that up. Patrice, this was a great discussion. Any final words, final questions? Doesn't look like it. Thank you all so much for hanging out; let's go ahead and start wrapping this up. Great episode today, great conversations. Don't forget to tune in to the podcast; we've released some great episodes over the last couple of weeks. This week's was with Jeremy Adamson, talking about leading data science teams. And I've got interviews coming up in the next couple of weeks that are pretty cool: next week is with the gentlemen from One Salting, Jonathan Lee and Bennett, so definitely check that out. Shout out to Mikiko for breaking 1,000 subscribers on her channel. That's crazy; I've been doing my thing for two years and I only have 1,000 subscribers, and she started just recently and is still crushing it. Great job, Mikiko. So yes, the gentlemen from One Salting next week, and after that Dr. Laura Pence, who is with Spartan Race — I want to say she's something like their chief scientist. It was a great conversation, and I enjoyed talking to her. Take care, everyone; have a great rest of the weekend, and I appreciate you all being here. I'm about to end this on some serious music, because, you know, rest in peace — I'm grateful for the music [01:46:00] he put out. Thank you all for everything. Have a good rest of the afternoon and evening. And remember: you've got one life on this planet, so why not try to do something good?