[00:00:00] Maryam, welcome to the show. Thanks for coming.

Thanks for having me, Jeff.

I gotta be honest, I'm a little intimidated right now. I think you might be the most credentialed person we've had: a PhD, Harvard Business School, watsonx — Watson is like the godfather of AI. I feel like you've done a lot of the things I have touched on over the years. How did you get here? Because you have an engineering background, a product background, and you just span so much. How does one get to where you are?

I have to say it's been a fun ride. My background is software engineering. Then I got into AI about 20 years ago — and I'm aging myself — a couple of master's degrees in multi-agent systems. Ironically, multi-agent systems are coming back, but 20 years ago I was working on them, and at the time it was mostly on paper. I wanted to build. So for my PhD, I left AI and I joined human-computer [00:01:00] interaction. I started building AI systems, and over the past few years it's been everywhere. Whenever you need to build an application that touches users, solves a problem, and is data driven, there's a good chance that Maryam is touching that.

It seems like you've set your whole career up to be in a great spot for right now. Glad to have you on. I get to travel around the country and talk to product leaders — at this point it's been something like a thousand product leaders, and out of a thousand, AI has come up a thousand times. It is just a guaranteed topic that everyone wants to hit on. So you bring great context. And you know, there are theories about the skillset you need to have as a product leader. They say it's like being the CEO of your own company, and that's true.

You need to technically understand the field — that's the technical background. You need to understand what it takes to implement it — that's your engineering background. You need to understand the interactions with your users to optimize the experience — that's your design part. You need to work with your researchers to make sure it's something differentiated that solves the problem. [00:02:00] So throughout my career, I always looked for opportunities to figure out what was missing, jumped on it, and eventually found my way to product to bring everything together.

Speaking of AI and this rapid ascent we've been seeing — because while you've worked in it for a long time, it's really only in the past couple of years that it hit pop culture status, right, when ChatGPT came out — we've gone through just this rollercoaster of phases in a couple of years. When I talked to you previously, you put a really good point on this idea: everyone started out like, holy crap, this is new and magic. But what does it take to go from "this is a fun toy" to "this is something that enterprise is going to use regularly"?

Good question. This is a market that is evolving rapidly on a daily basis, and it's important to understand how it changed over the past two years to get some exposure into where it should go moving forward. You mentioned ChatGPT. What happened then? The technology was not new; the technology was out [00:03:00] there, but in the hands of researchers. The amazing thing that happened with ChatGPT was that they made it accessible to everyone. Literally everyone is playing with those applications, and that's where innovation happens. Suddenly, we started seeing
new use cases that the researchers previously hadn't thought of, and that was the segue into the market paying more attention to what opportunities it unlocks. The first year, it was mostly boards asking CIOs to go do something with gen AI. Everyone was exploring, and they were like, hey, I wanna do gen AI, what should I do? And we were like: to do exactly what? What problem are you solving? So when we got to the second year, 2024, at the beginning of the year I started noticing enterprises were struggling to justify the investment and the ROI of gen AI, because they had basically picked an application — a solution looking for a problem to [00:04:00] solve — versus the other way around: how can I benefit from this technology? Right? And at the time, the acceleration from gen AI was very focused on a series of niche use cases: content-grounded question answering, code generation, content generation, information extraction, summarization, and classification. So if you had one of those use cases, you could apply an LLM to it. If not, it didn't help you much.

I feel like that was the phase where — someone smarter than me put it as the Salt Bae of AI — all the boards were just like, can you just sprinkle AI on your roadmap maybe? Or, what's your AI plan? And they had no goal for it. It was just: just use AI. But then, ups and downs of the excitement, and now it's, oh, how can I realize the value?

And then agents happened. AI agents — and it was really LLMs taking actions. But the reason the enterprise market got so excited about agents was that they saw them as an opportunity to connect all that acceleration to every single corner of [00:05:00] their business through API calling, even to the legacy systems. And now we are talking. Suddenly the excitement went up again, and now we are hearing boards asking CIOs to go get agents in production, and we are like: to do what? So as you see, lots of exploration and investigation looking for the wow factor. That energy in the market has shifted beyond all of that experimentation, until the next new thing comes in. But at least for now, enterprises are dealing with the reality of deploying this technology in production — and at the scale of enterprises, which is a completely different story: what is the cost of this, what is the latency of this, what is the performance of this, what is my risk exposure, am I gonna be on the front page of public media? It's more fear of messing up versus fear of missing out. That's where the enterprise market resides today.

We had some of the senior folks from Zuora on a little while back to [00:06:00] talk about how they've pushed AI across their entire company, and it was interesting because a huge amount of the stuff they had thought about wasn't about the problems they were solving. They did that, and it was an important piece of it, but before they got to it, it was: how do you give easy access to as many people throughout the company as possible, but maintain the security they need, because they're dealing with financial data? How do you keep all their customers' sensitive data safe? The first question they answered was, how do we roll it out to everyone, and then it became, how do we roll it out and maintain that safety? They've been really far ahead from what I've seen, because they had to answer that question first. No one wants to be on the front page of the Wall Street Journal for a data leak.
You wanna be there because you did something rad with AI.

Yeah. I was recently sitting with our client advisory board, which constantly and continuously provides feedback on our roadmap, and the number one challenge they brought up — altogether, with no question — was accountability of AI agents. Something goes wrong [00:07:00] in front of the users: who is accountable here? Is it the person or entity that pre-trained the LLM? Is it the person that built the agent? Is it the person that pulled the agent into a workflow, like an HR system? Is it the person that released the HR system to the employees? Or is it the user that actually interacts with that system — and in this case, maybe they were a bad actor trying to get the agent to do something unexpected? Who is accountable here? So this represents some of the challenges the market is dealing with to ensure a responsible implementation of this technology. Especially — you mentioned identity of the agents and access management — there is a certain level of autonomy involved here. How do we define the identity of the agents? What is the sensitivity of the use case? That's why it's important to start with the problem versus a solution that I want to have. Yeah, you can build agents in less than five minutes, so you can have tens of thousands of agents, but the reality is: how many of those agents are solving a problem in [00:08:00] production?

Let's talk about that point. We had Nan Yu, who's the head of product over at Linear, on, and they wrote a great guide to the rules of agents, basically. One interesting part they hit on, which I think touches on what you just talked about, was that an agent can never be at fault, because agents are not people. They have no fault; they're programmed. They may not be deterministic like code is, but at the same time, if they're not a person, they can't hold fault if something goes wrong. Somewhere in the process, something someone built went wrong — like the guardrails weren't strong enough, or, like you said, maybe the end user was a bad actor. What do you think on that end?

So the first step is to observe the behavior of what the agent does. Once you observe the behavior, you can see what the agents are doing. But observing and knowing what they are doing is not necessarily knowing that they are doing the right thing, right? So you need to go in, as a second step, and evaluate — evaluate whether the behavior of the agent is right. Let's say you're a product leader: you need a set of metrics that your business cares about and that are non-negotiable [00:09:00] for your system, and you measure the behavior of the agent against those measures. So now you observe the behavior, and you know the agent is doing the right thing. The third step is to optimize it for the scale of your enterprise. Okay, the agent is right, but it's using a frontier model behind the scenes that introduces 40-second latency on a chatbot I wanna put in front of customers on the website — that doesn't work, right? So you need to optimize that. And last but not least is to govern. If you're sitting in a risk and compliance posture, you need to set the rules and policies — you mentioned guardrails — for everyone to follow. They set the rules for what agents can and cannot do, and they need to make sure those policies are enforced on agents, no matter where they are built or where they are run.
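To make that last step concrete, here is a minimal sketch of what enforcing a guardrail policy on agent output could look like. Every name, pattern, and mode below is a hypothetical illustration for this conversation, not any particular product's API:

```python
import re

# Hypothetical PII patterns; a real guardrail would use a vetted detector,
# not two regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def detect_pii(text: str) -> list[str]:
    """Return the kinds of PII found in an agent's output."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]

def enforce_policy(agent_output: str, mode: str = "prevent") -> str:
    """Apply the same policy to every agent, wherever it was built or runs."""
    findings = detect_pii(agent_output)
    if not findings:
        return agent_output
    if mode == "prevent":
        # Preventive: block the response before it ever reaches the user.
        return f"[response withheld: policy violation ({', '.join(findings)})]"
    # Detective: let it through, but flag it for review.
    print(f"guardrail flagged: {findings}")
    return agent_output

print(enforce_policy("Sure, the employee's SSN is 123-45-6789."))
```

The design choice between the two modes — flagging a violation after the fact versus blocking it before it ships — is exactly the nuance the conversation turns to next.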
So knowing that the guardrail highlighted something is good, but you wanna stop that behavior before it happens, versus: oh, this happened, I see that this [00:10:00] happened. No — we want to prevent that, right? So these are some of the nuances we see the market going after.

Since you have this kind of advisory board, it seems like there's probably good insight here. How do you see companies tackling that? At some level, you can try to get ahead of some problems — you can foresee some of the things that can happen — but how do people look to solve that ahead of time? Because obviously there's a spectrum here, right? At one end, you try to solve every single potential bad case, and you're gonna take forever to launch anything, and we know that's not how it goes — unless you're a hospital, maybe. At the other end, you do nothing and it's chaos, and hopefully that's not how it goes either.

The most important step is to understand your use case and the risks associated with it. The pattern we typically see in organizations is that someone comes in and defines a use case. For example — I'm just making this up — I wanna build an HR system for my employees that is powered by AI agents. So now you look into: what is this system doing? Is it internal? Is it external? Is it gonna have access to sensitive [00:11:00] employee information? What other information is it gonna have access to? Is it gonna spit out information to the employee, or is it just gonna receive it? You ask a bunch of questions and, based on that, identify risk factors. In our world — I don't know if you've heard about Risk Atlas; that's an asset from IBM Research — we look into dimensions of risk and brought up a bunch of dimensions where we say: hey, look into these dimensions and evaluate your risk posture, and then you can define what you wanna do. Because you don't wanna limit everyone building agentic workflows within your org and say, oh, every single person building agentic workflows has to go get approval from the CIO or CDO, right? You wanna enable your developers, but at the same time you wanna understand your non-negotiable points and make sure they are enforced. So you design your approval workflow, and then — in this HR case, for example — they detect that PII is gonna be involved and they make it [00:12:00] mandatory to have a guardrail to detect PII. So now, when the developer is building this system, it's: you have these assets available — are you using guardrails? Use this — and we can monitor the threshold to make sure you're filtering it and your agent has that behavior.

AI presents — I hate to use this phrase — kind of a new world. At the same time, when you get down to the nuts and bolts of it, a lot of the things you need to do in security and governance are not that different. Are you using sensitive data? If so, higher scrutiny. If you're just creating something that uses publicly accessible data to write content, there's not nearly the same level of scrutiny needed, 'cause nothing sensitive is being touched there.

Very good point. A lot of the things we used to do with non-gen-AI are applicable, but there are two things that are special about agentic workflows. One is the autonomy that the agent has — it's not that there is a human in the loop to evaluate everything.
So, back to the identity of the agent: what is the access [00:13:00] control, what is the agent okay to do? And the second one is the limitations of LLMs — non-deterministic, unsupervised learning that can potentially hallucinate, with a lack of explainability, or limited explainability. Now those limitations are amplified through agents. So it becomes more important to make sure you have a solid governance process in place before you go to production. In reality, enterprises are looking into a series of what we call evals — evaluations like faithfulness (is the answer grounded in the right information?), context relevance, and a bunch of others. And I think that's where enterprises are struggling today. That's the core of the challenge for them as they figure out: okay, what are the different eval metrics that I need to watch, and what are the tools available to me to make that sound decision? (There's a toy sketch of one such eval below.)

AI is really impacting the world of product leaders in two ways. [00:14:00] The first is: what are the opportunities and risks associated with my portfolio? We've seen a lot of startups powered up by gen AI over the past two years — you can just look at the number of startups — because this technology powers up something new that has not been out there. We see a lot of products using generative AI to enrich their existing capabilities. I would argue that any product out there can potentially benefit from this in some shape or form, right? So these are opportunities, and a product leader needs a really good understanding of where the technology is going, to position their products at the edge of it and take advantage. But there are also risks associated with it: it can crash your products at the same time, if not designed well. So that's area number one.

Area number two is, as you mentioned, the roles and responsibilities that are changing. Historically, a product manager would work with a number of engineers — depending on the firm, [00:15:00] maybe every five to 20 engineers assigned to one product manager, depending on the industry. And then the product manager writes a PRD to provide the specific requirements for the developers to go and build on. Then you ship a product, it goes to production, and it does what you expect the product to do. If there's a bug, you fix it. That was the cycle. In the new world, the product manager, with AI-assisted coding — they code, they build, they have an army of agents. It's at the prototype level now, but it's gonna keep getting better every day; as we speak, it's getting better. So it's safe to assume that the next generation of product managers will be designing for agents. The PRDs are gonna target the language of the AI modules — so it's a prompt that you're defining — and the role is evolving. You have probably heard of the role of AI PM. The role of an AI PM is really looking at observability [00:16:00] dashboards to make sure the agents out there are doing the right things. So as product leaders, we need a very good grasp of where these two areas are going and to plan for it — with our talent, with upskilling, and with our products.

You bring up observability, right? One of the main things we do on the LogRocket side — everyone's familiar with session replay, but this is the idea of an AI agent that can actually watch every interaction for you and just flag what's important.
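To ground the evals mentioned above, here is a deliberately naive sketch of a faithfulness check, approximated as token overlap between the answer and the retrieved context. Production eval suites typically use LLM judges or NLI models; the function names and threshold here are illustrative assumptions, not any real tool's API:

```python
def faithfulness(answer: str, context: str) -> float:
    """Crude proxy: fraction of answer tokens that also appear in the context."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

# Non-negotiable thresholds a product leader might set for a given use case.
THRESHOLDS = {"faithfulness": 0.6}

def evaluate(answer: str, context: str) -> dict:
    """Score an agent's answer and report which metrics failed."""
    scores = {"faithfulness": faithfulness(answer, context)}
    failures = [name for name, score in scores.items() if score < THRESHOLDS[name]]
    return {"scores": scores, "passed": not failures, "failures": failures}

context = "the deductible for the gold plan is 500 dollars per year"
print(evaluate("the gold plan deductible is 500 dollars", context))
```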
The old ways of observability don't function the same way anymore. It used to be that you could tell whether someone did the thing you wanted them to do, because you just put events in your application, and you would see they did this step, this step, this step, to this dashboard or this endpoint, whatever it may be. When it's agents, and a lot of the stuff is going on in the background, you don't have that workflow you can monitor, so what comes out is just: did they get the outcome they wanted? Oftentimes you need a much more visual medium to assess that, and that's been one of the big use cases we've seen as [00:17:00] we've upleveled what can be done in digital experience understanding — exactly that: how do I measure these outcomes when they're not event based, when they're, like you said, non-deterministic and agent based? So that's been a really interesting piece we've seen evolve as companies create these new workflows: how do you ensure that users are getting what they want out of them, that they're solving real problems?

I have one other piece I'm curious about that I want to hit on. You said earlier that companies are looking at latency, cost, trust — and that gets into the governance we just talked about. The cost side is one that goes back and forth in my head, and I'm curious to get your take. People are looking at lower-cost models, and that's always important. But at the same time, given the rapid acceleration of everything, we know the paradigm of how infrastructure costs go — and I think you can think of foundation models as infrastructure at some level. We know that over time the cost comes down. So should product people be focusing on optimizing costs right now? Is that the right thing to do? Or is it: we know we're temporarily gonna take a margin hit, because it's much more valuable to be the absolute best at [00:18:00] delivering — at acquiring customers, because it's a much, much better experience or function we can deliver — and then we can figure out the cost later? There are certain periods of time where you only get one strike at rapid acquisition, right? If you look at Uber early on, they bled money for a long time, because it was all about acquisition and winning a market, and now they're profitable and doing great.

They should care about cost, and I'll tell you why: because cost has a dependency on two other factors that are, from my perspective, even more important than cost. One is latency, and two is footprint and energy consumption. When you think about cost, where is this cost coming from? It's from compute. Compute requires energy, and compute introduces latency. So when you go after optimizing cost, it's really a signal for: hey, I'm optimizing for compute. Optimizing for compute is what they should pay attention to, because as a result of that, you are optimizing cost and the other [00:19:00] stuff. And because of that — let's say you're a company that doesn't care about cost, and, as you mentioned with Uber, you're okay to spend cash until you're profitable. Then you put this system in front of your customers and the latency is killing it. I'll give you a real-world example: an insurance company was using chatbots on their website. So you go to the website and you see the popup that says, how can I help you?
Historically, for complicated cases, a three-second response time was okay for that. Then they added LLMs behind the scenes — a RAG pattern, retrieval-augmented generation — to retrieve the answer and then reply. Can you guess what the latency went up to?

I'm gonna guess 30 seconds, but I feel like I'm wrong.

Yeah — 40 seconds.

40 seconds. Okay. Basically not usable. It's like, whoa, we can't use this. Which is almost fine if you're talking to a human — that's gonna take way longer than that, potentially.

Yeah, but it's like you submit and [00:20:00] you're just waiting. So latency is extremely important, and then you go to optimization. But in this case, optimization is not just optimizing the model; it's optimizing the whole stack. Where is the model hosted? What kind of GPU? What is the memory of that GPU? What are the tool callings, right? How far is it from the end user? Lots of questions come into play to optimize that, which is way beyond cost. So even though cost is not the driver, cost is pushing you to optimize for compute and the rest of the stack, and that is gonna benefit your cost at the end of the day.

I mean, I get the point about optimizing for speed, right? It's funny, because I think we both remember when the time for an e-commerce site to turn around any request could be multiple seconds, and people were happy with that. Then suddenly it got to the point where if it's not instantaneous, you're losing customers. Google put out a thing saying every thousandth of a second matters from a customer retention and conversion standpoint, [00:21:00] and with chat we're definitely getting there. It used to be I would tell teams: if you're waiting a minute for a response, people are gonna leave. And now it's just expected to be instantaneous, especially if it's an LLM. But there is an element at times where the cost of the model you're paying for can be high compute, and part of what you're paying for is that high compute, higher speed. And that's kind of where I'm saying we might be in a moment in time where, if you have the luxury, you can look past cost. I'm not saying don't look at cost at all, but there are gonna be times when it's worth taking a lower margin, maybe temporarily, because long term you have some bigger game — where optimizing for cost isn't the right move. It's not right for everyone, but there are gonna be certain companies that think that's the right move.

I would reframe it a little bit: instead of optimizing for cost, I would say you redesign your architecture to be agnostic to the models. If you put abstraction between your architecture and the compute you require, you can easily do the switch tomorrow. The technology is moving such that you can get much higher performance from a much [00:22:00] smaller model, and you can just make the switch. The challenge I see in enterprises is the time it takes, and the approval they need, just to replace a single model. And that's the challenge for them, because they have hardcoded everything — a direct dependency on the model — and they can't just pivot. So I would say, if you wanna pick your battle now, it's probably better to focus on your architecture and make sure that, with gen AI and agentic workflows and everything, you have a level of abstraction between the underlying models and the use cases you're building.
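As a rough illustration of that abstraction layer — the interface and model names below are made up for the example, not a real SDK:

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only model interface the application is allowed to depend on."""
    def generate(self, prompt: str) -> str: ...

# Stand-ins for real hosted models; swapping one for the other is a config
# change, not a rewrite, because nothing downstream names a provider.
class FrontierModel:
    def generate(self, prompt: str) -> str:
        return f"[frontier-model answer to: {prompt}]"

class SmallModel:
    def generate(self, prompt: str) -> str:
        return f"[small-model answer to: {prompt}]"

def answer_customer(model: ChatModel, question: str) -> str:
    # Application code sees only the abstraction, never the vendor.
    return model.generate(question)

# Tomorrow a smaller model hits the same quality bar: one line changes.
print(answer_customer(SmallModel(), "What is my deductible?"))
```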
Well, I think that goes back to a point you've brought up before: part of doing this right is going to be taking the right tool — in this case, the right model — for the job. There are gonna be some jobs where the right answer is some super-expensive, big God model that can do everything really, really well and is super, super fast, all those kinds of things. And there are gonna be times when the right answer — if we define the right answer as the best experience for your customer and also for your business — is something cheaper, because sometimes some [00:23:00] companies do have to look more at cost than others, and things like that. But picking the right model for each use case and each function is gonna be really important. And like you said, there's just no way you can do that well over time unless you're architecting your app to be agnostic to these things, able to switch them in and out, and pick, and go through and optimize internally.

Exactly. And I even see a lot of automatic routing at this point: a request comes in, and you automatically have a module that classifies it — even using LLMs to classify — and sends it to the right model (there's a sketch of the idea below). So lots of innovation there, and we're gonna see a lot more too.

Looking at that, given your background and understanding — and probably more depth of involvement in a lot of these things than most people across the board in these functions — what can people do right now, from a product standpoint or an engineering team standpoint, to ensure that they're ready in a year, two years, five years, to keep excelling at their roles versus becoming obsolete?

In my own org, I'm responsible for product management, design, and engineering, for the exact same reason that you [00:24:00] mentioned: the lines are blurring. Historically, product management was isolated from the verticals doing design and engineering, but now they are part of the same team, day to day — coding together, building together. It's not that the product manager tells the designer what they need; they come together, and the designer and the engineer build. They are all interacting with their AI to build toward the unified vision they have, and each brings their expertise: the designer brings the design voice and expertise, the product manager brings the market view, and engineering brings the technical optimization view. So one year from now, I'm pretty sure maybe new roles will have evolved, but for the next 12 months it's gonna be the lines blurring. Just leverage AI as much as you can and go build — my product managers are shipping features as we speak.[00:25:00]

Yeah, it sounds like "go build" is generally a good tip.

It is. Just use it.

The thing I'm curious to get your take on, given what you said: part of the answer, it seems, might be, go read a book. Go read about design theory; go understand better how design works, right, if you're an engineer or a product person. If you're a designer, go learn a little bit about how code operates. Because these things are gonna blur, and some of this is not AI based, it's education based.

We need more thinking than doing, because the doing we can pair up with the AI-assisted coding that's available to us, or LLMs, or everything. It frees up our time to do more thinking and planning looking forward.

So I'm with you on that. I love it. Do more thinking, because AI can help us with the doing.
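Here is the routing idea from above as a minimal sketch. The keyword heuristic stands in for what, as Maryam notes, is often itself a small LLM classifier, and both "models" are hypothetical stand-ins:

```python
def small_model(prompt: str) -> str:
    return f"[fast, cheap answer to: {prompt}]"      # stand-in for a small model

def frontier_model(prompt: str) -> str:
    return f"[slower, pricier answer to: {prompt}]"  # stand-in for a frontier model

ROUTES = {"small": small_model, "frontier": frontier_model}

def classify(question: str) -> str:
    """Pick a route; in production this classifier is often itself a small LLM."""
    needs_reasoning = any(w in question.lower() for w in ("why", "explain", "compare"))
    return "frontier" if needs_reasoning else "small"

def route(question: str) -> str:
    """Send each request to the model that fits its difficulty and cost profile."""
    return ROUTES[classify(question)](question)

print(route("What are your support hours?"))   # routed to the small model
print(route("Explain my coverage options."))   # routed to the frontier model
```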
I'll be honest, Maryam, we had so many things we wanted to talk about that we did not get to, but that was fantastic. I feel like I learned a lot there — great ways to think about governance with AI, and how you actually scale these things in larger companies where you do have to worry about actually breaking things. [00:26:00] Not everyone can just move fast and break things; sometimes there are real, real bad consequences.

Calculated risk.

Yes. I love the idea of the — did you say Risk Atlas?

Yeah.

Yeah, I love that idea. So do your Risk Atlas, understand your risk portfolio there. And then if people want to learn more, or maybe hit you up with a few questions, is LinkedIn the best place to find you?

LinkedIn is the best place.

Awesome. Well, thank you so much for coming on. Maryam's a great resource — if you see her on LinkedIn, say hi, because at this point she's forgotten more about AI than many of us will ever learn. So thank you for coming on, and we'll have to have you back, because there's just so much more we wanted to talk about. Hopefully we can get to it.

Wonderful.

Awesome. Thank you so much. Have a good rest of your day. Bye-bye.