LaunchPod AI - Joel Polanco ===

Joel: If the internet's going down, can your application still function? That's the problem we're typically dealing with: network connectivity problems, and still needing to function and operate. Like a retailer, if their point of sale goes down and it's fully dependent on the cloud, they're in a world of hurt, even if it's only an hour or two.

Jeff: Welcome to LaunchPod AI, the show from LogRocket where top product and digital leaders share real-world ways they're using AI to move faster and work smarter. Today's episode is a little different. We're talking about how AI isn't just something you call from the cloud anymore. It's moving to the edge, and if you're not thinking about where your models run, you're already behind. We're joined by Joel Polanco, Senior Hardware Product Manager at Intel's Edge Computing Group. He's led dozens of enterprise edge AI deployments and knows exactly what it takes to run AI models locally and at scale.

In this episode, we discuss why AI at the edge will cause a whole new wave of disruption across multiple industries, how moving AI to the edge slashes costs and levels up customer experience, and real-world examples of edge AI deployments Joel has seen, from IT assistants to agents that restock store shelves from a single photo. So here's our episode with Joel Polanco from Intel.

Joel, welcome to the show, man. You've been a writer for us on the product blog for quite a while, but it's great to get you on LaunchPod. I think this could be one that people want to listen to, because everyone knows about AI; it's the talk of every product leader I sit down with. But there's a whole world developing here that I think people are not paying enough attention to, and it's an area you have a deep specialty in: AI proliferating out of the cloud, and even out of the on-prem models people are hearing about, to the edge, where it's actually running on devices out in the world. It's probably an area where AI is going to keep developing. So welcome to the show. Let's talk a little AI at the edge.

Joel: Yeah, thank you, Jeff, for having me on. I'm a big fan of LogRocket and you all, and I'm happy to be on the podcast today. A little bit about the edge; I'll start with a fun fact. A lot of folks don't know this, but the iPhone has had what's called an NPU, a neural processing unit, since the iPhone X. It's a co-processor that's been sitting on that board for a long time, so Apple has been thinking about this for a long time. A lot of tech companies have, and they've been slowly adding capabilities to their devices. What you're seeing now is the proliferation of what are called AI PCs, the idea that your laptop and your desktop, the ones we use every day, also have these neural processing units and other AI capabilities built in. So you can actually run AI models locally, and that's what edge AI and edge computing are all about.

Jeff: To dumb it down maybe a little more for someone like me: is this like the old days of Siri on the iPhone, where if you were in airplane mode, or somehow didn't have a good enough signal, it wouldn't work?
Now it works no matter what, because it actually runs on the phone. Same idea, but with AI models?

Joel: Exactly. In that instance, with the original instantiation of Siri, you were talking to Siri, but what was happening was an API call to the cloud over the network, and the response was then sent back down to you. Now, to your point, these models have gotten so good, with DeepSeek coming out of China and other open models that have just been released by OpenAI, that they've been shrunk down to fit on these devices, and they still perform pretty well. Maybe not as well as the big ones you see in the cloud, but yes, you can now run them locally.

Jeff: To jump right into it: what does this mean, and why is it important? You're doing implementations now, so where is this having an impact?

Joel: It's impacting a number of industries, and it's going to take time, because what happens is you start in the cloud first. Why do you do that? First off, you don't have to manage your infrastructure. You can rent your servers, basically rent the compute, and deploy quickly. But over time that gets expensive, especially as your usage grows. So companies begin to optimize their spend and bring that compute and those capabilities local, to a place where they can manage the cost much more efficiently. You're starting to see that now. I'm in the retail space, and we're seeing retailers experiment with conversational AI agents doing different tasks. One use case is an IT agent helping employees resolve printing issues or point-of-sale issues. Another is an agent that helps with inventory: you take a picture of a shelf, send it to the agent, and it comes back with, "Hey, your order is confirmed. Go restock this area of the shelf with this much product." Those models were originally deployed in the cloud, and the retailers are now looking at how to bring that capability on-prem to save cost on API calls. But then they have to manage it themselves, which is a trade-off.

Jeff: Is the thinking here that they're making this transition for cost? Is it accounting? Because if you're running the cloud models, you're running an OpEx cost. If you're hosting it and paying for devices more than anything else, that's probably a CapEx expense you can amortize. Is that how they're looking at it? Or is it more about speed? Or is it just a different cost structure, where you pay for the device once and run it over time, versus paying monthly fees that are beholden to whatever the API cost is?

Joel: I think what happens is you first deploy to the cloud, and then you validate the use case: hey, this use case is working, usage is increasing, I have product-market fit, so to speak, for this particular use case. My employees like it, they're using it.
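A minimal sketch of the local-first pattern Joel is describing, for readers who want to see it in code. It assumes the llama-cpp-python runtime and a small quantized model on disk; the model path and the cloud endpoint are illustrative placeholders, not details from the episode.

```python
# Local-first inference with a cloud fallback -- the "Siri that still
# works in airplane mode" pattern. Assumes the llama-cpp-python package
# and a small quantized model on disk; the model path and the endpoint
# below are placeholders.
import requests

try:
    from llama_cpp import Llama
    local_model = Llama(model_path="models/small-chat-q4.gguf", n_ctx=2048)
except Exception:
    local_model = None  # no local runtime or model available

def ask(prompt: str) -> str:
    # Prefer the on-device model: no network dependency, no per-call fee.
    if local_model is not None:
        out = local_model(prompt, max_tokens=128)
        return out["choices"][0]["text"]
    # Otherwise fall back to a hosted API (hypothetical endpoint).
    resp = requests.post(
        "https://api.example.com/v1/completions",
        json={"prompt": prompt, "max_tokens": 128},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

print(ask("The receipt printer at register 3 won't connect. What do I check?"))
```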
Joel: Once that usage starts to grow, you realize this cost is not going to be sustainable if you scale to thousands of locations. So now you look at implementing a different architecture, and you look at on-prem. The trade-off, like you said, is higher initial CapEx, because now you've got to put hardware in your store to run that capability. But there's a point where it transitions: the costs get too high for cloud, and you bring them down by implementing on-prem. You do get some lumpy spend in there because of the CapEx cost, though.

Jeff: And just real quick: if you're listening from the cloud or on your phone and you're liking this conversation so far and you want more of this kind of stuff, go into your Apple Podcasts or Spotify, whatever you're using, hit the subscribe button, and leave us a review, please. If you're on YouTube, give us a subscribe, follow us, and make sure you leave a nice comment. If you find this interesting and you're learning, tell a colleague, tell a friend, tell them it's great. Help us keep bringing great AI content from people like Joel. All right, Joel, I appreciate the patience on that. Let's jump back in. You've clearly seen a lot of this, so give us the ten-second explainer: why are you the guy to talk about this right now? What are you doing out in the field that's giving you these insights?

Joel: I work with some great teams over at Intel in the Edge Computing Group, and our reason for existence is identifying the problems our customers and their customers are having with deploying compute and processing for new applications, and with implementing new solutions, primarily in industries. Intel is largely known for serving the laptop, desktop, and server markets. Our Edge Computing Group is focused on those same processors, but applying them in locations like retail stores, manufacturing facilities, warehouses, and distribution centers, because there's a whole slew of industry-based use cases that require these sorts of capabilities.

Jeff: So you're out in the field helping understand what these end customers need and what use cases they're deploying these edge executions of AI models for.

Joel: Exactly, and that's how we look at it. It's maybe a bit more unique than some other places in AI: we're not only looking at the application of the models, we're also looking at the nuts and bolts of those models, how we can optimize them to run on our processors, and which ones are more conducive to our processors and which are not. Customers are going to want choice, and we're not always going to be a great fit for every solution. So we do our best to position ourselves with the use cases where we can offer a cost advantage, a latency advantage, or some kind of long-term advantage.

Jeff: Yeah, and I think this is an important thing that maybe a lot of people earlier in their AI journey haven't seen yet. It's easy to look at some little AI thing you built internally, some internal tool, and it's pretty cheap.
I mean, if you're analyzing text with OpenAI, you're going to have a hard time running the bill up past a couple dozen dollars a month. But as you said, it's one thing to do one store or one small set of users. If you're an enormous retail chain and you're trying to deploy something so that every staff member out in the stores has a device running these models, that's potentially tens of thousands of people, and a couple dozen dollars a month per person adds up really quickly when it's 10,000 or 20,000 people. So as people mature on their AI journey, this is probably a good way to think about how they bake AI into their products and how they deploy internal and external applications. Maybe it's not apples to apples, but how do you think about cost? How do you think about the unit cost of these models? How do you pick the right model and the right unit economics to deliver these things? I see a lot of parallels between what you're doing out there and how people should be thinking about their own product journey.

Joel: Absolutely, and if I could add on that for a sec: there are examples where you can have a much better customer experience if you account for the internet going down. If the internet goes down, can your application still function? If you're totally reliant on the cloud and your application goes down, and you don't have some sort of local, on-prem, or on-device processing, that impacts your users' experience with the application. And that's how we look at things, because that's the problem we're typically dealing with: network connectivity problems, and still needing to function and operate. Like a retailer: if their point of sale goes down and it's fully dependent on the cloud, they're in a world of hurt, even if it's only an hour or two. So we work with solution providers that have a lot of experience dealing with that sort of situation.

Jeff: Here's a probably less vital use case. There's an application I use on my phone that's all about using AI to identify plants. I have a place up in the mountains we go to with my family, and one of the things the kids like to do is ask, what's that? What's this? Using that app to identify the plants and make sure we have the right thing is fantastic. The problem is it's a bit remote and cell service is basically zero, so I can't go more than about 20 feet from the house before my internet dies, and the app is useless. I haven't paid for the premium version, because the biggest use case I have for it, I can't use it for. If they were running the model locally, and I know we're probably not there yet with the visual models, but if they could get to that point, I'd shell out a bunch of money for that thing.

Joel: Yeah, exactly. To your point, visual models do consume a lot more data, just because of the nature of pictures having large file sizes.
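Jeff's back-of-the-envelope math is worth making concrete. Here is a sketch of the cloud-versus-edge break-even Joel keeps describing; every number below is an invented assumption for illustration, not a figure from the episode.

```python
# Back-of-the-envelope cloud-vs-edge break-even. All numbers are
# illustrative assumptions.

def breakeven_months(users: int, cloud_per_user_month: float,
                     capex_per_site: float, sites: int,
                     edge_opex_month: float):
    """Months until edge CapEx is paid back by avoided cloud spend."""
    monthly_savings = users * cloud_per_user_month - edge_opex_month
    if monthly_savings <= 0:
        return None  # edge never pays back at these rates
    return (capex_per_site * sites) / monthly_savings

# Jeff's scenario: ~10,000 employees querying agents all day.
months = breakeven_months(
    users=10_000,
    cloud_per_user_month=25.0,   # "a couple dozen dollars a month"
    capex_per_site=4_000.0,      # small inference box per store
    sites=1_000,
    edge_opex_month=30_000.0,    # power, support, model updates
)
print(f"break-even after ~{months:.0f} months")  # ~18 months here
```

The shape is what matters: per-seat cloud fees grow linearly with usage, while edge hardware is a one-time, lumpy step, which is exactly the trade-off Joel describes.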
Joel: But those models are just getting better, and there's more and more memory and more and more compute being added to these devices. One trend a lot of people don't realize: of the top 10 companies in the S&P 500, seven are now chip designers or chip vendors. Apple is designing its own chips, Microsoft is designing its own, they all are. Why are they doing that? They see this as a strategic imperative for being successful long term.

Jeff: Well, the other thing to get into here is that the most common objection I've heard to companies adopting AI, whether in their own product or internally for their teams, has been security. We have a couple of skunkworks projects internally that we're looking at, where we kind of need the models to catch up to some things we want to do. But when you have sensitivity like that, running the model on your own infrastructure is a much lower hurdle to clear than sending your data to any of the big providers, OpenAI or Anthropic. You can sign any kind of NDA, any piece of paper you want, but in the end it goes back to that saying: a secret is only a secret until you tell someone. As soon as you add any external dependencies, your risk profile goes way, way up. So do you see companies bringing this in-house for that reason? Maybe less in retail, but across other disciplines?

Joel: There are two reasons; well, I'll give three: security, then privacy, and the third one is your unique data. That ties to security, but being able to confidently leverage your unique dataset to enhance those foundational models is huge. It's a huge competitive advantage for companies like your own. You have your own unique dataset you can leverage, and if these open models are available, you can augment them with your own data and improve on their outputs. The other thing happening here, with OpenAI recently releasing their open models, is that there are two sides to it. One is: why would you do that when you're trying to drive monetization of your proprietary models, which are very good? But you have to think part of the reason they opened up some of those models is to drive more usage, to raise all the boats. They know they'll essentially have a two-tier model: your freemium, open models, and on top of that, paid versions of the God models, the big ones, which they're going to continue to improve and which people will pay for.

Jeff: Also, over time, what I've seen is that with any kind of open source, self-hosted thing like that, you're going to get a lot of noise around it too.
But there are only so many people who are really going to run production workloads and host all their own infrastructure for it. It's a lot more than just calling an API at that point; you need a lot more capability. Some people will do it, and the people who do it probably need it. But I'd wager the cannibalization of cloud spend from that is a little lower than people think, because it's just hard to run those things in a high-fidelity way. You need an operations team that knows what it's doing to run highly performant applications and infrastructure, on top of running your own application.

Joel: Exactly, and I've seen this before with edge. The prior version of edge was IoT, the Internet of Things, and we had the same sorts of challenges. For certain workloads, people would send their data to the cloud, have it processed, and get the results back. The idea of IoT was that you had these sensors, sound, video, or RFID for instance, and the data would get sent up. Take a use case like inventory management: the data gets sent to the cloud, constantly processed there, and brought back. But there were a number of instances where you needed to keep the data local and have it processed very quickly, so you could actually act on it. One of those use cases was loss prevention. If you've invested in RFID tags on all of your inventory, then when someone is stealing something, you need to know immediately, the moment they go out the door, and that alert has to come back right away. What happened over time was that the initial implementations were mobile devices scanning and sending the data to the cloud. Later on, use cases got added where they'd put gates at the door, the tags would be checked at the door, and the alert would happen locally. So depending on your strategy and the problems you're trying to solve, those are the drivers of how you architect your edge AI implementation. And you're going to look at the ROI: what am I getting for this use case, and why should I spend this much CapEx? In a lot of cases it makes sense, and in some cases it just doesn't.

Jeff: Yeah. Even going back to retail on your end, and this is totally hypothesizing, in a lot of cases you're talking to clerks, to people helping make recommendations. At least I am; often I'm asking, hey, what goes with this? What are some ideas here? In clothing retail, say, or at Home Depot: hey, how do I do X, Y, Z? There are a ton of retail options here, and that conversation, the human layer, goes back and forth very naturally when it's someone like you and me. I've used the voice layer of OpenAI a lot and it's great. I love talking to it, and it's handy when you can't type, but there's a lag that's really noticeable. It's fine in that case, because I know what I'm doing.
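Joel's RFID loss-prevention story is the canonical edge pattern: act locally the moment an event happens, then sync to the cloud when you can. A minimal sketch, with invented tag IDs and a stubbed-out sync call:

```python
# The edge pattern behind Joel's RFID example: act locally the instant a
# tagged item crosses the exit, queue the event, and sync to the cloud
# whenever the link is up. Tag IDs and the "cloud" call are invented.
import time
from collections import deque

paid_tags = {"TAG-001", "TAG-002"}   # tags cleared at checkout
pending = deque()                    # store-and-forward queue

def on_gate_read(tag_id: str) -> None:
    if tag_id not in paid_tags:
        # The alert fires locally -- no round trip, no network needed.
        print(f"ALERT: unpaid item {tag_id} at the exit")
    pending.append({"tag": tag_id, "ts": time.time()})

def sync_to_cloud(network_up: bool) -> None:
    # Batch-upload the backlog once connectivity returns.
    while network_up and pending:
        event = pending.popleft()
        print(f"synced {event['tag']} to cloud analytics")

on_gate_read("TAG-001")   # paid -> logged quietly
on_gate_read("TAG-999")   # unpaid -> immediate local alert
sync_to_cloud(network_up=True)
```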
Jeff: And I know I'm not talking to a human. But in a retail environment, I could imagine something similar to e-commerce, where every millisecond of delay in loading a page degrades conversion. Do you see people interacting with an agent as less human-like, maybe not purchasing or taking recommendations as much, if there's some kind of agent on the floor helping them and that delay makes it seem less human and less relatable? Versus making it more relatable because the back-and-forth doesn't have to call out to the cloud and come back; it's just locally run.

Joel: Exactly. And you're not totally dependent on the network. That latency is huge for both the customer service use case and for AI-enabled checkout. We've seen self-checkout grow over time, and a lot of that is effectively the shopper scanning their own things on the way out. But there are some examples of AI-enabled checkout where the full process is handled by the point of sale, and that's largely happening in smaller-footprint places like convenience stores. Here in Phoenix, where I am, one of the local Circle K chains has a full deployment of AI-enabled checkout through Mashgin. You just put your stuff down, the camera sees everything you put on the platform, and it takes those pictures and translates them into a barcode. That data gets sent and put on the screen for you. And going back to your point about human latency: the big problem convenience stores have is that when the line gets too long, they see a lot more drop-offs than other retail environments. Someone went into a convenience store because it is literally a convenience store; if I have to wait even more than 30 seconds in line, I might consider just leaving, because I've got to go. So these new checkouts have been really helpful. What ends up happening is you still have the normal checkout next to them, and these other checkouts are available for people to get in and out really quickly.

Jeff: So you can, not infinitely scale checkout, but scale up and down as you need without the human cost having to step-function. That's helpful. We have self-checkout now, but you've got to scan things one by one, and there's always some kind of error: the barcode won't read, or it read but threw an error, and now you have to wait 10 minutes for a clerk to come over. I'm just impatient, unfortunately.

Joel: Like we all are nowadays, right? Our expectations just continue to go up and up.

Jeff: The Google effect is real, man. It affects real life now, not just online. I guess we've got to keep up. I do have a question to make sure I understand this at-the-edge piece. Is it binary? Either you're running it on your local handheld device or in the cloud? Or are there other pieces? How should people think about this from an actual implementation standpoint?

Joel: It's definitely hybrid. There are some cases where everything ends up running at the edge, because it has to. But a lot of the cases are a mix-and-match approach.
Joel: I'll give you one example from retail. We have traditional point of sale, your everyday self-checkout where the user scans. The result of self-checkout growing over time has basically been more and more loss, or shrinkage. That's a trade-off the retailers make: hey, I'm getting efficiencies on labor savings and customer satisfaction, but as a result, some small percentage of the shoppers take advantage of it. They're not really shopping; they're stealing. So the loss prevention use cases come in, where cameras are added and some processing happens locally, on an edge server or an edge computer. It doesn't necessarily happen on the point of sale itself, but it's happening locally. But simultaneously, that point of sale may be processing both locally and in the cloud, and they do that for redundancy, to make sure that if the network goes down, processing can continue. If the network goes down, though, they can't actually process the credit card transactions instantly, so what they end up doing is batching them, and then they send them all off once the network comes back up, in five minutes or whatever. The same thing happens with AI: depending on what you're doing, you may be better off running your model in the cloud. Say, for instance, you're analyzing all the shopping baskets that got processed in the last three hours. That's a larger amount of data, and it's going to be processed in the cloud. It might come back with a coupon recommendation, an inventory stocking algorithm, or some sort of smart detection of what's going on in that big batch of data you just sent up, which then goes back to the store manager: here's how you can optimize this process.

Jeff: Makes sense. So we've gone through the theory; let's get to the fun part. You're in the field doing this, talking to big companies, helping them lay the groundwork and understand what they need to do here. Maybe you can't use names in all of these, and that's fine, but what are some of the coolest things you've seen people doing? What are some really neat implementations you've either heard about, maybe via team members, or been a part of?

Joel: I really like the growth in conversational AI right now. One thing we worked on: a large retailer in North America implemented a couple of conversational AI agents for their employees. One of the agents was helping with IT issues; the other one was helping with inventory and stocking. They originally implemented them in the cloud. The employees all have a mobile device they use in the store, basically a phone or a large phone. They'd query it, the query would go up to the cloud for the models to run, and they'd get a result back. Usage was getting to a point where they said, hey, this is going to become very expensive pretty quick.
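The "mix and match" split Joel describes boils down to a routing decision. A toy sketch; the task names and rules are illustrative assumptions, not a retailer's actual policy:

```python
# Hybrid routing: latency-critical work stays on the edge box; heavy
# batch analytics goes to the cloud; anything cloud-bound is queued when
# the link is down. Task names and rules are invented for illustration.
LATENCY_CRITICAL = {"checkout_vision", "loss_prevention"}

def route(task: str, network_up: bool) -> str:
    if task in LATENCY_CRITICAL:
        return "edge"        # must answer in milliseconds, online or not
    if not network_up:
        return "edge-queue"  # store and forward, like the card batches
    return "cloud"           # e.g. three hours of basket data to analyze

print(route("checkout_vision", network_up=False))  # -> edge
print(route("basket_analysis", network_up=True))   # -> cloud
print(route("basket_analysis", network_up=False))  # -> edge-queue
```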
Joel: So they called us to work with them on their edge architecture: "Here's everything we have implemented in the store, here's all the compute we're using. How can we better use it? Is there a way to leverage existing compute to load these models onto? Can these models actually run on something we already bought and paid for, or do we have to go buy new hardware?" Typically the answer is that you do have to buy new hardware, but a lot of these models have shrunk to the point where you don't need to go out and buy massive servers. That's just not the case anymore.

Jeff: Yeah, the private infrastructure cost is not that huge, and you're going to save a giant amount on the monthly bill. So the IT assistant one makes sense. But on the inventory side, how are companies using this for inventory management in a chat function?

Joel: It's a natural interface. A lot of people are talking about this being our interface for a lot of things in the future, right? Even email: is this the moment email gets reimagined? Same case here. Historically you may have had an inventory application that required you to fill out text boxes, fill in fields, do some things, and hit send. It would do something in the cloud and come back with some information: okay, just ordered seven more bottles of shampoo; now go to the back and pick them up. And it sends a restocking order. Well, imagine that interface changing to a chat agent, like it's just another person. You go up to the aisle and take a picture. It sees a blank section of the aisle and knows this shampoo is out. You send the picture, and immediately it takes the context, the situational awareness, and goes and fills out all those fields and text boxes and hits send for you. All you really did was send a picture.

Jeff: So basically, if you're a large grocery company and you have people out on the floor whose job is partly to make sure the shelves have all the SKUs, rather than having to look, notate, and manually pick apart every single SKU to see what needs to be brought back in, they can just snap a quick picture of that shelf, like you said. Someone, maybe a picker in the back, can assemble the pallet, and you can just bring it out, greatly speeding things up and cutting the person-cost of that kind of work. Am I understanding it right?

Joel: Yeah, exactly. And you can optimize your operations and your workflow around that particular inventory, restocking, and ordering use case. Maybe that workflow was three separate processes before; now you have a chance to re-architect the business and those workflows. Maybe you condense them into one, or streamline it from ten steps down to five. And that's all because you have this new interface to work with, which is getting everyone's creativity going as well as delivering a whole new set of capabilities, to the point where one day you potentially even create an AI agent that can do some of these things that were manual before.
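The shelf-photo workflow Joel just walked through compresses into a few lines once a vision model returns structured output. A sketch in which detect_missing_skus() and the order handoff are hypothetical stand-ins, not a real deployment:

```python
# Shelf-photo restocking flow: one photo in, a filled-out order out.
# detect_missing_skus() stands in for whatever vision model a retailer
# runs on-prem; the order "API" is likewise a hypothetical stub.

def detect_missing_skus(photo: bytes) -> list[dict]:
    # Placeholder for an on-prem vision model that maps shelf gaps
    # to SKUs and estimated fill quantities.
    return [{"sku": "SHAMPOO-400ML", "aisle": "7B", "qty": 7}]

def restock_from_photo(photo: bytes) -> None:
    for item in detect_missing_skus(photo):
        # The agent fills in the fields a clerk used to type by hand.
        order = {"sku": item["sku"], "quantity": item["qty"],
                 "location": item["aisle"]}
        print(f"order confirmed: {order}")  # hand off to the order system

restock_from_photo(b"...jpeg bytes from the handheld...")
```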
Jeff: Any models where maybe someone thought one thing, and as you were going through it, you realized that's not really a great use case for this?

Joel: Yeah, a lot of those. One thing that didn't work out: one of the retailers found it was actually more efficient to have the employees who were already doing the stocking do the inventory scanning with their handheld mobile devices as they restocked. The idea had been this ongoing process: we can run the robot down the aisles every couple of hours. But the reality was they already had so many associates going around; all you needed to do was have them take pictures as they saw things. You just had to change a business process. And I've seen other cases where companies are working on devices employees can actually wear, cameras, kind of like the ones police officers wear now, and those devices can also do inferencing and run other models. So those are some experimental things that are happening.

Jeff: Interesting. And does agentic AI get into this at all? Agents have been a big topic among people in the SaaS world for a while now, and some of those are really coming to fruition. Is this something we're going to see at the edge anytime soon? Or is it too compute-intensive, and we're just farther away from that, so it hasn't been thought about much yet?

Joel: Cisco actually, and I posted this on LinkedIn this morning, showed in their latest investor presentation what they expect to happen to the network as agentic AI starts growing, and you just see this huge spike in applications running. Why is that? When I think about agentic AI, an agent is essentially a software application, one you've developed. It could be a very small one, doing a specific task or solving a specific problem, and the AI is helping identify when that agent should run under certain conditions. You're training that over time. So you can imagine the number of little tasks out there that could be automated if you had the right interface, the right tools to put in the hands of users to build their own, in a no-code kind of way. What we see is that this space is definitely going to grow. And at the edge, you're going to see some of that proliferate, but it's going to happen in the cloud first, and then it'll proliferate down. Some of these cases I'm talking about will start happening: you'll have more automated ways of detecting inventory shortages or other things happening on-prem in a retail store, things you can inference from the cameras and other sensors you have, and those applications can run automatically. But it's going to take time, because there's going to be a human in the loop for some time, until you can validate that the use case is really working the way you want. I believe this is going to result in a huge compute refresh. There's going to be a big network, memory, and compute refresh, both in the cloud and at the edge, as a result. And it's not even on the AI side; it's on your existing compute.
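Joel's framing of an agent as a small application plus a learned trigger fits in a toy dispatcher. The condition and the handler below are invented for illustration:

```python
# An agent as "a small application plus a trigger that decides when it
# should run." Condition and handler names are invented for the sketch.

def shelf_gap(event: dict) -> bool:
    return event.get("type") == "shelf_gap"

def reorder(event: dict) -> None:
    # A human-in-the-loop approval could sit here while the use case
    # is still being validated, as Joel describes.
    print(f"reordering {event['sku']}")

AGENTS = [(shelf_gap, reorder)]  # (should_run, action) pairs

def dispatch(event: dict) -> None:
    for should_run, action in AGENTS:
        if should_run(event):
            action(event)

dispatch({"type": "shelf_gap", "sku": "SHAMPOO-400ML"})
```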
Joel: There are just going to be more applications running, so you're going to need more horsepower, basically.

Jeff: One last thing I wanted to get to. We've talked a lot about AI implementation and how people should be thinking about the future of running these models more locally. That said, one thing our listeners are always interested in is how you, as a product professional, are using AI. One thing we talked about before the show is this cool use case where you're using chatbots to simulate and train for customer conversations before you go into them. Can you give us the quick-and-dirty version of what that looks like and how you're using it? Because that sounded really cool.

Joel: Yeah. Heading into some of these conversations with retailers or with solution providers, I always want to build empathy: where are they going to be coming from? I do this in two ways. One is using AI as a replacement for SEO research. I'd always Google things, right? See what shows up. If I search "cloud versus edge computing for this particular use case, what are the trade-offs," someone may have written a blog on it that pops up, and I'd see what this customer would have read ahead of time. Now I'm doing that with AI, trying to have a conversation as if the AI were my customer or user. What sorts of things are they going to see when they ask: "I'm a software architect, an edge architect, looking at implementing this conversational AI bot. Should I implement it in the cloud or at the edge? Give me all the pros and cons, then give me your recommendation"? I have to assume people are doing this, so when I go in, I already know what the AI is actually recommending to them. Then two: "Based on your analysis here, give me all the potential objections this particular customer might have." It'll go scan their public filings, their articles, their marketing, the trade shows they've been at, and it highlights key points and gives you some insight into the customer's lens. So when you go in to speak with them, you have all this context, and hopefully you get to a point where you can help them solve their problem a little bit better from their perspective.

Jeff: And if I remember right, you said you're using some of that to pass on to your team as well, to help enable them and get them ready for those conversations and interactions. Is that accurate?

Joel: Exactly. I try to hand it over to them in easy-to-consume formats, because a lot of them don't have time. A lot of them are on the sales team, moving from customer to customer, and they only have so much context, whereas I have deeper expertise in their particular area. I try to get it summarized for them so they have a package to bring in ahead of their conversation.

Jeff: Love it: real-time training that's based in real-life experience. So, Joel, thank you so much for coming on.
Thank you for talking about this area of AI, which I think is inevitably coming to more and more industries. Maybe a lot of people haven't thought about it yet, but it's time to start noodling on it. If people want to reach out, is LinkedIn the best place, or is there a better spot?

Joel: LinkedIn is by far the best place to reach me. Don't hesitate to drop me a DM. I'm happy to connect and have a conversation about this, or about product management or customer discovery. I love all those topics.

Jeff: Awesome. And if you liked Joel's insights here, like I said, he also writes for the LogRocket blog, so check him out at blog.logrocket.com as well as stories.logrocket.com. If you like the podcast and you want more of this kind of stuff, the biggest thing you can do is tell a colleague: pass on an episode, let them know you learned a lot, that you enjoyed it, and that they should check it out. Thank you for joining us, Joel. Huge thanks to you, man. I learned a ton here, and I think everyone who listens will as well. Have a good rest of your day.

Joel: Thank you for having me.