Olga Stroilova: Manufacturers need incredibly high quality. They need reliability, they need trust, and they need a repeatable system that's gonna be at two, three, four nines all the way through. So when it comes to trust, they can't hand over decisions to black boxes.

Narrator: You are listening to Augmented Ops, where manufacturing meets innovation. We highlight the transformative ideas and technologies shaping the front lines of operations, helping you stay ahead of the curve in the rapidly evolving world of industrial tech.

Mason Glidden: Hey everyone, and welcome to Augmented Ops. I'm Mason Glidden, the Chief Product and Engineering Officer here at Tulip. We're coming fresh off of Operations Calling last week, our annual user conference. This year we had, I don't know, 600 or so people here at HQ coming, seeing demos, hearing about new roadmap features, and listening to other customers and their stories of transformation. It was an awesome time. And AI was really at the forefront this year, with a lot of the launches and things we've been building demoed for the first time. Actually, we had three or four new launches that we showed off last week. With that, I'm really excited to bring in two people from our product organization here today, Pete and Olga, who have been working really heavily on our new roadmap and some of these features we've been building. Welcome, and do you mind introducing yourselves?

Pete Hartnett: Yeah, sure. My name is Pete Hartnett, a product manager here at Tulip responsible for a lot of our AI features. Excited for this conversation today.

Olga Stroilova: Hi everyone. I'm Olga Stroilova. I'm also a PM here at Tulip, working on the build experience. Super excited to be with you. Thank you.

Mason Glidden: Welcome. Looking forward to the conversation. We're obviously seeing a lot of noise and hype about AI and manufacturing right now, everything from what agents are gonna do in our factories and how agents are changing the way we think about software, to lights-out factories and robots, and on the other side, the "AI can't replace people" contingent. Today we're gonna explore what's real, where some of this might be overblown, and where we think some of the actual opportunities lie, especially for manufacturers trying to put AI into their operations responsibly. It feels like oftentimes we have these extremes, right? Two separate camps: on one side the AI zealots, the "AI can do everything, tomorrow it's available to do X, Y, Z," and on the other side, "totally useless, can't do anything, dead end," all of that. So let's try to find that middle ground. Olga, what specific problems do you think we should be focusing on right now with AI?

Olga Stroilova: That's a great question. We are in the industrial and manufacturing space, and AI isn't new to industrial. We've used AI in industry for a long time. We've used machine learning. We've used forecasting, defect detection, understanding when and how we need to restock, or identifying use cases that we could forecast to be great across multiple sites. What's actually changing now is the context, and the accessibility and availability of that AI. So it would be great to focus on AI for data, making those forecasting insights even more accessible, at a chat or a click, and AI for content. We've already made some steps there with AI Composer, helping you pre-build work instruction apps, and we can definitely do a lot more there as well.
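To make the "chat with tables" idea Olga describes a bit more concrete, here is a minimal sketch of grounding a plain-language question in an operations table before handing it to a chat model. Everything in it (`ask_table`, the `call_llm` placeholder, the `defects.csv` file) is a hypothetical illustration under stated assumptions, not Tulip's actual implementation.

```python
# A minimal sketch of "chat with your tables": bundle the table's schema
# and a small sample of rows into a prompt, then ask a chat model the
# question in plain language. All names here are hypothetical.
import csv
import io

def call_llm(prompt: str) -> str:
    # Placeholder: wire up whichever chat-model provider you use.
    raise NotImplementedError("connect a chat-model API here")

def ask_table(table_csv: str, question: str) -> str:
    rows = list(csv.DictReader(io.StringIO(table_csv)))
    schema = ", ".join(rows[0].keys()) if rows else "(empty table)"
    sample = rows[:20]  # keep the prompt small; real systems page or aggregate
    prompt = (
        f"You are analyzing an operations table with columns: {schema}.\n"
        f"Sample rows: {sample}\n"
        f"Question: {question}\n"
        "Answer only from the data; say so if the sample is insufficient."
    )
    return call_llm(prompt)

# Example usage (hypothetical file):
# answer = ask_table(open("defects.csv").read(),
#                    "Which station logged the most defects last week?")
```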
Mason Glidden: So one point you have there is, like, how do you contextualize AI, put it in the right place at the right time. And two, you're really talking about almost an accessibility of using these models, a new way of thinking about it.

Olga Stroilova: Totally, totally. Previously, you might have had to be a data scientist with an AI/ML PhD. Yeah. Now you can go into chat with tables and just ask a few questions and get some awesome answers.

Mason Glidden: There's a lot less math involved these days.

Olga Stroilova: That is always great.

Mason Glidden: Pete, what do you think? Where's AI for operations headed next?

Pete Hartnett: It feels like we're at this kind of interesting precipice. Like Olga said, we've had these tools that can unlock these insights, but there's been a real gap in the ability to respond to them, right? It's pretty easy to pick up every machine sensor in your factory and start streaming out all of these insights: bearings that need to be replaced, a suggestion to reorder something. But at best, that ends up being an email to your scheduler and they need to decide what to do, or an email to your maintenance team. And so with the advent of this agentic AI, AI that can start taking action, it feels like we have this interesting moment to start closing that loop, right? Take these insights and start proactively acting on them.

Mason Glidden: It's interesting, it feels like some of what we're seeing with agents now is the ability to automate some of the low-hanging fruit that previously only humans could do, from a knowledge-work side. I think we've heard from a lot of our customers, like, "Hey, I have this report I have to build every day. I have to go look at all these places and try to find anything that might be weird in this data." And they're starting to think of how they can use AI as the first pass on some of those tools.

Olga Stroilova: And that first pass doesn't replace the human expert. Exactly. It just does some of the busy work, and then they can do the thoughtful problem-solving part of it.

Mason Glidden: Keep the human in the loop, keep the human doing the thing they're best at, but remove the parts that are on the boring, rote, more repeatable end of that spectrum.

Olga Stroilova: Because that boring side still takes energy, and then you can't be creative.

Mason Glidden: You can't do the thing that gives you unique leverage.

Pete Hartnett: Totally. You hit on leverage here, which I think is actually the key thing, right? The bottlenecks in manufacturing so often are that last mile of taking the action. Can you put an agent in a place where it's handling the things that are lower risk or lower complexity, and only pull the human in when they're needed? So that scheduler keeps their job, but they can do ten times more optimization of how their machines are running, for example.

Mason Glidden: As we've seen people start to deploy tools like this into production, it oftentimes feels like these prototypes or pilots stall out. There was that MIT study in the news recently saying 95% of gen AI pilots don't prove ROI. Olga, do you have any thoughts on that? Why do you think so many sort of fall flat?

Olga Stroilova: That's a really great question. Some of the challenges with AI, both with gen AI and other kinds of AI, are around data governance, especially in some of the sensitive industries.
Also, adoption and the culture around adoption. Data is really key here: AI is only gonna be as good as the data you provide to it. Otherwise it could jump to conclusions, or fill in the blanks, or not really be able to interpret. Previously, with ML, folks spent months, even years, cleaning up their data to get great forecasting. Now we don't have to spend quite as much time, but we do have to have the data there, clean enough for a gen AI agent to be able to interpret. Governance is also super interesting with gen AI. Similar to other kinds of AI, you really need to have that double-click, the traceability, to be able to understand, "Hey, where did this forecast or this answer come from?" And AI agents are really getting good now at showing their work: they can show a plan, they can show their sources. That's a huge step for regulated industries to feel safe and comfortable using gen AI. And the last part is cultural. Whenever something new happens, it takes us all a little bit of time to get used to it, to feel excited, and to just use it as a part of our day-to-day. I think that's also gonna shape and evolve.

Mason Glidden: Patterns adapt, and people change a little bit. Yeah.

Olga Stroilova: Yeah. We're all adapting flexibly to the new tools that we have access to.

Pete Hartnett: I think what we've seen is this natural response to a new technology of, "Ah, we can make a pilot just to test that technology." And this sort of bolting on of a new technology without a clear problem that it's intended to solve is destined for failure, right? So I feel like we're finally getting to a level of maturity where it's clear where this technology can solve unique problems and drive unique value. The successful pilots that we're seeing now sit at the intersection of this new technology and the problems it can solve, as opposed to just adding it as a tech pilot or a sort of demo POC or something like that.

Mason Glidden: Yeah, that makes a lot of sense. It's like another tool in the lean toolbox, right? You're not gonna go and say, "Hey, we're scrapping everything and restarting with a clean slate." You're just continuously asking: what's the next improvement? How do we continuously get better? And now suddenly there's a new class of things that previously were very hard to improve on, and all of a sudden you have a new tool that enables you to start iterating and improving there.

Olga Stroilova: And it doesn't have to be scary. You can start small. Yeah, you can get comfortable with that tool, do a little pilot, as Pete mentioned, and then really understand how this tool is best for your specific industry. Use your expert experience, plus now your comfort with the tool, to know how you'd use it best.

Mason Glidden: Yeah, exactly. I wanna go back to something we talked about earlier, where we touched on human-in-the-loop and the importance of designing good processes and systems that keep the human in charge at the right point in time. We launched a bunch of new features at Operations Calling last week in this area, with agents, with Video Composer, and with Ops Modo, that all have this at their core, right? How do we keep coming back to the human at the right point in time for approval and verification, and keep the human doing that higher-leverage thing? Pete, can you explain where our customers are starting to think about putting these tools and agentic AI into practice?
Pete Hartnett: Yeah, I think you hit on some of the places where people are seeing really early value, right? These manual tasks: think of the standup or handoff report that you generate every morning, where right now you go to ten different dashboards and collect your numbers in an Excel sheet. Instead, you build those workflows into Tulip agents that go fetch all the data you need and then build your standard report. We also have customers that are starting to swing a little bigger, right? How can they empower their engineering teams, or their validation and quality teams, in the process of actually building Tulip solutions? We had this builders workshop at Operations Calling where we gave them agents for five or six hours, and they built lots of pretty incredible things to empower, like I said, both their engineering teams and actually the shop floor. A notable example that comes to mind is one of our customers who was building with agents. They built an agent that evaluates many different applications, identifies common patterns in the manufacturing process across those applications, and then ultimately suggests improvements to how they're splitting their demand across different lines, because it identified, "Ah, these three SKUs follow a very similar process, so we can actually build all of those on the same line." So really going end to end from an understanding of applications into tangible production output.

Mason Glidden: Yeah, it was a lot of fun at that agent builder challenge last week. One of our customers there called it the largest shift he's seen in the last five years in how he thinks about his operations, his day-to-day work, and what he can do with the tool, which is pretty cool. I can't wait to see what they start building there.

Olga Stroilova: Just from the sidelines, the energy and the excitement in that room was palpable. I wasn't a part of the challenge, but anytime I walked by, it was clear everyone had not seen something like that before. And in the readout after, they were like, "When can I have this?"

Mason Glidden: I think the common thread through a lot of it, going back to the previous point, was leverage, right?

Olga Stroilova: Yeah.

Mason Glidden: We brought in a lot of real experts, people who were core to a lot of their companies' efforts and transformations. And they saw this as a tool to speed up, to make more of their time, to be able to do more and have a larger impact, because they were often the bottlenecks in transformation.

Olga Stroilova: They had concrete problems that needed to be solved that they were spending too much of their time on, and they're like, "Wow, this could change my life. It could change my team's life."

Mason Glidden: Now, I think it'll be interesting seeing this start to get into more regulated industries. I dunno, Olga, if you have any thoughts there?

Olga Stroilova: That is a great question. This circles back to the governance question. Especially in manufacturing, manufacturers need incredibly high quality. They need reliability, they need trust, and they need a repeatable system that's gonna be at two, three, four nines all the way through. So when it comes to trust, they can't hand over decisions to black boxes. Fortunately, with Tulip, every AI action does have that kind of trust built into it. It is auditable, it is explainable, and it is aligned with compliance requirements.
We've spent extra time with our compliance team digging deep into what regulated industries need, making sure that our products are designed so manufacturers can safely use them in their day-to-day from day zero. We call this AI built for operations. It's context-aware, it is safe, but it is also purpose-driven. We do hope to evolve this over time, based on feedback, but also based on our understanding of these regulations as they evolve. And we know that regulations are evolving.

Mason Glidden: Yeah, absolutely. I think we should think of these tools almost as a human, and that's sometimes the tricky part, 'cause they have weird quirks and they're clearly not human. But they also make mistakes like a human. Yeah. And we've figured out how to build validation processes and regulation that account for human mistakes. As long as we continue doing that, and we don't treat this as, "Ah, it's the computer, it's never wrong." Actually, it's going to be wrong, and you need to treat it like a system that will be wrong and place it in a validated context that will help catch those mistakes.

Olga Stroilova: And one way to think about it is, if you outsource a data analysis to make a key decision, you might pay someone 500K, a million, to get that data analysis back to you, but you're gonna reread it.

Mason Glidden: Yeah, exactly.

Olga Stroilova: You're gonna use your own expert opinion.

Mason Glidden: And check the data they're looking at, understand their outcome, and maybe test and iterate. I think we'll see a lot of that same thing as we think about app creation and agent workflows: where does the human come back to help with that validation and double-check everything, just like they would in a traditionally created app or process?

Olga Stroilova: Totally.

Mason Glidden: Let's skip ahead a little bit into the future. As we look ahead, I'm curious what AI for operations actually means to each of you.

Pete Hartnett: I'll start us off. It's interesting: we talk about generative AI as this thing that's much like human intelligence, right? It's a generalist, but it's not great at these kinds of deterministic cases, and we often talk about that as a downside. I think in a lot of ways it's actually an upside, right? There's a new domain of problems that we couldn't solve before, and so it presents this kind of unique opportunity to have a toolbox of AI tools that actually scales to all the different challenges you might see in your business. So it's no longer just a pilot or just AI for one point problem, but rather AI that can be transformational to your business. I think there's certainly a future state where manufacturing systems actually start getting smarter, right? Not the idea that they do this autonomously; rather, your engineers are working collaboratively with AI to build better systems and better processes. Importantly, I think the core goal here is that AI is augmenting everyone throughout the manufacturing system, whether they're engineers or those directly out on the shop floor.

Olga Stroilova: That's a great point, Pete. Augmenting and leverage. I loved the word leverage earlier, and also the concept of human-centric. We're not replacing humans here, we're supporting them. It's like one of those power suits in a superhero movie, and now you can fly with it.

Mason Glidden: Yeah. The Iron Man suit of agents.

Olga Stroilova: Exactly.
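As a concrete illustration of the human-in-the-loop pattern the conversation keeps returning to, here is a minimal sketch of an approval gate: the agent proposes actions with a rationale, routine ones are applied automatically, and anything above a risk threshold waits for a person to approve. The class and field names are assumptions made for illustration, not Tulip's API.

```python
# A minimal sketch of a human-in-the-loop approval gate: low-risk agent
# actions are auto-applied, higher-risk ones are queued for a human, and
# everything lands in an audit log so the agent's work stays traceable.
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    description: str
    risk: float       # 0.0 (routine) .. 1.0 (high impact)
    rationale: str    # the agent "shows its work" for auditability

@dataclass
class ActionGate:
    risk_threshold: float = 0.3
    audit_log: list = field(default_factory=list)

    def submit(self, action: ProposedAction) -> str:
        if action.risk < self.risk_threshold:
            self.audit_log.append(("auto-applied", action))
            return "applied"
        # High-risk actions wait for a human decision.
        self.audit_log.append(("pending-approval", action))
        return "needs_human_approval"

gate = ActionGate()
print(gate.submit(ProposedAction("Regenerate shift report", 0.05, "routine rollup")))
print(gate.submit(ProposedAction("Move SKU-42 to line 3", 0.7, "similar routing to SKU-17")))
```

The design choice worth noting is that the gate never silently drops a proposal: every action, applied or pending, is logged with its rationale, which is the traceability Olga calls out as essential for regulated industries.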
Olga Stroilova: And it did feel like folks in the app building challenge felt that way. They were like, "Wow, I feel more powerful today." And we could bring that to any area folks are building in within Tulip today: using AI Composer to create work instructions from a PDF, taking that a step further to recording a short video on your phone of a very complex part of the manufacturing process and building an app from that in minutes. This could go on to making more complex apps, building logic with AI, then validating that logic with AI. This is super powerful. It's gonna save a ton of time, and it's gonna help those folks focus on other problem-solving: "Hey, which use cases are more efficient and effective? How can I scale those use cases? How can I get the most out of Tulip?"

Mason Glidden: Yeah, I think the Iron Man suit of AI is a pretty good analogy. I definitely felt that way the first time I opened up Claude Code and used it to write some software. I was like, "Holy cow, this is so cool. I can fly so quickly right now, and maybe I'm gonna crash into a building or two along the way." But when that exploded throughout our own engineering org and we saw everybody having that moment of, "Wow, I can do things that I couldn't do before," I think the question for all of us became, how do we enable that for our users? Exactly. How do we give them the governance, the context, the tools where they can have that moment, that light bulb moment of, "Oh, I can suddenly do a lot of things that I couldn't do before"?

Olga Stroilova: The safety on the suit, so that you could distribute it to all of your center of excellence and app builders. Yeah, exactly. Yeah.

Mason Glidden: Yeah. I remember one of our engineers saying, "Oh, I no longer have any estimate for how long things take." Some things are way faster, some things are still the same, and it's a new skill set, a tool that you have to learn and figure out how to deploy into these different scenarios.

Olga Stroilova: And that could be okay.

Mason Glidden: Yeah, exactly.

Olga Stroilova: Change is. And we're evolving with that change. Yeah.

Mason Glidden: Pete, I know you've thought a lot about how we make customizable AI for our customers and the different challenges different customers end up encountering. Can you speak a little bit about that?

Pete Hartnett: Yeah, it's, I don't know, foundational to the Tulip story that all of our customers have unique needs and unique challenges, and Tulip is built to support the flexibility to build exactly what you need for your business. I think the story is no different with our AI offering, right? Take something like a shift handoff report as a good example. The metrics I care about are a little bit different than the metrics you care about, right? The data might be in a slightly different place. Maybe your data's in an ERP, not in a Tulip table. Maybe it's in a completion or a Record History widget. So giving people the tools to take a good starting point, something like the Tulip Library, download an agent, but then configure it and fine-tune it for your specific needs, is in a lot of ways the same story that we have for the Tulip App Editor or Tulip Tables or any of the other components within Tulip.

Mason Glidden: I was thinking back recently to when we started our copilot team and started thinking about gen AI in our product back in 2022.
I remember how transformational GPT-3.5 felt at the time, and man, that feels so archaic now. It feels like we've come so far since that first moment. The only constant has been that change. As we look forward five years, we don't really know where AI is going to be or how much it's gonna continue to evolve. Do either of you have any advice on how to adapt to that reality and how to think about that change going forward?

Pete Hartnett: I don't have concrete advice here, but I think the way to think about this is: what are the investments you can make today that will continue getting better as the models continue getting better? A good example of this is something like AI Composer. We built it at the first moment a model could do what we needed, with the understanding that the models are gonna keep getting better and the feature's gonna keep getting more capable just because the technology is improving, right? So how can you chart a path where the expected improvements in the technology will continue to make your solutions more powerful and more capable?

Olga Stroilova: I love that approach. It's very insightful, and a great perspective for making decisions: what can you invest in now with the assumption that the technology will evolve? Circling back to regulations and data, investing in your data and your people seems key. Let's get that data clean. Let's make sure any level of AI can use it. But also upskilling: upskilling myself and my workforce, or upskilling your business, to be able to leverage that new AI and know how to best use it. It helps people be superheroes. It helps 'em know how to wear the suit, no matter what suit comes along. And also some flexibility and comfort with change. It's okay if change happens; we just have to know how to adapt with excitement, joy, and the potential for creativity to best use it. I do like to also keep an eye on how others in the landscape are using it and get some best practices from them, and I think we would also try to do that with our customers, sharing those best practices as they come up.

Mason Glidden: Yeah, that's some great advice. Just always keep experimenting, keep testing, keep learning. Thanks for joining me here today, Pete and Olga, and thanks to everyone listening. If you enjoyed the conversation, I encourage you to go ahead and subscribe to Augmented Ops wherever you get your podcasts. Bye, everyone.

Narrator: Thank you for listening to the Augmented Ops podcast from Tulip Interfaces. We hope you found this week's episode informative and inspiring. You can find the show on LinkedIn and YouTube, or at tulip.co/podcast. If you enjoyed this episode, please leave us a rating or review on iTunes or wherever you listen to your podcasts. Until next time.