Voiceover: You're listening to Augmented Ops, where manufacturing meets innovation. We highlight the transformative ideas and technologies shaping the front lines of operations, helping you stay ahead of the curve in the rapidly evolving world of industrial tech. Here's your host, Natan Linder, CEO and co-founder of Tulip, the frontline operations platform.

Dominik: This week on Augmented Ops, we're switching things up from our usual formula. For this episode, we're introducing Erik Mirandette, Tulip's chief business officer, as your guest host. With that, I'll let Erik take it away.

Erik: Dominik Obermaier is the CTO and co-founder of HiveMQ. He has spent the last 10 years helping customers build the data foundation and insights they need to create new connected products, find new efficiencies through automation, and scale their business within the demands of real time communication environments with MQTT. He's worked on a number of projects with HiveMQ's customers, from BMW to Daimler to Netflix to SiriusXM. So welcome to the podcast. I'm looking forward to our conversation this morning.

Dominik: Thank you, Erik, for having me.

Erik: Dominik, you have an interesting background here. I'd love to hear more about this. So you went right from university, right from undergrad, and co-founded HiveMQ, and have been here as the co-founder and CTO for the last 10 years. What led you, as a young university graduate, to take on this fairly audacious mission?

Dominik: It's very, very interesting. So even before I studied, I worked in companies; I come from the IT side. I started as a programmer in my very first job, and the company was developing MES systems for automotive companies in Germany. And in the first few weeks of my job, I had the fortune to be on a big project at an automotive customer where they had a rollout for, let's say, a new iteration of the vehicle they produced there. And in my first weeks on the job I saw, in real life, what happens if a big manufacturing line stops working and the people there are unable to fix it in a timely manner. This deeply impressed me, and I couldn't believe how much money was at stake back then. Really, this story, where you had a complete breakdown which took more than 15 minutes to recover the manufacturing line, made such an impression on me that I was really convinced that we are not taking reliability seriously enough when it comes to computer systems, and especially on the IT side. This was one thing that I always deeply cared about. And when we founded the company, this was one of the key things: reliability, reliability, reliability. And what led me to co-found a company was really that I was looking for an environment that I actually wanted to work in, that allowed me to grow, but that also did something very interesting. And the funny thing is, we didn't start there at all. So we really started the company with automotive companies, but on the cloud side. We helped them build the very first connected car platforms, which connect millions of cars. And this is how HiveMQ, as an MQTT broker, was really born. Back in 2012, MQTT, which is quite a different story now, was completely unknown. IBM had a commercial offering out there, which was based on hardware. There was an MQTT broker called Mosquitto in its very first version.
And then HiveMQ was, to my knowledge, the first commercial MQTT broker on the market, which was then used, for example, with, you mentioned BMW as a reference customer, which we have, but also other German OEMs, in order to connect the cars. And with Daimler, who is also a reference customer of ours, they started in 2014 also deploying MQTT technologies inside factories. And this is where we are now, really: after many years, MQTT, which came from connecting devices with the internet, has moved much more onto shop floors and into factories.

Erik: You said something kind of interesting that I want to double click on here. The thing that prompted you to take this problem set on was people not taking reliability seriously enough. But then you also said the way to solve this was a cloud native MQTT broker. A lot of times when I talk with folks, they see reliability and cloud native offerings as being in tension. How do you have something that has a dependency on the cloud but is also better in terms of reliability, high availability, et cetera?

Dominik: Let me elaborate on this. So here at HiveMQ, what we're building is what we call a central nervous system. Cloud might play a role, but especially in manufacturing use cases, cloud doesn't play as big of a role, at least when it comes to the mission critical parts of it. The customers we work with usually have a multi-topology deployment. This means there's usually a cloud component involved, but there's also an edge component, or actually multiple edge components, involved. So in connected car scenarios, where you connect cars while you drive, you connect them over MQTT, usually over a mobile network, and in this case you don't need any edge deployments. It's very unusual to have an MQTT broker in a car, for example. I mean, there are companies doing that, but this is more of an unorthodox kind of deployment. Usually the thing itself connects to the cloud. When you have a factory, as an example, or multiple factories, and most of our customers have multiple factories they connect together, you usually have at least one cloud deployment, sometimes multiple deployments in different regions. For example, if you do China and Europe and the US, you usually try to separate them for data privacy reasons. And then you have factories that must, under every circumstance, function without cloud connectivity. This means you have a local deployment of an MQTT broker, or even a network of MQTT brokers, that can connect to the cloud and are connected to the cloud, but you usually do not have any critical manufacturing processes attached to the cloud. We also strongly recommend not doing that. When it comes to analytics, when it comes to machine learning use cases, this is a different story, but there you usually do not have that real time connectivity requirement. You can have it, but it is usually not a must.

Erik: Help orient me. What specific problems, like, if I'm a customer and I want to use HiveMQ, the problem set that you were initially responding to is a lack of reliability from the MES system, and specifically, what you said is that the people who were responsible for these systems didn't also have the ability to fix or understand or make improvements to the system. So I'm struggling a little bit. Can you help connect HiveMQ's product offering with the original problem set that led you to start this company?

Dominik: Oh yeah, absolutely. So HiveMQ, first of all, it's not an MES system. HiveMQ is a messaging middleware.
This means HiveMQ is being used to transmit data and data packets between machines, applications, and also humans, and this at very high scale. So we have customers connecting, as an example, millions of cars, 20 million plus cars, but also with very high throughput, which is especially important for manufacturing use cases, or industrial IoT use cases in general. And reliability is something that is usually not per component; reliability is something you want for an end to end system. And the experience I had back then, I mean, the problem was with an MES system, but I think reliability comes from how you engineer and how you develop software. How important is quality for somebody? And this is something that I was unimpressed with at some of the companies I worked with, because at the end of the day, you let customers down by creating low quality software, especially when a lot of money is at stake. You want to make sure that the software is working all the time. And I mean, there are multiple ways to achieve that, especially in a world where edge and cloud converge. It's also very important to allow for highly resilient deployments. So HiveMQ is a platform that is built for the edge, but also for the cloud, because you cannot simply apply cloud native principles on the shop floor: the IT teams administrating software in factories cannot rely on cloud components, and the pieces of software you install there are usually very different from the cloud software you have these days. So reliability for me is really a concern that should affect everything in the software stack, really from edge to cloud. And this is a principle that here at HiveMQ we take very seriously. HiveMQ is known for its resiliency and its high availability, and in the end, we are known for things running reliably 24/7, 365 days a year. Take the example of Daimler: for more than 10 years they have been using the software in production, and even with more than 10 years of upgrades, there was not a single downtime. And this is why people are excited. It's really because they know they can run their business on these kinds of technologies which we provide.

Erik: Let me ask, I want to challenge one thing that you're saying here, which is: why can't you rely on cloud for mission critical software? I rely on it for my banking, I rely on it to get access into the building in the mornings, I rely on it for ERPs, you know, SAP, everybody's moving to the cloud here, and the reliability of that infrastructure is very, very good. I would argue better than a lot of the on prem installations you see across many customers. Why are you taking such a hard stance on this?

Dominik: It's a very good question. So I'm really driven here by what we see with our customers. Our customers in the manufacturing space span pharmaceutical manufacturing, a lot of automotive manufacturing, and discrete manufacturing globally, and outside of manufacturing, when it comes to industrial IoT overall, there are also a lot of renewables and oil and gas suppliers we work with. And it might be true that there's a lot of good infrastructure where you can rely on cloud, but especially when you have latency sensitive use cases, you usually do not get the guarantees.
I mean, it might work in the happy cases, but you usually do not get the guarantees you want to rely your business on when it comes to cloud connectivity. At least this is what we see with our customers. So the way they think about it is that the default is that edge to cloud is connected all the time, but even if there's an issue with the internet connectivity, with the wide area network connectivity, under no circumstance do you stop producing things locally.

Erik: I've heard this fear quite a bit; however, in practice I've not actually seen it happen, you know? Take it from Tulip: Tulip is a cloud native tool. We obviously have an edge component as well. If we think about the Tulip platform, it's cloud native, we have edge hardware, and the applications themselves run locally on the client, so all that logic is executed locally. That said, I would say five years ago this came up all the time: look, you know, we need to be on prem, we are afraid of the cloud. Over the last five years, I would say those concerns have pretty consistently decreased. So people are less concerned about it now than they were, and I think they're going to be even less concerned about it. The fundamental bet here is that the network is increasingly ubiquitous, and that network comes with redundancy: even if your ISP goes out, you still have 5G, and so on and so forth. It's becoming like electricity in many, many ways in terms of its reliability and our dependency on it as critical infrastructure. What I will say, though, is that over the course of eight years, across Tulip's customer base of north of 600 sites, I could count on a single hand how many times it's actually been an issue, and we're talking, even in those instances, seconds, maybe a minute or two. Now, the point that I do agree with you on is where you have latency sensitive use cases, where you're talking about the potential of latency being problematic. So for example, I'm not going to say your SCADA system should be on the cloud. If you're going to be running an automated line or something like this, you don't want your control system on the cloud when you're talking about high reactivity, you know, sub 10 millisecond response times. But increasingly the trend I see is that critical infrastructure is moving to the cloud. And I'm not just talking about Tulip. I'm talking about LIMS systems, I'm talking about ERPs, I'm talking about the whole stack. And I think what you're seeing is the costs go down, and I'm not seeing the trade off. I'm not seeing people compromising reliability. In fact, I would say that the uptime for many of these cloud native solutions is actually far better than their on prem, you know, predecessors.

Dominik: These are interesting points. So it seems like there are different customer profiles which have different requirements. The good thing is, if you look at MQTT infrastructures, you do not need to compromise on any of that. If the use case allows you to have always on connectivity, it doesn't really change anything, because if you build a networked MQTT infrastructure from edge to cloud and you can rely on having data transmission from edge to cloud and back all the time, this is great. But even if not, there are ways you can work around it. This is not a requirement. So this is actually pretty good. And you're right, infrastructure improves all the time.
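One of the workarounds Dominik alludes to here, and names as offline buffering just below, can be sketched roughly in client-side terms. This is an illustrative sketch only: `try_publish` is a hypothetical stand-in for a real MQTT client call (for example paho-mqtt's `publish`), and the reconnect hook, topic names, and buffer size are assumptions, not HiveMQ functionality.

```python
import collections
import json

# Hypothetical stand-in for a real MQTT publish call (e.g. paho-mqtt's client.publish).
# Returns True when the edge-to-cloud link accepted the message, False when it is down.
def try_publish(topic: str, payload: bytes) -> bool:
    return False  # wire this up to a real client in practice

# Bounded local buffer for data produced while the WAN link is unavailable.
pending = collections.deque(maxlen=100_000)

def publish_or_buffer(topic: str, data: dict) -> None:
    payload = json.dumps(data).encode("utf-8")
    if not try_publish(topic, payload):
        pending.append((topic, payload))  # keep producing locally, transmit later

def flush_pending() -> None:
    # Call this from whatever reconnect hook your client library provides.
    while pending:
        topic, payload = pending[0]
        if not try_publish(topic, payload):
            break             # link dropped again; retry on the next reconnect
        pending.popleft()     # discard only after a confirmed send
```

In practice, as Dominik notes, the buffering usually lives in the edge broker rather than in each client, but the principle is the same: keep producing locally, transmit once connectivity returns.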
And if the use case is not prone to latencies, it's even better, because then you can really have this always on edge to cloud connectivity. For customers who don't have the luxury of doing this, or who might be overly conservative, which might also be true, there is always the fallback of offline buffering: any time the connectivity is broken, even for a few minutes, all of the data transmission happens afterwards. And this really comes down to the risk appetite the customer wants to take. But it doesn't change how you deploy the technology. It's really just: do you want to have this kind of always on connectivity, or can you live without it? In an ideal world, you have it on all the time.

Erik: Well, maybe that's a good segue actually. You know, we hear a lot about MQTT. We hear a lot about unified namespaces. Let's pause for a minute for the folks who may have heard these terms discussed, but may not know exactly what they mean or refer to. Can you tell us: what is MQTT? What is a unified namespace? How is this different from OPC UA? And, you know, recently I saw Erik Bernstedt post something on LinkedIn about OPC UA over MQTT. Can you help me and our audience understand these concepts, how they relate, and what we're actually talking about when we throw these things around?

Dominik: This is, I think, very important to untangle, because unified namespace and MQTT get conflated a lot these days. So let's untangle this by starting with MQTT, then talk about UNS, and then also talk about OPC UA. MQTT is a communication technology that works vastly differently from traditional request response technologies like HTTP for the web. MQTT is a technology that works on the publish/subscribe pattern. This means you are decoupling producers of data from consumers of data. And MQTT is a very old communication protocol that was invented in 1999 for monitoring oil pipelines. It was invented for a project at Phillips 66, was proprietary technology, and was shelved for quite some time, until in 2010 the specification was made open again. It was released as a royalty free document, which really just means you can implement MQTT, a client or a broker, without getting sued for doing that. This was a huge deal, because a community formed around it. There was a gentleman called Roger Light who created the first open source MQTT broker, called Mosquitto, around 2010, pretty much as a hobby project. And this is when people started using MQTT for smaller use cases. MQTT was especially useful for home automation use cases, but it also started to crawl more and more into these kinds of big deployments of many connected devices, cars as an example. So you have car manufacturers who produce millions of cars each year, and they have a data transmission cost attached to any kind of online services. Usually you have the mobile network, and especially 10 years ago, bandwidth wasn't as good as today, and data transmission costs were much higher. Even today, data transmission costs are still pretty high, but compared to 10 years ago they get lower and lower. And this is where MQTT replaced HTTP based communication services at pretty much all the connected car vendors on the globe, just because of the cost reduction and also the scalability it provides. This is how MQTT really started going into commercial offerings.
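A minimal sketch of the publish/subscribe decoupling Dominik describes, assuming the Python paho-mqtt client library with 1.x-style callbacks, a broker reachable on localhost:1883, and a made-up topic hierarchy; the producer knows nothing about who, if anyone, is consuming.

```python
import time
import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

# Consumer: subscribes once and is then pushed every message that matches the filter.
def on_connect(client, userdata, flags, rc):
    client.subscribe("plant1/line4/#", qos=1)   # '#' = everything under this subtree

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

consumer = mqtt.Client()            # paho-mqtt 1.x constructor; 2.x also wants a CallbackAPIVersion
consumer.on_connect = on_connect
consumer.on_message = on_message
consumer.connect("localhost", 1883)
consumer.loop_start()               # network loop runs in a background thread

# Producer: fires a message at a topic without knowing anything about the consumers.
publish.single("plant1/line4/press/temperature",
               '{"value": 71.3, "unit": "C"}', qos=1, hostname="localhost")

time.sleep(1)                       # give the background loop a moment to deliver
consumer.loop_stop()
```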
And, let's say a few years ago, many companies also started thinking about this decoupled nature of the publish/subscribe protocol and implementing it in factory environments as well. You've always had traditional messaging services, IBM MQ as an example, that have been in factories forever. But the advantage that MQTT provided compared to traditional messaging technologies is the extreme decoupling. You make it extremely easy on the clients to implement, but you make it incredibly hard on the broker side to actually fulfill the promises of the protocol. It's very, very hard to build a proper MQTT broker; it's very easy to build a simple MQTT broker. This is something that we also found very surprising, actually: how hard it is to build a mission critical MQTT broker. But it really started like that. And then later on, people found the problem with the MQTT protocol, which is also its biggest advantage: it's just a communication protocol that does not care about the data being sent. You can send any kind of data. You can send data for connected cars, you can send data for discrete manufacturing; you can send anything you want. The protocol does not care. And the payloads can be pretty huge: you can send up to 256 megabytes of payload, which is really, really high for a communication protocol. And this is where companies started to think about how to standardize, to make sure that the producers of the data and the consumers of the data somehow can talk. And this is where Sparkplug, which is also getting more and more popular these days, especially in North America, started to emerge, to say: okay, we have an opinionated way of describing how the payload should look, how the MQTT topic structure should look, and so on, and also what the behavior between the applications, between the consumers and producers, is, what the contract is. This is where technologies like Sparkplug started to emerge. On the other hand, and this is where unified namespace comes into play, people were thinking more about concepts, because unified namespace, to make it very clear, is a concept, not a technology. While Sparkplug and MQTT are technologies, UNS is a concept. And it got more and more popular over the last few years because it describes a way to take OT data and IT data and make it accessible to the business. Given that most technologies work with point to point protocols, or sometimes bus protocols, that are not really designed for end to end data movement across an organization, you will certainly have a lot of data silos, and still to this day you have a lot of data silos. The more you go to the OT side, the more data silos you will find, because there isn't really an incentive for most vendors to actually open up and be interoperable. Some companies do, but overall the incentives are still pretty low there. And with UNS, the approach is really to bring everything into an MQTT infrastructure, because it allows for flexible namespaces and flexible payloads. So you can bring in anything from machine data to, we have a customer who brings in their personnel planning, so you can access the personnel plans there. And the cool thing about MQTT technology is you can make real time subscriptions to data you're interested in, but to some extent you can also query the data. And this is where the UNS concept really shines: it's making this central hub accessible to OT folks and IT folks and to applications.
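To make the central hub idea concrete, here is a small, hypothetical sketch of what publishing into a UNS-style namespace can look like, again with paho-mqtt against a broker on localhost. The ISA-95-flavoured topic hierarchy, the company and site names, and the payload fields are all invented for illustration; they are not a prescribed UNS layout or Sparkplug's topic scheme.

```python
import json
import paho.mqtt.client as mqtt

# Invented ISA-95-flavoured path: enterprise/site/area/line/cell/metric.
TOPIC = "acme/stuttgart/bodyshop/line4/press01/oee"

client = mqtt.Client()              # paho-mqtt 1.x constructor; 2.x also wants a CallbackAPIVersion
client.connect("localhost", 1883)
client.loop_start()

# retain=True keeps the last value on the broker, so late joiners (a dashboard,
# an MES, an analytics job) see the current state the moment they subscribe.
client.publish(TOPIC,
               json.dumps({"availability": 0.93, "performance": 0.88, "quality": 0.99}),
               qos=1, retain=True)

# A consumer interested in every OEE value at the Stuttgart site, whatever the
# area, line, or cell, subscribes with single-level '+' wildcards instead of
# wiring up point-to-point connections to each source.
client.subscribe("acme/stuttgart/+/+/+/oee", qos=1)
```

The same namespace can then carry non-machine data too, such as the personnel planning example Dominik mentions, under its own branch of the hierarchy.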
I don't think we have seen a unified implementation in the market. Every company, if they build a UNS, a unified namespace, is taking their own approach. I think there are open questions when it comes to data quality and other things, how to do that, and especially also interoperability. But we see many organizations, and for all of them the advantage of having a central hub to query data and get access to data is such a game changer that they also accept the drawbacks, because the technologies around UNS are still maturing, and the vendors entering this space are pretty much still starting out and trying to figure out the best way to build an interoperable UNS.

Erik: Interesting. So at the risk of massively oversimplifying what you've just explained, if I think back to the traditional ISA-95 Purdue model, that describes point to point communication. This system is going to talk with that system, this machine is going to talk to that software, this is specifically the connection it's going to use, this is the information it's going to send, and then it's got some intended job. This is kind of the old or traditional way of thinking about how to implement these IT infrastructures. And what you're saying makes MQTT different is that it completely moves away from this paradigm: literally anything can produce data, it publishes it, and then anything else, any other system in the network, can subscribe to that data, and it's just a payload. It can be images, it can be a JSON blob, it can be a string, it can be literally anything, and MQTT doesn't presume to know what the needs for this data are going to be and doesn't have an opinion about how this data is going to be used. It's just a mechanism that exists to capture this data, make it available, and then let other things access it. And then the unified namespace concept is basically a place where you say: okay, let's provide some structure that we can use to help navigate this space, right? We're going to put all of this in the same place, and we're going to have some way of labeling these topics, because this is going to very quickly become, I think, overwhelming, or potentially, the term sprawling chaos comes to mind, the potential for sprawling chaos. And so basically the concept of unified namespace is how you avoid the situation where everything's just out there and you have no idea how to access it; it's bringing it into a structure and making that available. Is that a fair characterization of the core concept of MQTT versus the ISA-95 model?

Dominik: I think yes. I really liked the sprawling chaos you mentioned. I also think, especially when it comes to UNS itself, there's a lot of potential for chaos, because the next question usually, and you see this when companies try to wrap their heads around the UNS concept, the questions which usually arise are: what about access controls? Who should be able to access data? How do I audit whether someone is accessing data? How do I make sure that specific data is good and not bad? You want to make sure that there is no bad data. Even if you have somebody who is allowed to publish some data, you want to make sure that the data is valid.
So an application that, for example, expects a JSON payload gets a JSON payload and not an XML payload, because that would likely break the application. You also want to make sure that data quality is high, and if the data quality is low, you either reject the data or you want to heal the data. And healing data is much more important than it sounds at first, because very often you have the situation where an application has a version one and publishes some data, and let's assume you have multiple factories around the globe that produce the same data. Then you do a rollout of version two of that application, and now they, whatever, add some additional data points to their payloads, while others don't. And if somewhere in the cloud, as an example, another application consumes the data, that application needs to interpret two different kinds of payloads for the same application. So what we see our customers doing is using a technology called Data Hub, which allows you to identify an old version, as an example, apply a new schema and, to make a trivial example, if a timestamp was introduced but is not there, fix this kind of data so the reader can consume it, and so heal the data. So UNS, while it might sound very intriguing, and it is very simple and I think it is an intriguing concept, the devil is really in the details if you roll it out in production. And this is where you also need advanced tools to make sure that the data is consumable and also makes sense for the organization, because by default you invite a lot of bad data into your unified namespace.

Erik: Yeah, I can imagine.
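A rough sketch of the payload healing Dominik describes, in plain Python rather than HiveMQ Data Hub's actual policy language: the schema names, the version field, and the missing-timestamp rule are all hypothetical, chosen only to illustrate upgrading a v1 payload to the v2 shape before consumers see it.

```python
import json
from datetime import datetime, timezone

def heal(raw: bytes) -> bytes:
    """Upgrade an old-style payload so downstream consumers only ever see v2."""
    data = json.loads(raw)
    if data.get("schema") == "press-metrics/v1":
        # v1 payloads were published before the timestamp field existed; stamp
        # them on the way through so every consumer can treat messages uniformly.
        data.setdefault("timestamp", datetime.now(timezone.utc).isoformat())
        data["schema"] = "press-metrics/v2"
    return json.dumps(data).encode("utf-8")

# A factory still running application v1 publishes this:
old = b'{"schema": "press-metrics/v1", "temperature": 71.3}'
healed = json.loads(heal(old))
assert healed["schema"] == "press-metrics/v2" and "timestamp" in healed
```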
Erik: I want to pick this thread back up, but before we do, I want to take a quick detour, because in addition to being co-founder and CTO of HiveMQ, you also sit on the MQTT standards committee, right? And you have for, I think, the last 10 years or so. So can you give us a behind the scenes look, an insider's view: what is the MQTT standards committee thinking about? What are the biggest challenges the committee is wrestling with? And what are some of the more interesting opportunities that you guys are excited about?

Dominik: So, the MQTT specification committee, the governing body of it, is at OASIS, and OASIS is a very well known governing body that also governs a lot of industry specifications and works very closely with ISO. What happens is that every MQTT OASIS standard is also an ISO standard. This is very important, because standardization matters a lot, and ISO standards are globally some of the most well recognized standards used in the industry. MQTT version 3.1.1 was the first formal specification of MQTT; we worked on it in 2014. And this really opened the gates for a lot of implementations, especially open source implementations, because now everybody could implement MQTT brokers and also MQTT clients. And the focus was always to make it simple for a producer or consumer, aka an MQTT client, to interact with MQTT, and to move the complexity into the brokers. So this was one of the core things, and it was always focused on simplicity. It turns out that for some advanced use cases, the feature set of MQTT 3.1.1 wasn't rich enough. So a lot of feedback was gathered by companies like IBM, but also from our customers, and new features started coming. And finally MQTT version 5 arrived in 2018, which is the standard pretty much everybody uses now. What is interesting is that there were some companies heavily working almost against MQTT as a standard, because back then there were a lot of competing standards for Internet of Things communication, like AMQP, and there were also XMPP, CoAP, and others; they were pretty much competing for market share, and MQTT eventually won. This also led companies like Microsoft to enter the technical committee and work on MQTT version 5. And this is, I think, where it really started that big corporations also started to promote MQTT as a technology instead of advising against it. So I would say that since 2018, MQTT is really the dominant standard for any kind of IoT, and it is increasingly becoming the standard for industrial IoT communication as well. And what we worked on in MQTT version 5 was a lot of, let's say, traditional messaging patterns that were introduced, like request response patterns. You can also not only have payloads, you can also attach metadata to the payload that can be interpreted by either some proxies or other applications; imagine something similar to HTTP headers. That was introduced, along with many other features, some small, some pretty big, but it was always important to be compatible with the old version. So we take a lot of care to make sure that any kind of deployment in the field will not break, even if there's a technology upgrade. This is really table stakes, and I think the committee did a good job there. The question now is, what has the committee been doing since 2018? What are we working on? The good thing is MQTT version 5 did not get a lot of feedback asking for additional features. Of course some people are requesting features, but overall the industry seems to be pretty satisfied with that version. There's one big problem, though, and this is when it comes to use cases that require low power radio networks, like narrowband IoT, which is not as interesting for manufacturing use cases, but is extremely relevant for an oil and gas use case, as an example, but also when it comes to high bandwidth, low latency use cases like autonomous driving and so on, with 5G technologies as an example. So at the ends of the spectrum, when it comes to networking, there seems to be something missing, because MQTT is not ideal for that. MQTT, I should mention, is based on TCP/IP, and TCP has a huge advantage over UDP in most cases, because it implements things that are pretty hard to do in a network, pretty much for free. The problem is that TCP is pretty expensive in terms of bandwidth, but also in terms of latency, compared to UDP. And this is what we're working on now: it's called MQTT-SN. This is an MQTT variant designed for sensor networks, but also for low power wide area networks and for high bandwidth, low latency use cases, and it is compatible with MQTT. This is something that we believe will be a game changer for reducing costs on narrowband IoT networks, but also on satellite communication networks, so bringing MQTT to remote areas, but also for building automation as an example, while being almost fully compatible. And we are taking the lead here: one of our employees, Simon Johns, is working as the chairman for this endeavor. And next year is the 25th birthday of MQTT, and I do believe there will actually be a birthday present.
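For readers curious what the MQTT 5 additions Dominik mentions look like in code, here is an illustrative sketch using paho-mqtt with the MQTTv5 protocol option against a hypothetical local broker and made-up topics. The content type, user properties, and the response topic / correlation data pair are the "metadata like HTTP headers" and request/response features he refers to; the specific keys and values are invented.

```python
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

client = mqtt.Client(protocol=mqtt.MQTTv5)   # paho-mqtt 1.x style; 2.x also wants a CallbackAPIVersion
client.connect("localhost", 1883)
client.loop_start()

props = Properties(PacketTypes.PUBLISH)
props.ContentType = "application/json"                                 # native MQTT 5 property
props.UserProperty = [("source", "press01"), ("trace-id", "abc123")]   # arbitrary key/value metadata
props.ResponseTopic = "plant1/line4/press01/replies"                   # request/response pattern
props.CorrelationData = b"req-42"                                      # lets the replier tag its answer

client.publish("plant1/line4/press01/commands",
               '{"cmd": "read-status"}', qos=1, properties=props)
```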
Erik: Are you guys going to get a cake?

Dominik: I do not know. I do not know. But I think the OASIS committee might be thinking about a cake.

Erik: I think you should get a cake. I'm in, plus one for a cake for MQTT.

Dominik: I love it. Okay, I'll bring it up to the committee.

Erik: Bring it up. Say Tulip is advocating for a cake for MQTT. In all seriousness, we love MQTT. I think it's done incredible things for the industry. I think our customers have had a ton of value unlocked through this capability. It simplifies a lot of things. To your point, it's not a silver bullet, and this term sprawling complexity is probably apt, and maybe a good segue into the next thing I would like to talk about with you. You said something interesting at the beginning of our conversation: it's really easy to build an MQTT broker, but it's really, really hard to build a good and reliable MQTT broker. And I want to pick that thread back up as we talk about HiveMQ's journey and some of the work that you guys are doing. I think MQTT has done incredible things for the community. It's gotten us away from this paradigm of: you need to perfectly architect all aspects of your IT stack before you can start creating value anywhere. For a long, long time, that's how things were done. And I think what you're seeing is the reality of the operations and the reality of the environments in which these technologies are being deployed. They're dynamic, they're complex, and you don't always know what the perfect end state looks like. And you have to start building, start creating, start solving these problems. MQTT gives people the ability to say: I know this data is important, I'm using this system for now, and I'm using this data in this way, but I might change that out later down the road, and it doesn't paralyze you. You can still start solving problems. Anyway, for all those reasons, Tulip thinks that MQTT deserves a birthday cake. Now, let's pick this thread back up around HiveMQ. It's super easy to build an MQTT broker; you go online, there are a bunch of free ones. But it's really hard to build a good MQTT broker. Why is that the case?

Dominik: There are multiple technical reasons for that. When we started the company, we were actually surprised how hard it is. So it took us many, many years to get it right, especially at big scale. As I mentioned, if someone wants to get started with MQTT, there is a bunch of cloud services you can use, including HiveMQ Cloud, which is free up to 100 devices. All the hyperscalers have an MQTT offering. There's a bunch of open source software that came and went over the years, and if you browse GitHub you'll likely see a lot of student projects, because it's a very traditional student project to implement an MQTT broker. When it comes to the feature set, there are a few things that make it hard. Number one, you have standing TCP connections, similar to how, if you have an iPhone, it is connected all the time. And this allows for push communication, and the push communication approach is what makes MQTT really bandwidth efficient: if data arrives at a broker, it pushes the data down to the device. This opens up a few very interesting computer science problems. Number one is the number of TCP connections you can hold on a server. That's one thing. Also, when it comes to high availability and reliability, how do you design a cluster? And by the way, this is something HiveMQ invented.
We invented clustering for MQTT, and still to this day we have teams actually working on it to make it scalable and robust, because it's a surprisingly hard problem, as in, it requires a lot of investment to do it right. The other thing is really: how do you scale it? And scaling has two components: the amount of data you pump through, but also the number of devices. The number of devices is not as interesting for manufacturing use cases, because it doesn't happen that you, whatever, go on the shop floor and one day a million new devices show up. This is just not the case. But for Internet of Things use cases, this is actually the case, if you deliver physical goods to your customers that are connected over the internet. Just to give you an example of the scale: we have customers doing 30 million devices on a single cloud installation, with more than 1 million messages at any point in time, and with more than 300 million MQTT topics active. This is multiple orders of magnitude bigger than what you would do with traditional message queues. And this makes it very hard. But, I would say, if you build an MQTT broker that supports 1,000 devices and, let's say, a few hundred messages or data points per second, this is something everybody can build. The problem then is: how do you support it? How do you make it upgradable? How do you, in a running system, update the software version without disconnecting devices and without any kind of data loss? And this is where the majority of the time goes. Usually, if you look at very simple MQTT brokers, these are exactly the kinds of questions that are not addressed. And this is also how customers usually start. Very often customers come to HiveMQ because they want a reliable MQTT broker, very often because they got burned by something else. But they usually stay not for the MQTT broker; they stay for everything you get on top: the day two operations, the tooling we have around it, the analysis methods for finding a needle in a haystack if something goes wrong, and also the kind of policy enforcement. Also, these days we have a product called HiveMQ Edge, which is open source software, that allows you to integrate non-MQTT workloads with MQTT; we integrate PLCs directly, as an example. So yeah, usually customers come for the value added things on top of MQTT.

Erik: Interesting. I want to ask one last question. You know, if I'm a new customer, or a new prospect, or an IT leader or an OT leader, and I'm just starting my journey, what's the number one piece of advice you have to set them up for success on this journey that they're about to embark on?

Dominik: I think the number one piece of advice I would give is: stick with the open standards. Make sure, if you go with MQTT, not to get locked in by something that looks open but isn't. There is this traditional playbook that many players follow of using an open standard and adding some, let's say, non-standard pieces on top, which gives the illusion of being open. Very often, when our customers start, it's the work of innovators in the organization; they do not have the budget for something production grade, or they don't need it yet, or they want to try out different things. Make sure you stick to the open standards, even when marketing suggests something else. You have this issue with other technologies too, and it also applies to UNS.
People try to lock you in with something that's not MQTT. Stick with standard MQTT, not with proprietary functionality, and then move up from there. And if you want proprietary technology, make that decision intentionally and not lightly.

Erik: Good advice. Dominik, thank you for joining us on the Augmented Ops podcast. You know, appreciate the time, and let's talk again soon.

Dominik: Absolutely. Thank you so much.

Voiceover: Thank you for listening to this episode of the Augmented Ops Podcast from Tulip Interfaces. We hope you found this week's episode informative and inspiring. You can find the show on LinkedIn and YouTube or at tulip.co/podcast. If you enjoyed this episode, please leave us a rating or review on iTunes or wherever you listen to your podcasts. Until next time.