Yaro Tenzer: A lot of this is really driven by the excitement around large language models. If you're not using large language models today in some capacity, you really don't get exposed to what's going to happen.
Narrator: You are listening to Augmented Ops, where manufacturing meets innovation. We highlight the transformative ideas and technologies shaping the front lines of operations, helping you stay ahead of the curve in the rapidly evolving world of industrial tech.
Natan Linder: Welcome everybody to Augmented Ops. This is, I think, a really important episode, because it comes at a moment where robotics is taking center stage, shall we say, once again.
Yaro Tenzer: It's definitely getting a lot of attention lately.
Natan Linder: Yeah. And Yaro Tenzer, a good friend and co-founder and CEO of RightHand Robotics, is joining us today. Welcome to the show.
Yaro Tenzer: Good to see you, Natan. Thank you for inviting me.
Natan Linder: Yeah. So what is RightHand Robotics' main offering? What do you guys do?
Yaro Tenzer: We enable robots to manipulate everyday objects, and we really apply it to supply chain and logistics operations, automating the tasks that are mundane and very repeatable. So really the smarts, the AI that runs behind the machines.
Natan Linder: You, probably more than me, have been actively involved in trying to build all sorts of robots for the past almost 20 years, I want to say.
Yaro Tenzer: All my life.
Natan Linder: Yeah. Tell us the story. How'd you get into that?
Yaro Tenzer: It was fascinating, because I was really interested in manipulation and the sense of touch. My PhD was actually from Imperial College London, focusing on surgical robotics and how we empower people to do telesurgery. And then I got a call from a professor at Harvard, Professor Rob Howe. At the time there was a team from Harvard, Yale, and iRobot focused on developing end effectors, really building capable robotic hands for a lot of different tasks. It's fun to think that it was around ten years ago now. Our team really focused on manipulation, and our team won that challenge, a DARPA challenge at the time, led by Gill Pratt.
Natan Linder: Gill, who is now running the Toyota Research Institute, correct?
Yaro Tenzer: Yeah, he is the head of TRI. And for us it was a realization: hey, if we take these capable hands and attach data and visual processing to them, we can actually start building smart machines.
Natan Linder: That's assuming the machines can see.
Yaro Tenzer: Correct. And the trend at the time, when we looked into this, was: hey, the cameras are getting better and cheaper, really driven by cell phones. And if you can take the data from these cameras and do the processing, like the trend toward the cloud, the path to understanding was clear, right? So we said, hey, if we start building now, we'll be best positioned as these tools get better. And now, seeing it all come together, it's: wow.
Natan Linder: What is this "now" thing? What is this moment that you think we're experiencing in robotics?
Yaro Tenzer: A lot of this is really driven by the excitement around large language models. People finally realize that if you're not using large language models today in some capacity, you really don't get exposed to what's going to happen.
Yaro Tenzer: And some of the interesting things around this are really the ability of large language models to also generate and process images. So it's really exciting to see this evolution. Over the last few years, this push where we're processing an image, understanding the surroundings, all of that is really coming together. Some people call it VLAs: vision, language, action models. These capabilities coming together, that's the exciting moment.
Natan Linder: I started hanging out with and looking at robots when I moved over with Rodney Brooks, Professor Rodney Brooks, at the time when he started Heartland Robotics, which became Rethink Robotics. It was the dawn of cobots, the '08 era. Rethink Robotics built Baxter and then Sawyer, I think well ahead of their time. Two arms, six degrees of freedom each, a face, but the face is a screen, and a bunch of cameras.
Yaro Tenzer: Yep.
Natan Linder: Universal Robots was just dawning, not yet the phenomenon, or the attack of their clones. Have you counted how many UR clones there are by now?
Yaro Tenzer: Yeah, we get reached out to so many times, everybody trying to sell us a robot, which is fascinating.
Natan Linder: How much did the price drop over the past decade, from the mid-range UR to the clones? How cheap has that cobot space gotten?
Yaro Tenzer: I think you can buy an arm for a few thousand dollars.
Natan Linder: That's crazy. That's the moment in robotics when we met, and I was trying not to interfere too much with your work. Who knows.
Yaro Tenzer: Yeah. It dropped like ten times.
Natan Linder: So the cost of hardware is dramatically going down. And we can also say that for the cost of sensing, because the sensors have become dramatically more capable.
Yaro Tenzer: Correct. The only thing is, the cost went down, but the reliability has suffered. So there are definitely cheaper arms, but the question is, would you put them in production? And if something goes wrong, what will happen? I think that equation is still a challenge.
Natan Linder: Even the reliable tier-one products dropped in price as a result. So you can still buy the reliable stuff cheaper.
Yaro Tenzer: Correct. Yeah, a hundred percent.
Natan Linder: Okay. So we talked about the hardware equation, talked a little bit about sensing. Now we're getting to the hype moment, no good episode without some hype. What is the story with humanoids?
Yaro Tenzer: Yeah. It's a fascinating moment where some companies are seeing this ability to put in these end-to-end models and really try to enable robots, right? And I think this is really a trajectory; the question is where we are on that trajectory. We're all fascinated by just the human shape. So building something that walks around, has arms, and has some type of a head, one can argue that it can fit a lot of environments. At the same time, I think it's very speculative. And you mentioned Rodney Brooks; he recently wrote a really nice article saying, hey, not in the next 15 years.
Natan Linder: And this is coming from the guy whose life mission, in part, was to build useful humanoid robots.
Yaro Tenzer: Yeah, I don't know if he was really focused on humanoids as such.
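A minimal sketch of the vision-language-action loop Yaro describes at the top of this exchange may help. Everything here, the camera, robot, and policy interfaces included, is a hypothetical stand-in rather than any specific product's API:

```python
from dataclasses import dataclass

@dataclass
class Action:
    joint_deltas: list[float]  # small per-step joint moves
    gripper_closed: bool

class VLAPolicy:
    """Hypothetical stand-in for a learned vision-language-action model."""
    def predict(self, image, instruction: str) -> Action:
        raise NotImplementedError  # a real model would run inference here

def control_loop(camera, robot, policy: VLAPolicy, instruction: str, max_steps: int = 200):
    """Closed loop: perceive, predict a language-conditioned action, actuate."""
    for _ in range(max_steps):
        image = camera.capture()                     # perception
        action = policy.predict(image, instruction)  # e.g. "pick the red box"
        robot.apply(action)                          # actuation
        if robot.task_done():
            break
```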
Yaro Tenzer: But to your point about Sawyer and Baxter, really enabling robots and putting them in factories to do things, he definitely spent a lot of time trying to solve that aspect. And around the humanoids, obviously Elon with Tesla is driving a lot of this; I figure just seeing some of the hype around it. But it's really the cost of hardware that dropped substantially, and the computational capability that came together. And the open question, which hopefully we'll touch on, is: okay, what can they do, and what value can they provide to society? That, I think, is very speculative as well.
Natan Linder: Okay. Before we get deeper into that, I think it's fair to say you would characterize yourself as a roboticist.
Yaro Tenzer: Sure. Yeah.
Natan Linder: And you hang out with roboticists. So let's dissect this humanoid thing, because I've written some sort of article, some may say a rant, about humanoids recently. I think eventually, when the hardware and software equations meet well with the use cases, will we have humanoid robots? Likely. There will be all sorts of consumer-driven use cases in our environment, and perhaps it's going to be a pretty interesting world. I think that falls into how we are continuously building our science fiction, and there's at least some empirical evidence from the past hundred-plus years that humanity does that, which is a relatively short amount of time, but that's what technology does, right? So okay, I guess that can happen. But let's be more in the here and now, the next couple of decades, and talk about industrial environments and production, all that kind of stuff. Not to belittle Elon's ability to make reality for all of us, and then we'll all need to eat crow and agree that there will be thousands of humanoid robots if he wills it to be. But when I talk about this, it reminds me how Elon, three or four years after the Tesla Model 3 debacle, had this very candid interview saying, oh, we over-automated and we took the humans out too soon, or something. So at least that's empirical proof that he can also be wrong. I just want to say that. But here's my real, somewhere between rant and beef, with the humanoids. Okay, so as a roboticist: assume the dexterity, the perception, the span of control, so envelopes and safety, and assume the teachability, as we say. How do you train those robots in free, open-ended environments where they have to use a lot of zero-shot techniques on anything they see? Because that's their brains right now: this new multimodal stack. It's not just VLMs; it's VLMs plus, I don't know, whatever's left of ROS. But then you also go to the hardware side of things. I was looking at how much energy efficiency you need to actually have human-like locomotion and gait, and all that kind of stuff. Then you think about what you need to do in a production environment. Let's just assume you solved all those things. You build the perfect mechanical human. Then the question is, why the hell does it need to be a human?
Speaker 4: Right.
Natan Linder: Why do you need to put all the energy on the gait? Why do you need two eyes and not 50 eyes? Why two hands and not 20 hands, to exaggerate? So my point, simply put: if you can build a humanoid, you don't need to build a humanoid. You can build so many other form factors.
Yaro Tenzer: Yeah.
Yaro Tenzer: I a hundred percent agree with you. And I think the fascination with the humanoid actually comes from, I think the first time something like that was mentioned was the golem. I think it comes from Yiddish, from a play or something like that: this fascination with the mechanical human. It just drives imagination. That's what makes people excited. But from a business perspective, maybe you need five hands, or six hands, or twenty, for the sake of it. And as we go through this journey, it will really be: okay, what's the business case? What's the application, and how should it be applied? A cynical person obviously says, hey, a person with two hands can do everything, so therefore this is the most optimized form factor.
Natan Linder: Assuming that's your environment.
Yaro Tenzer: Exactly. That's the point: assuming the environment.
Natan Linder: The rationale for the business case is how we think about our industrial base today, and these are US numbers, right? We talk about 3.8 million workers needed in the US by 2033, and at least 2 million of those are unfilled jobs. Those people are not coming from anywhere, right? So the idea that you can replace some of those working hands with robots, I think, necessarily means you need to design production lines differently and rethink the role of humans. But the one-to-one replacement notion, we don't have those humans so we'll just make them, is crazy. Very appealing, if you can make them work. But let's dive into your RightHand experience, because when you think about the type of operations you've been supporting, like logistics, what have you learned? You basically rethought goods-to-picker and picker-to-goods, and you know how to do complex human things like sorting or unsupervised picking. Tell us about replacing humans with real robotics.
Yaro Tenzer: Our experience shows that the facilities that benefit from automation, or look at automation, are the facilities that have volume. A small shop doesn't usually start looking into automation. When the volume starts growing, they start looking into optimizing processes and putting in automation. What's interesting is that as soon as people deploy automation, they really start relying on it. And so our experience is that if the robot is not operating at 99.95% accuracy and ability to do the task, nobody really needs it, because people cannot trust it.
Natan Linder: But I want to challenge you a bit, and then I want to ask you for a concrete example. Our client base, the folks we serve in our ecosystem, is by and large large-scale enterprises: multi-site operations, factories, warehouses, labs, remanufacturing sites, logistics, everything in between. From the outside looking in, each is a very large enterprise. But when you get really close, from the loading dock to the shop floor, as they say, every single one of those ginormous enterprises is actually a collection of smaller operations, sometimes very heterogeneous and very autonomous under the same enterprise roof. So I'd actually argue that it is small shops, as you described; they just live inside larger organizations.
Yaro Tenzer: Yes, but the facility, the overall operation, is driving some volume.
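Yaro's 99.95% point suggests a simple dispatching pattern. A minimal sketch, assuming a hypothetical model that estimates per-pick success probability: route a pick to the robot only when it falls inside the trusted scope, and escalate the rest.

```python
TARGET_ACCURACY = 0.9995  # the bar Yaro cites for operator trust

def dispatch(pick_candidate, model, confidence_threshold: float) -> str:
    """Route a pick to the robot only when it falls inside the trusted scope."""
    score = model.success_probability(pick_candidate)  # hypothetical learned estimate
    if score >= confidence_threshold:
        return "robot"            # inside scope: automate
    return "human_exception"      # outside scope: escalate instead of eroding trust
```

The threshold would be calibrated offline so that attempted picks meet the accuracy target; everything below it stays with people, which is one way to read "adjusting the scope."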
Natan Linder: Do you mean the locality of volume, basically? That's what you're saying.
Yaro Tenzer: Of the facility. One of our public customers is Staples, and we scaled with them across three facilities, building new facilities with them. You're absolutely right: when you walk into facilities, there are different teams in charge of different things, but overall, the facility drives a lot of volume. This creates an opportunity to focus on automating some tasks and really elevate people to do other things. Because the facility is organic and evolving, and this is what I love about Tulip, things change all the time. So you need to be able to adjust. But some of the tasks need to be automated regardless of what the facility does.
Natan Linder: Yeah. Can you walk us through a case where here was a bunch of humans, and then you put in your system? What does that process look like? What actually needed to happen on the operations side, the organizational side, and the people aspect of this?
Yaro Tenzer: Yeah, happy to do it. On my previous comment about the need for the robots to perform at 99.95% accuracy: it's all about the scope of the task, right? Because at the end of the day, if your scope is too big, then the performance cannot be as reliable, and people will not use the automation. So one of the challenges of deploying a robot is really adjusting the scope so that the robot can perform in its environment really reliably and accurately, so you can trust it from an operations point of view.
Natan Linder: I think trust is a really important keyword, because you said earlier to look at the world through the lens of LLMs and VLMs. But I think what is really fascinating to folks, beyond the sci-fi fascination with the humanoid, is the fact that AI is getting a body. Embodied AI, and now there's a new buzzword, embodied or physical AI, that kind of stuff. We can call things a robot, but it's basically a very reliable, repeatable machine that does task X, Y, or Z.
Yaro Tenzer: Correct.
Natan Linder: And you can program it to do tasks X, Y, or Z in an easy, cost-effective, maintainable way, all the things that make any system useful and TCO-positive.
Yaro Tenzer: So you asked for an example. I'll actually give you two. Let's start with Staples. They have a lot of operations around fulfillment, and that fulfillment goes to B2B or B2C. They have a lot of storage that is already automated, and the way their facilities operate, they bring the goods to a station. Imagine a tote with items in it, and the robot needs to singulate one of them. Say you went online and bought something, a paper box or something like that. The robot needs to singulate it and really put it into your order. So they approached us and said, hey, we need you to handle this whole range of objects. And it was fascinating, because Staples obviously stores and sells a wide variety of objects. For us, it has been about working with this customer over time and applying all these AI tools to increase the range of objects we can handle. It was a fascinating example of building trust: we started at something like 30% of the inventory, and over time we got to around 70 or 75% of the inventory.
Natan Linder: But it's kind of a whack-a-mole, no? Because the inventory always changes.
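One plausible mechanism behind growing from roughly 30% to 75% of inventory is per-SKU gating based on pick history. This is a sketch under that assumption, not RightHand's actual system; the thresholds are illustrative:

```python
from collections import defaultdict

class SkuCoverage:
    """Track per-SKU pick outcomes and gate which SKUs the robot may handle."""
    def __init__(self, min_success_rate: float = 0.9995, min_attempts: int = 50):
        self.stats = defaultdict(lambda: [0, 0])  # sku -> [successes, attempts]
        self.min_success_rate = min_success_rate
        self.min_attempts = min_attempts

    def record(self, sku: str, success: bool) -> None:
        self.stats[sku][0] += int(success)
        self.stats[sku][1] += 1

    def robot_eligible(self, sku: str) -> bool:
        successes, attempts = self.stats[sku]
        if attempts < self.min_attempts:
            return False  # not enough evidence yet; keep this SKU with humans
        return successes / attempts >= self.min_success_rate

    def coverage(self) -> float:
        """Fraction of known SKUs the robot is trusted to pick."""
        if not self.stats:
            return 0.0
        return sum(self.robot_eligible(s) for s in self.stats) / len(self.stats)
```

As models improve and more SKUs clear the bar, coverage grows; when packaging changes, a SKU's stats degrade and it falls back to humans, which matches the whack-a-mole dynamic in the exchange above.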
Yaro Tenzer: A lot of the same things stay the same. Shapes change, and there are all these adjustments, and this is where the operations team comes in, and some of the data, some of the AI, helps the team understand. But in this specific example, the operation to do the task was already there, and we added automation into the existing process. The second example: recently a very large organization reached out to us, and they wanted us to automate a specific manufacturing task.
Natan Linder: What are they making?
Yaro Tenzer: Something in healthcare. And it was fascinating, because the robot's capabilities are not there yet to build the final product. As we looked in, there was a real realization that, from a reliability and performance standpoint, if we adjust the ask, robots can do kitting. The robot can prepare the items for the next station, where the person actually puts it all together and adjusts. So in a way it takes away the quality control, because the robot can validate that it's the right item, make sure there is no damage, and literally prepare the kit for just-in-time manufacturing.
Natan Linder: Yeah. That's why I think there is this strong bond, a relationship philosophically and practically, between robotics and computer vision. I always want to say that robotics is just applied computer vision.
Yaro Tenzer: Sure.
Natan Linder: In a way. Why am I saying this? Because you just described a case of kitting, which humans usually do. Tom, here is kit A, here are the instructions to do kits A, B, and C, here's the logic between them, here's how you check that kits A, B, and C are correct. And now you're replacing that with all sorts of computer vision and so on. So the robot here is just doing the actuation, just the picking up, and it's really just showing a brain: hey, is this what it is? And it needs to talk to a logic that says, hey, is this the right thing to kit, given this condition? It's almost like the mechanical elements of picking and gripping are becoming less the issue. They're not the issue at all.
Yaro Tenzer: Yes and no. Of course, as the roboticist, I have to say gripping is still a hard problem. But really, interacting with the physical world is still really hard, because items change the way they behave. For example, there's a water bottle here, and it will behave differently depending on how it's oriented, how it's placed. For us it's so easy, but not for the machine. So if you're saying, hey, the kit that goes into the next process needs to have one of these oriented in a specific way, now the question is: how do you orient it? Is it the right item? Okay, it's the right item, you confirm it with vision. But actually making sure it's in the right place, in the right orientation, this is the embodied AI.
Natan Linder: But this is where it becomes really interesting, because I think embodied AI also means there will be embodied I next to it, just the normal intelligence, right? But I guess my point is that the way computer vision systems handled this issue is by constraining the problem. You make a box and control the lighting conditions, you use conveyors and jigs, you do all sorts of things to make the vision highly reliable.
Yaro Tenzer: Yep.
Natan Linder: So now you can teach a robotic arm, in certain situations, to use the tools around it, whether it's changing the orientation of the piece or this and that, to get to the same confirmation.
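The kitting example lends itself to a small sketch: before an item enters the kit, vision confirms identity, damage, and orientation, the three checks mentioned in this exchange. The detection fields and action strings are hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sku: str                 # item identity from the vision system
    damaged: bool            # damage flag from inspection
    orientation_deg: float   # estimated pose angle

def validate_for_kit(d: Detection, expected_sku: str,
                     required_orientation_deg: float,
                     tolerance_deg: float = 10.0) -> str:
    """Decide what to do with an item before it enters the kit."""
    if d.sku != expected_sku:
        return "reject_wrong_item"
    if d.damaged:
        return "reject_damaged"
    if abs(d.orientation_deg - required_orientation_deg) > tolerance_deg:
        return "reorient"  # re-grasp or rotate before placing
    return "place"
```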
Natan Linder: So in fact, and this is my bold claim, the robots are working in the service of the computer vision algorithms and the brains, and it's really about making them see and understand.
Yaro Tenzer: So let me build on what you said and actually emphasize it. One of the cool things you can do these days with LLMs and image processing is take a photo of something and say, hey, generate a video out of it. And if you think about it, it's very similar to what you just said. You can take an image of a table with specific items and say, hey, make a video of these items ending up in this space. And now you can feed that into robotic arms and say, hey, help this video become a reality.
Natan Linder: It's the reverse of the VLMs you're describing, where synthetic datasets can be used to train and make robots smart on real datasets like video. Look at what companies like Physical Intelligence are doing, and a bunch of others; there's a whole list of them right now working on new types of robot brains, let's just say, and they're doing really interesting work. Because the classic text-driven LLM was trained on all the text we put on the internet, right? So it's really good at guessing, predicting what would be a great campaign if you give it the right input, and it learned math and coding and all sorts of things that are more or less regular languages. Then it cascaded to the visual. Then it's, oh, create a video of a bunny jumping over the White House and becoming an F-35, and it knows how to do that. But it can also go the other way around; that's what people don't realize. It can say: I'm looking at a video, what's in there? And therefore this can become input for a brain, for understanding the interaction between humans and a robot. Why is that such a big revolution?
Yaro Tenzer: I think a lot of it, once again, is about vision, language, action. You see something, and therefore you can predict the next token, the next image, what will happen, right? You mentioned Physical Intelligence; it's really fun to watch some of the videos and the stuff they're showing. At the same time, look at, for example, the work Russ Tedrake is doing with TRI. They were recently showing a robot taking a knife and cutting an apple, but when you look at the data, I think their success rate was around 70% or something like that on the ability to succeed at a more sophisticated task. And on one of these points I a hundred percent agree with Rodney Brooks: you need to understand the sense of touch. For example, in manufacturing, say you have a bin of switches and you need to take one switch and put it into a rig. When you're reaching into the bin, the switches are not singulated, so you need to figure out how to pick one from the bin. And as you're pulling that switch out, hey, it got caught on another switch. How do you separate them? How do you do some of these more sophisticated tasks? So yes, I agree with you from a vision perspective.
Natan Linder: Unless, of course, you change the switch packaging to foolproof the picking. Great, but then you're building a machine, and you're not doing a general-purpose robot.
Yaro Tenzer: Correct. That's exactly the trade-off: hey, where are the boundaries, and can you constrain the task to be 99.95% accurate?
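Yaro's "help this video become a reality" idea from earlier in this exchange can be sketched as a goal-conditioned loop. The video model and policy here are hypothetical stand-ins, not any named system:

```python
def goal_conditioned_episode(camera, robot, video_model, policy, prompt: str):
    """Generate goal frames from the current scene, then ask the policy to
    drive the world toward each frame, i.e. make the video become reality."""
    start_frame = camera.capture()
    goal_frames = video_model.generate(start_frame, prompt)  # e.g. "items end up in the kit"
    for goal in goal_frames:
        current = camera.capture()
        action = policy.act(current, goal)  # reduce the gap between current and goal
        robot.apply(action)
```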
Yaro Tenzer: And what Physical Intelligence is doing is trying to really increase this envelope. We'll see how this journey shapes out.
Natan Linder: I just think that the robotic brains, or the machine brains if we generalize, will know a lot more about people just by watching YouTube. I don't know if there are enough factory YouTubes, but I think there will be enough approximation of how humans interact in a physical space. I'm pretty sure it'll at least help with safety, for example, and general human-robot interaction, and probably fine-motor-skill tasks, if enough people show their craft work on TikTok, because then you can synthesize that. Imagine a PLM that has simulation attached. So you have a CAD model, the PLM tells you how to build the product, and a simulation product tells you how it's actually built in a factory. Then you go to the first VLM and say, make a gazillion bazillion synthetic data points on how this product is built, for the robotic dataset. Then the robot brain learns that, compares it to the real world, and does it 15,000 times, and that's how they're continuously going to learn.
Yaro Tenzer: So yes, you can do quite a lot with simulation. But our robots around the world are collecting data; they're doing millions of picks a month. And what we're seeing is that some of this data you cannot simulate; you naturally need to see it. For example, we had a pill bottle, and because of the shaking in the tote, the cap came off. So there is a situation where the robot is presented with a bottle and a cap separately, and the robot is being asked to pick an item. The robot needs to have enough intelligence to realize: wait, if you pick just the bottle, you still need to pick up the cap, and maybe put it in the same container for somebody to assemble later. So we're seeing these real-world examples and using that data to train. That's what we're doing at RightHand Robotics; that's how we make it smart. Simulation is great, but you have to be in real facilities to collect this data.
Natan Linder: Yeah. I want to end on a topic that I think spans where I spend most of my time, building new types of production systems for organizations. That involves all sorts of technology, automation and robots, and also some normal, boring technology, I don't know, barcode guns and touchscreens, no fancy AI, and where it meets robots. If we extrapolate the future, I think it's almost an imperative that you need a ton of orchestration if you have robots, classic automation, I don't know, humans. From your perspective, when you see organizations adopt real-world robotic use cases, here and now, not in some overhyped humanoid future, how do they think about orchestration? And what do people need to consider if they want to invite robots into their world of operations?
Yaro Tenzer: So we're focused on tasks where the workflow is really prescribed. From our perspective, the APIs are very straightforward: the robot is a tool to do something, and if it does the task, it reports back. So from that aspect it's quite simple.
Natan Linder: But your robot is not really autonomous. The idea is that we get to autonomy, maybe.
Yaro Tenzer: But it is autonomy. Autonomy of a task.
Natan Linder: Or a set of tasks.
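The "it does the task, it reports back" pattern, combined with the pill-bottle case above, might look something like the following. Event names and fields are illustrative only, not RightHand's API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TaskReport:
    task_id: str
    status: str                              # "done" or "exception"
    picked: list = field(default_factory=list)
    exception: Optional[str] = None          # what the robot could not resolve

def execute_pick_task(task_id: str, scene) -> TaskReport:
    """Run a prescribed pick and report back to the orchestration layer."""
    items = scene.detect_items()             # hypothetical perception call
    if scene.unexpected_state(items):        # e.g. bottle and cap presented separately
        return TaskReport(task_id, "exception",
                          exception="unexpected item state; awaiting policy decision")
    picked = [scene.pick(i) for i in items]
    return TaskReport(task_id, "done", picked=picked)
```

Whether an exception halts the line or lets the robot keep working, as in the Japan-versus-US contrast that follows, would be a policy the orchestration layer applies to these reports.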
Natan Linder: But you have AGVs, AMRs running around. Maybe you'll have some humanoids, maybe you'll have cobots that get repurposed, maybe they have wheels, who knows. And now you're just increasing the level of autonomy. And I argue that autonomy requires insane orchestration.
Yaro Tenzer: So there are different tools doing some of the orchestration, warehouse control systems, which run at a higher level. I do think that longer term a lot of this will be adopted, because these days any robot will have APIs and documentation. You can feed that API documentation into a system and say, hey, use these APIs with this specific machine. Where I think there will still be human involvement is: okay, what happens when something goes wrong? For example, you have a tote and a subdivided tote, right? And say something happened to the divider: it's misplaced, it broke, something. If the robot doesn't know about something like that, it can see it, but it's like: now what do I do? So I think the orchestration software, and humans as well, will need to tell the robots: if you have a subdivided tote and something broke, get rid of this tote, or still try picking, or do something. And by the way, this aspect of prescription is very different across cultures. For example, in Asia, like Japan or Korea, they want the whole process to stop, and they want to understand exactly what happened and when. In some other countries, like the US, there are processes where it's: robot, keep working, we'll figure things out later, because you don't want to stop the process. So the human element, the cultural element of the approach to the process, is still really important and will play into the orchestration input, if you will.
Natan Linder: Yeah. Interesting times.
Yaro Tenzer: Yeah. I was exchanging messages with our common friend Vlad from BCG, and he put it nicely. He said: buckle up, this will be a fascinating few years.
Natan Linder: Yeah, definitely. With that, let's see what kind of robots we integrate into production systems in the next few years. Thank you for joining Augmented Ops.
Yaro Tenzer: Thank you for inviting me.
Narrator: Thank you for listening to the Augmented Ops podcast from Tulip Interfaces. We hope you found this week's episode informative and inspiring. You can find the show on LinkedIn and YouTube, or at tulip.co/podcast. If you enjoyed this episode, please leave us a rating or review on iTunes or wherever you listen to your podcasts. Until next time.