Speaker 2 (00:00.118)
All the FDA wants you to do is call your shots before you make them. They're cool if you miss. They don't mind if you miss a few baskets. Just say, I want to make this basket this way. And you shoot it, you miss. You're like, I wrote down that one didn't work; we're going to make the basket this way now. And if that one works, they're really proud of you for writing down your plan, testing against that plan, and then pivoting when you need to pivot, because you found a better way. Right? So many engineers, especially in software, as well as in the mechanical engineering space, can get bogged down just solving the niche problem that's in front of them at that moment, and then you're cutting your way through a forest of trees. You cut down the tree in front of you only to realize the one behind it's way bigger. Sometimes I have this conversation with management, engineering management, and they're like, you know, we care about quality, but we want to be fast. And that's a very common conversation with quality, because quality is obviously seen as slowing things down. And I say, well, listen, the fastest way you can do this is how we did it back in college: you just push straight to production. Yeah. Just do it, commit to production. That is the fastest way, if that's what you want.

Speaker 1 (01:06.946)
Welcome to the Hard Tech Podcast. And everyone, welcome back to the Hard Tech Podcast. I'm your host, Deandre Hericus, with my usual suspect, Grant Chapman. And we have a very special guest with us, Ashken Razuli, the founder and CEO of Ingenious Solutions. He's been all over the space, from corporate to startups, all in the world of medical devices. Welcome to the show. How's it going, everybody?

Speaker 1 (01:29.08)
Thank you, Deandre. Thanks for having me. It's good to be here with you guys. And this is going to be super fun. I always like talking to people that have played more than one seat in the game, because you can talk from both sides of the table. There's always a balance to all things, whether it's engineering and marketing, marketing and regulatory, or regulatory and engineering. There are two sides to every coin, and talking with someone who gets to sit on more than one side of the table helps dig out the nuance in all the ways you balance those choices. I'll go ahead. Sorry, go ahead. I was just gonna say, it is an extremely underrated quality of one's career to have actually experienced multiple roles. Yeah, having been yelled at helps you yell at other people, right? Knowing what's effective and what's just hurtful.

Speaker 3 (02:16.16)
Sure. And then for the listeners as well, I'd love to hear your journey, from where you started out to now, running Ingenious. For the-

Speaker 1 (02:25.3)
Yeah, for sure. So, you know, I've always loved just coding and software. I started out as a computer science major, then found biomedical engineering and kind of made my way in there. But even in biomedical engineering, I was focusing on software. I was coding for medical applications, I was doing signal processing. So my very first roles in the industry were engineering: doing coding, algorithm development, and then doing testing. And from testing, I made my way into quality management systems. I enjoyed the aspect of quality management systems that is very end to end and big vision, in terms of touching base with every step of product development. It's a really nice big picture. Whoa.
Starting from ideation, you get to engage with product management, all the way down to post-market surveillance and customer service: here's an idea we thought about, and here's how it turned out in the real world. Quality has a touchpoint with every step, so I like that about it. I did at some point also dabble in product management; I think quality management systems actually lend themselves well to product management. But putting all that aside, I ended up making quality management systems my focus because of that, just because it's very broad and it actually gives you a good big picture of how to go about the software development lifecycle. I think, looking around, I belong to a very small niche group of people. There's already a niche group of people that enter at this intersection of regulatory and software in the first place, but even within that, there's a very small number of people that have kind of tried on the different roles. Because it helps you intimately understand, for each role: what are my goals? What are my drivers? It actually helps you facilitate, as a quality management system person working with that role, because you know what they're trying to accomplish.

Speaker 2 (04:30.798)
Yeah, you're trying to convince them to eat their veggies and write it down first, or at least write it down right after they did it, which no one wants to do. All engineers just want to jump in, go straight to dessert and the whipped cream and the cherry, and be like, I want to print this, I want to run this code, let's go test this, can we put this in front of users? Let's go. But the beauty of a quality system is it makes you take a step back, like you said, and look at the whole picture. My favorite way to understand it is: all the FDA wants you to do is call your shots before you make them. They're cool if you miss them. They don't mind if you miss a few baskets. Just say, I want to make this basket this way. And you shoot it and you miss. You're like, ah, I wrote down that one didn't work; we're going to make the basket this way now. And if that one works, they're really proud of you for writing down your plan, testing against that plan, and then pivoting when you need to pivot, because you found a better way. And what I think most people are misguided about, on why quality exists: yes, it exists to make sure we make the same thing every time. It also exists to make sure we're thinking about the problem in a rational way, from the big picture. Right? So many engineers, especially in software, as well as in the mechanical engineering space, can get bogged down just solving the niche problem that's in front of them at that moment. And then you're cutting your way through a forest of trees. You cut down the tree in front of you only to realize the one behind it's way bigger. And I sometimes have this conversation with management, engineering management, and they're like, you know, we care about quality, but we want to be fast. And that's a very common conversation with quality, because quality is obviously seen as slowing things down. I say, well, listen, the fastest way you can do this is how we did it back in college. You just push straight to production. Yeah. Just do it, commit to production. That is the fastest way. See how that goes, guys.

Speaker 1 (06:13.346)
We'll see how this goes.
You know, when you say it like that, at least you get a concession out of people: okay, the fastest way isn't the best way, even in terms of speed, when you look at the overall end to end, including the post-market maintenance that you need to do. The work you're going to have to do to make up for the upfront quality assurance you didn't do is going to be orders of magnitude bigger. And so there's a happy medium. Could you give us an example of that? 100%. I mean, when you think about the quality assurance process for software, it starts with a simple code review, static analysis, then you've got your system testing, integration testing, detailed testing. All of that, essentially, I think of as a funnel that keeps mitigating, not eliminating; it is almost impossible to ship software without bugs. It then goes back to the definition of what a bug is in the first place, but it mitigates how many bugs can go out. I think something like 40% of bugs start at the very first step, which is requirements definition. So the very first step of software development, which is a part of quality assurance, is requirements definition. If someone actually took the time, accurately defined the requirements, went through a process where the different stakeholders looked at them and made sure they fully understood what they meant and

Speaker 1 (07:49.058)
how to test them, they wouldn't then expose themselves to this crunch-time, high-stress testing scenario. Right. Yeah, exactly. Where somebody's just finding bugs instead of fixing them. You're like, wait, I just fixed a bug and I found five more hiding under the rug that one was hiding under. Right. That's not even the worst scenario. The worst scenario is you push to prod and next thing you know, the entire system is down because, I mean, you asked for a specific example: you didn't check backward compatibility. Right. Someone else is on an older version and the thing just crashes.

Speaker 1 (08:23.916)
Yeah, so next thing you know, you've got a flood of customer complaints coming in saying your app is just not loading. Why? Because during the deployment process, nobody documented all the needs for backward compatibility. Right, and this is that whole flow, basically user needs to design inputs to design outputs to verification and validation, the waterfall that everyone associates with a really slow or bad design cycle. And the way I always like to put this to people is: well, you can go through the cycle right once, or you can do it twice or three times or however many times it takes you, doing pure agile, to get it right. So there are certain things that are impossible to do without some gates and some reviews and some structure for how to get there, right? You know, I feel like the terminology being kind of unique to the regulatory world confuses people and puts them in this mindset of, this is some additional thing I have to do in my product development lifecycle, when really, defining user needs is: what problem are you trying to solve? Design inputs. Yeah, and who is using this thing? And who is the user? Because sometimes the end user isn't your only user, right? Especially in medical devices. There's the patient, the clinician, the back of house who's reviewing the data. There are all these different users that have different needs of that product and how it's supposed to work.

Speaker 1 (09:44.846)
100%.
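Picking up the backward-compatibility example from a moment ago: here's a minimal sketch, with entirely hypothetical names and payloads, of the kind of guard the deployment process above was missing. The handler checks the client's schema version explicitly instead of assuming every installed client is current.

```python
# Hypothetical sketch: handle version skew explicitly rather than crashing
# when an older client omits a field the new build expects.

SUPPORTED_SCHEMA_VERSIONS = {1, 2}  # versions this server build can still parse

def handle_request(payload: dict) -> dict:
    """Process a client payload, guarding against version skew."""
    version = payload.get("schema_version")
    if version not in SUPPORTED_SCHEMA_VERSIONS:
        # Fail loudly and safely, not with a KeyError in production.
        return {"status": "error",
                "detail": f"unsupported schema_version {version!r}"}
    if version == 1:
        # v1 clients never send 'units'; translate to the v2 shape.
        payload = {**payload, "units": "mmHg", "schema_version": 2}
    return {"status": "ok", "value": payload["reading"], "units": payload["units"]}

# Backward-compatibility checks that belong in the V&V suite,
# written down before the push rather than discovered after it:
assert handle_request({"schema_version": 1, "reading": 120})["status"] == "ok"
assert handle_request({"schema_version": 3, "reading": 120})["status"] == "error"
```

The specific check matters less than the habit: the requirement, that older clients must still work or fail gracefully, gets written down and tested before the deployment, not reconstructed from a flood of complaints afterward.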
And so this right there is not something that is additional work added to your product development process. This is what a good product manager should do anyway. Who are the stakeholders? What problem are you solving for them? That's what user needs are. And then design inputs, as unfamiliar as the name sounds, is: okay, what is your product going to do to solve all that? Yeah, the product needs to work when it's hot out and when it's cold out. Awesome. Well, we're going to throw it in a thermal chamber at this temperature and at that temperature, and show it still works at both. It works, right? It's a translation from how the user expects it to behave to, as I always joke, how an engineer can put it on a checklist: I did this thing, therefore I met that need. That is the simplest translation to go from a user need to a design requirement. Something that I loved that you had on your website is your No-BS QMS manifesto. It has four key pillars. Could you dive into that a little bit? I think that was really interesting. Yeah, you know, the No-BS manifesto was kind of the culmination of my almost 15 years of experience in the industry, having a lot of the same conversations and noticing that there are a lot of underlying paradigms. Obviously the content of the conversations changes, but it's the underlying paradigms that lend themselves to an effective but also an efficient quality process. And I say effective meaning you do actually ensure quality and you have a quality product, but efficient as in you do it in the least burdensome manner. And this kind of goes back to the FDA's least burdensome principles, which have been out forever. The FDA at the end of the day has said, these are the things we're telling you; we are trusting you to figure out how to achieve the objective in the least burdensome manner. At the same time,

Speaker 1 (11:38.254)
being in the software world, I'm very familiar with the agile manifesto and how it's got four principles. And so that's how the No-BS manifesto was born: marrying the two together. Oh, that's fun. So essentially it's got four principles which I think, if applied at any organization, are going to lead to your best, most efficient quality outcome. The first was quality over proceduralism. We see that be a problem, and I can deep dive into every single one of them as necessary, but we often see that be the problem with quality departments, because it's really easy in a quality position to criticize everyone else for not appreciating quality, but I think often quality brings it upon themselves. Yeah, they sit in their ivory tower and they yell down at everybody: why can't you make it this detailed, why can't you document it like this, all these things you have to do. And the engineers sit there like, yeah, but if we don't ship by next quarter, we're all out of a job anyway. How do we balance these two needs of the world, right? Yeah, exactly. In fact, the manifesto is mostly a nod to the quality team. It is also something for the cross-functional teams to be aware of, because it clarifies their role.
But the third and the fourth principles, redundancy over duplication and conciseness over verbosity, those are also mostly principles that need to be applied by the quality department, because redundancy is not the same thing as duplication, and we often see those two get mixed up. And adding words to your procedure doesn't make the procedure better. Right. In fact, it often makes it more confusing.

Speaker 3 (13:26.094)
Dive into that first one for us a little bit more. Yeah, 100%. So the concept of redundancy, that is a useful risk mitigation approach. The idea is: for failure mode number one, how do we prevent a hazardous situation or harm? Well, we're going to put this redundancy into the design. This also applies to the internal product development processes, right? From the software development lifecycle to the product development process to risk management, you are ultimately designing machines. I often think of a QMS as a product whose users are the internal cross-functional stakeholders. It's just not something we're selling; the clients are internal. And you've got this product, with its users, designed to achieve a certain objective. You know, the risk management process has an objective. The product development process has an objective. Your post-market surveillance process has an objective. Every single one of these processes has an objective. And when you design that, you look at what the failure modes are; most often it's going to be failures of people to do the thing they're assigned. Right. So if that fails, what is our safety net going to be? All of this to prevent the ultimate quality issues from making it to the field. Right. And so there's value in redundancy in a process. Often I have seen this be incorrectly implemented as

Speaker 1 (14:54.552)
duplication of steps in the process without any value add. For example, I'm going to have the same question be asked three times in a form, or in two different forms which are filled out by the same person. If that person is the failure mode, you're not actually, yeah, it's the same answer, right? You're not actually adding any redundancy to your procedure. You're just adding duplication. And what happens is, not only are you not helping, but duplication has a cost, because when you've got the same piece of information spread around your quality management system, it is only a matter of time before you get a misalignment. If people are really good at following procedures, you've just added overhead: the same thing they could be updating once, they're now having to update in three places. That is the best-case scenario. The actual, worst-case scenario that happens is they update one, they forget the other two, someone takes the one that is not updated, someone else takes the one that is updated, you've got misalignment, and you now have to go figure out what went wrong. So that is how I differentiate between redundancy and duplication. That's good. And I think one of the things I have seen in how to make good quality systems and good plans is what I call the facilitator bonus. If you're the doer, it's really hard to grade your own homework. It's really hard to come up with a plan to check yourself. Right. It's really difficult, because for some reason you internalize it. I've seen engineers go both ways: either they're an absolutely focused engineer and they're going to way overkill their quality system.
A lot of duplication, way too far down in the weeds, and they won't get the doing done. Or you get the ones so excited to do that their quality system is: did I do the thing? One checkbox. I did the thing, check. And the whole product's done. And what's really useful is facilitation from an external party. This is why quality consultants, I think, do very, very well in small to medium businesses, right? You shouldn't hire quality internally if you're that small, because you're going to have a really hard time leading yourself to the correct balance,

Speaker 2 (16:58.284)
because you're too close to the problem. Once you get to a larger organization, you can have someone in quality whose job is not individual-contributor engineering. Their job is only to ensure quality, and they're not really doing quality, they're facilitating quality. They're asking the engineers how we should approach this the right way, so that the engineer isn't grading their own homework. They're being coached through and given solutions to quality problems that the quality manager identifies. This is like the fairing that no one from the outside sees. No, 100%. And you know, I think what you're really pointing out there is the concept of independence. And this is a concept that is explicitly stated in FDA guidances and also in standards. There are a whole lot of standards that, in different capacities, whether it's internal auditing of your quality management system or testing of the software, make it understood that in order for a verification, evaluation, or assessment to be effective, the person doing it must have a degree of independence from the work. At the end of the day, where the redundancy comes in here is that you've got a different brain with a different perspective looking at the objective output of the work, versus that exact same brain that created it. Right.

Speaker 1 (18:22.752)
It probably already addressed whatever it could find in the development process anyway. If I coded something, I probably already thought of all the defensive coding lines I need to put in there, or the ways I need to validate the input. I probably already put all that in there. It's a different brain that is going to actually find other stuff and therefore mitigate the overall risk. Yep, and it's such a funny thing that even those that are great at their craft, incredibly good at software or mechanical engineering or electrical engineering, if you've already solved the problem in your head, the way to handle the product so it doesn't break, you overlook that all the time, right? You're never going to break your own prototype. It's hard for you to find ways to break it, because your brain steers around those things, like, unconsciously. But man, the moment my team hands me a prototype, whether it's software to go click through or a mechanical or electrical device, I will find a way to break it, because I'm not the one that designed it. I'm just going to handle it like a typical user would. I don't know, what happens when you hold down these three buttons at the same time? Right? And like, why would you ever do that? And then of course it locks something up, or resets, or whatnot. You're like, yep, this is what users are going to do, because users will do anything. It's super common as well. So for my background, I started a software company that I ran for a number of years.
And even as we were iterating on a mobile application for working with coaches and things like that in team sports, I was the best user of all. I knew how to navigate it perfectly, and sign up, and do all the demos and things like that. Of course this is the easiest platform in the world to use, because I literally drew out every single screen that's on it. And I couldn't see the areas where the user interface wasn't great. It's just really interesting how that applies so much at the actual device level as well, for sure.

Speaker 1 (20:15.598)
100%. And you know, it's not even just having two different brains. I have gone through a protocol the same night maybe three or four times and thought, okay, there's nothing to see here. And the next day I looked at the same thing and found a glaring mistake that I put in there. I'm just like, how did I miss this? So even the same person at different times of the day will look at the same thing differently. Yeah, and we're all human. That's why quality exists, because we actually are all human. We're all going to make small mistakes here and there. On the whole, we mean well. On the whole, we actually do pretty incredible things. But that's why quality exists, why verification and validation exist: to catch the things that you just glossed over, right? You're so busy solving something. That's an important point you bring up, actually. This is, by the way, a fundamental point I always try to drive home in quality conversations: it is really easy in everyday work to have this adversarial feeling toward quality management system people, and also to be scared of admitting mistakes. Now, the thing is, the reason we decided as a society that quality management systems must exist is because we have acknowledged that humans, A, are incomplete, they have limited bandwidth, and B, make mistakes even when they do know better. And the QMS is the ultimate big-picture mitigation for humans. That's what it is. We're trying to proceduralize things, and we're trying to define them, and we're trying to verify what humans do by other humans. That's the ultimate concept here. And so when someone does make a mistake, they shouldn't be afraid to come forward and say, hey, I messed that up.

Speaker 2 (21:59.318)
No, they should run forward with big flashing signs: I screwed this up, can we help fix this? Right, because you're not going to get a slap on the wrist if you're in the right culture. Now, obviously there are poor cultures out there, but in the right QMS culture, we knew you would mess up one day. That's why we designed the system. We understood that everyone here is human and they will make mistakes. And so that's just accepted as the premise of the quality management system in the first place. Yeah, and to that point, the one thing that doesn't make any mistakes at all is, of course, AI. Never. Right? No, this is the fun part, the brand new world that we're all living in, right? Oh yeah. Because what, two years ago AI could barely write a cohesive paragraph? It would bring up a lot of good stuff, but it was like, that sounds weird, that reads weird. Yeah, we talk all the time about how the connected wave in the world of consumer and industrial and things like that is finally making its way over to the medical device space, as well as AI. And I know you have a background in that as well.
I'm curious to pick your brain on what's happening there, what you are seeing, and how it is being implemented.

Speaker 1 (23:07.886)
100%. So, you know, what makes AI implementation in a safety-critical system, and medical devices are safety-critical systems, even more tricky is that there is, as of now, no straightforward, black-and-white method of verification and inspection at the end of the day. You know, I talked about how the big-picture QMS mitigates the risk of human failures through a number of things, including inspection and verification. With AI systems, how do you know that this system is going to work during your test, but also continue to work? With traditional software, we always had the assumption that it is deterministic. And what that meant was, if I design a test in my V&V, after I push this logic to production, it is going to continue to give me that exact same result. For the same input, you'll get the same output every time. With AI, though, all bets are off. There is no way, first of all, you can cover all the inputs. And for the most part, how the model's operating, even the people that create the model see it as a black box, a giant statistical machine. And so because of that, what we're seeing out there is we're starting slow. We're saying, hey, let's focus on the low-risk applications, where if it fails, the severity of the failure isn't high. Right. So what that ends up looking like is, first of all, tools that are just not medical devices. So use of AI in operations, the product development process, the design development process, brainstorming, with humans 100% verifying the output. But also, when you look at the medical device landscape, for the most part,

Speaker 1 (25:05.582)
first of all, there's nothing on the generative AI side of things cleared, because the FDA is still trying to figure out if they want to allow it and how they would allow it. Yeah, it's a big open question: how do you regulate a generative AI product? Same with continuous learning. How do I regulate a continuously learning model? So none of that has been cleared. For the most part, what's been cleared in the higher-risk setting of a medical device has been a niche of narrow-task machine learning models. For example, most of them are in radiology, image processing to identify features or conditions: can you see something that you think doesn't belong here, and then we'll show it to a person. That's still your validation at the end of the day: we're going to show this one to a person. Right? Now, the problem is it might miss some. It might produce false negatives, but the false positives are no big deal; it's how many false negatives it produces that matters. So if I can tune my machine learning model to just be more sensitive, that's the safer side to fail on there, right? And then you've got to show it a bunch of data and see how often it produces a false negative outcome, which is really negative, right, for the user. False positives suck from a process perspective, but they're not that bad for the actual patient. Sure. Well, it also depends on how you define positive and negative. But I think the other problem we're having is we don't even know how to define positive and negative concretely with some of this stuff. We're having to be statistical. We're having to be subjective.
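To illustrate the contrast being described, here's a minimal, hypothetical sketch: deterministic logic can be verified with an exact-match test that passes identically on every run of a given build, while an ML classifier can only be accepted statistically, for example by showing its sensitivity (true positive rate) on held-out, representative, correctly labeled data clears a pre-declared threshold. All names and numbers below are illustrative.

```python
import numpy as np

# Deterministic logic: same input, same output, so one exact
# assertion is a meaningful, repeatable V&V test.
def dose_per_kg(weight_kg: float) -> float:
    return round(weight_kg * 0.5, 2)

assert dose_per_kg(80.0) == 40.0  # holds on every run of this build

# ML logic: no exact assertion covers all inputs, so the acceptance
# criterion is statistical, evaluated on held-out data that must be
# representative of real-world use (and whose labels must be correct).
def sensitivity_at(scores: np.ndarray, labels: np.ndarray, threshold: float) -> float:
    """Fraction of true positives flagged at a given score threshold."""
    flagged = scores >= threshold
    return flagged[labels == 1].mean()

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
# Toy scores: positives tend to score higher, with overlap (imperfect model).
scores = rng.normal(loc=labels * 2.0, scale=1.0)

# Prefer false positives over false negatives: pick a threshold low
# enough that sensitivity clears the pre-declared acceptance criterion.
REQUIRED_SENSITIVITY = 0.95
observed = sensitivity_at(scores, labels, threshold=0.0)
print(f"sensitivity: {observed:.3f}")
assert observed >= REQUIRED_SENSITIVITY, "fails pre-declared acceptance criterion"
```

The pass/fail question moves from "does this input give this output" to "does the error rate on representative data stay inside a pre-declared bound", and that only means something if the held-out data actually resembles the field, which is exactly the representativeness point raised next.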
And whatever I characterize in my test environment as the false positive or false negative, does it actually carry over to the real world? And so a lot of the FDA guidance and all that is to address that question of,

Speaker 1 (26:53.546)
make sure whatever training data you had represents the real world. We have yet to be convinced that this is safe in use; we're still trying to figure that out based on the data we're getting. That's so interesting. So what are you counseling startups that are looking to leverage AI in the regulated space right now? How do you counsel them to move forward? How do they launch their products today? Well, honestly, as with everything QMS, and AI is no exception, I start with common sense and good engineering practices. I'm like, how do you know this thing's actually going to work? Right. And if you actually deep dive enough into that, you will reverse engineer every single GMLP that the FDA has, the good machine learning practices, because ultimately that's how they came into existence in the first place. But, you know, the very first question is data. From a pure quality standpoint and functionality standpoint, the biggest challenge of any machine learning application out there, gen AI or otherwise, is data. What is your training data? What is your model? Can you give me the provenance of the data, where you got it? And how do you know it's representative of your actual real world? Right. And then, have you classified your data correctly? Right. If you're feeding it an "I think this is a positive outcome" or "I think this is a negative outcome" data set, how are you sure that data has been classified correctly by the humans that did it? Right. How much error are you putting into your model by feeding it the wrong labels on certain data sets? We've got a client that's not even in the medical space right now, but in a detection space, is what I'll call it. And they're trying to find signal in noise,

Speaker 2 (28:39.51)
in an incredibly noisy environment. And they're like, we want an AI for this, and it's like, well, you just need to feed it a lot of examples of the thing. But the event they're trying to detect is incredibly expensive to create, so it's going to be super hard to get enough data to train that model. Well, that, but also the examples you're feeding it need to be representative, right? Because it's really easy to overfit a model. Everybody can do it, right? You can just pick a subset of reality, that's where I have data, I'm just going to feed it that, and then use that same data to also do your verification and validation. And next thing you know, you've got this amazing result, because you just overfit the model to your training data, and you pat yourself on the back: I've got a product. And the next thing you know, you're out there in the real world and the results are miserable, right? You're nowhere near what they were in the lab. Exactly, 100%. And so that's the other upfront design assurance activity you can do: the assessment of, is this actually representative? That's so good. Man, that is always such hard stuff to do.

Speaker 3 (29:40.91)
And can you explain to us a little bit around the QMS culture-versus-mandate idea? I thought that was really interesting as well. Oh yeah, that's the principle we missed from the No-BS Manifesto.
Yeah, so I believe the single most important lever one can pull to just make everything go smoothly is the culture in an organization. To me, what culture represents is everyone being aligned on what quality means to them and to that organization, and also what role they particularly play in achieving that. Because once you get there, you don't need somebody looking over your shoulder. You don't need someone catching you for you to just identify the things you need to do. And in fact, all functions are working together. You've got non-quality functions coming to quality with ideas on improving quality. You've got quality cutting fat from the QMS where it's not necessary, because they've identified it that way. And the opposite of that would be what we unfortunately see too often, which is quality as a bunch of procedures and checklists mandated on people that they've just got to follow, without them understanding what the intent and objective is. Even if the objective is there, and there is an actual intent that these processes and checklists are meant to accomplish, when people do not understand the why, they do a poor job of even filling out a checklist. Correct, or they fill it out without the right intent and miss the actual meaning of the words. They're just reading the words and interpreting them however they do that day, right? And I think this goes back to that culture conversation we touched on earlier: the fear of failure can create very toxic quality systems and cultures, right? If you have that engineer that's afraid to have done design work that's going to fail a quality check, they're going to engineer their tests to not fail their work,

Speaker 2 (31:44.768)
not engineer their work to not fail what the test is actually trying to go find. Exactly. Exactly, 100%. And so that is what the principle of culture versus mandate was pointing out, which is: when you're going about doing trainings, when you're going about building your quality management system, you've got to focus on making sure you're building the culture, that everybody's on the same page. I mean, I've shared examples of this before where I've been in an environment where I had software engineers pushing product management to have approved requirements, and they even refused to code because requirements weren't clarified. And on the other hand, I've had environments where, despite the best procedures being released, software engineers pushed to production without a change order.
So those are the two extremes. Obviously most companies fall somewhere in between, but those are the two extremes of appreciation for quality, of having a culture that basically understands what quality is supposed to look like. And it's the: I'm playing the engineer in this game, and I'm going to lean on the quality system because it will save me from being embarrassed later in production. Right? I want to be embarrassed here, in front of my friends, that I made a small and silly mistake, and they're going to help me fix it. Versus: I made a small, silly mistake, and that person's MRI machine no longer exists, right? Some control loop went out of control, or something broke horribly. There are those moments that are not-recoverable amounts of embarrassment or shame. And then there are the other ones, where if you build a great culture, you laugh when you find the bug that you made, or your buddy finds the bug that you made because he was the one doing your code review, as it should be done, right? One of those things like: don't review your own code. That healthy culture actually helps fix all the problems before they get out into the world.

Speaker 1 (33:33.198)
Well, I'd present it as: even if in the real world you don't have a human-harm cost to your failure, just the cost of fixing whatever issue there is to get the product functional again is an order of magnitude greater post-market than it is pre-market. Oh yeah, the compound interest on technical debt really hurts. It does. It does. And it gets to the point, I mean, I've seen it over and over again: people, in the interest of the product roadmap, forgo working on technical debt, and then they just get to a point where you fix one thing, something else breaks. You take twice as long for a standard feature from the roadmap because of poor architecture. And they do this thing where they just throw their hands up and they're like, I don't know, five sprints, just tech debt. Yeah.

Speaker 1 (34:42.51)
So, you know, we're living in times when the FDA has gone through a lot of changes at many levels. I will say, though, one thing that I've seen be kind of a continuous thread in the FDA's thinking has been this willingness to come up with 21st-century regulatory frameworks that support innovation with these new technologies, while they still have the hard job of ensuring public safety with these medical devices that go out there. And so I've seen the willingness on behalf of the agency to try out new regulatory things. The advisory committees that are public-private partnerships, they actually now have one on generative AI. They had a meeting a few months back trying to get feedback from the industry. Things like the PCCP guidance, the predetermined change control plan guidance. These are all, at the end of the day, signs of the FDA's willingness and, you know, active investment in new regulatory frameworks that are fit for new technologies. Obviously you could apply old regulatory frameworks to new technologies; it just ends up being a round-peg-in-a-square-hole kind of situation. With enough time, effort, and energy you can put them together, but that's time, effort, and energy. Yes, exactly. Excess waste. You either compromise on the patient safety aspect of this, or you compromise on the streamlined aspect of this, right? So when I look at what the FDA has been doing, I'm actually pretty optimistic.
You know, we also saw the announcement of them rolling out AI internally to streamline their own operations. They talked about the model called ELSA. They're trying to use that for streamlining reviews, but also streamlining some of their internal operations.

Speaker 1 (36:36.174)
It all points in the right direction. I think the success is going to come down to the quality of execution, and that remains to be seen. I think we're in a good place for AI-enabled devices, though. I agree. I think the fascinating thing I see coming down the pipeline, you know, there's deep tech that's forever in the future, and there's near tech, it's not here yet, but it's coming, is using AI and machine learning to test AI and machine learning that weren't developed together, right? You use one to test the other, and you can throw enough stats against a new model, whatever that model's purpose is, if you can have another model exercise it. Right? Like, you have a natural-language LLM just say a bunch of stuff to your new medical model that is supposed to answer questions about a certain thing, and find out how many times the natural-language model on the other side gets harmful data back. And you ask it, hey, tell me every time you think you hear harmful data from the new AI system, right? Answers that would not be congruent with what you think is correct. And statistically, if it all lines up that it answers safely most of the time, that might be your data set. Because you either have lots of data of okay quality, or you need a little bit of data of incredible quality. Well, it's getting to where we can now have lots of data of okay quality, and the stats will eventually play themselves out. Yeah, yeah, 100%. This AI-versus-AI model has been proposed. I have not yet seen it be used in a medical device product that is cleared. I'm sure we will, because I think there's a lot of value in just including this in the design assurance chain. Ultimately, the amount of testing an AI agent can do is going to be much bigger, but also much different,

Speaker 1 (38:30.242)
than what a human tester can do.

Speaker 1 (38:46.328)
Thank you.
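For readers who want the shape of that AI-versus-AI idea in code, here is a rough sketch under stated assumptions: `generate_probe`, `device_model`, and `judge_is_harmful` are hypothetical stand-ins for real components (for example, LLM API calls), not any named tool from the conversation. One model generates probing inputs, the device model answers, an independent evaluator flags answers it judges harmful, and the acceptance criterion is statistical, like the sensitivity example earlier.

```python
# Hypothetical sketch of AI-versus-AI testing. generate_probe, device_model,
# and judge_is_harmful are stand-ins for real components; none of them are
# real library functions.

def run_adversarial_eval(generate_probe, device_model, judge_is_harmful,
                         n_probes=1000, max_harm_rate=0.001):
    """Probe the device model n_probes times; fail if the judged harm
    rate exceeds the pre-declared acceptance threshold."""
    flagged = []
    for i in range(n_probes):
        question = generate_probe(i)            # adversarial tester model
        answer = device_model(question)         # model under test
        if judge_is_harmful(question, answer):  # independent evaluator model
            flagged.append((question, answer))  # keep evidence for human review
    harm_rate = len(flagged) / n_probes
    return harm_rate <= max_harm_rate, harm_rate, flagged

# Toy stand-ins so the sketch runs end to end:
passed, rate, evidence = run_adversarial_eval(
    generate_probe=lambda i: f"probe question {i}",
    device_model=lambda q: f"canned answer to: {q}",
    judge_is_harmful=lambda q, a: False,  # a real judge would be another model
)
print(passed, rate)  # True 0.0
```

Note the two humans-in-the-loop this still implies: someone has to review the flagged evidence, and someone has to justify that the probe distribution is representative, which is the same provenance question raised earlier about training data.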