Speaker 2 (00:05.496) Welcome to the Hard Tech Podcast. And everybody, welcome back to the Hard Tech Podcast. I'm your host, DeAndre Herikos, here with my co-host Grant Chapman, CEO of Glassboard, a hard tech product development firm right here in Indianapolis. How's it going, everybody? And today we have a super exciting guest, honestly, building a company that I definitely could have used, being in the world of football and getting concussions. Joel Sanderson from OcuLogica is the chief technology officer of the company behind the EyeBOX, an FDA-cleared eye tracking device that objectively assesses concussions without requiring a baseline test, which I think is really interesting. We're going to dive deep into that, but Joel, welcome to the show. Well, thanks for having me on. Appreciate it. Speaker 3 (00:46.958) Yeah, just to get people kicked off here, I tried to give a little bit of an intro, but I would love for you to tell us a little bit about your background, and also what brought you into the space of working on concussion tech. Yeah, my background is mostly in software. I started out, you know, when I was seven, eight years old, watching my dad program and code on the computer at home. It was just always super interesting to me. From there, that passion continued growing, and I just love building technology. So I started out in software, and now at OcuLogica, obviously, it's got hardware, electrical, and software all combined, which I love, that mix there. So I've gotten into a lot more of the full stack of engineering. And so did you originally go to school for, like, the full stack of engineering, all the way from the cloud down to, you know, the firmware and so on? Or did you kind of teach yourself along that journey? A lot of it was self-taught. I did, you know, do computer science and physics in college. Other than that, it was a lot of just learning online, learning from others, experimenting.
Speaker 2 (01:57.922) Yep, a bunch of what we call driving by Braille, just bouncing off those guardrails as you're learning. That's awesome. And so when did you join OcuLogica and start working on this, and how old was the tech in the company? Speaker 1 (02:12.031) The company has been around about 10 years. I joined about seven or eight years ago. Awesome. Can you give us, like, a story arc of, you know, what they started out doing? Was it a university spin-out, or how did the tech start? And where are you guys at today? So our founder, Uzma Samadani, she's a neurosurgeon by trade, and she was trying to figure out how to assess basically how conscious or how responsive people are who can't talk. They're in a hospital bed and they can't answer questions, right? So she thought, how can we assess this objectively? She thought, well, let's put a TV screen in front of them with an eye tracker and see if they can watch the video, see if they're responding to, you know, the changes in the video and whatnot. That's kind of where the idea sparked. And from there, she saw a strong application for assessing concussion. Obviously, you're very awake when you're being assessed for concussion, but the same type of technology, where you watch a video moving around the screen with a high quality eye tracker, was able to differentiate between someone that has a concussion and someone that doesn't. And so, you know, from there we've continued to develop the technology, refine it, and, you know, make it accessible to everybody. Because those first devices she was doing research with, you know, they looked kind of like a Tinkertoy device with these aluminum arms and, like, a computer monitor strapped on. I think there's probably some duct tape on there. Speaker 2 (03:47.032) Definitely some 80/20 aluminum extrusion that's bolted together. No, that's awesome.
And that's kind of where you guys started. And where are you guys at today? Is this like a briefcase-size device that can get brought to the field, or what's the current application? Briefcase-size, interesting way to say it. Yeah, it's about 12 pounds, but it's like a tabletop, kind of all-in-one computer, essentially. If you've seen, like, a kiosk, it's kind of like that. It's got a touchscreen on there, so it's easy to move around, like, within a clinic or a hospital. That particular one's not ready for the sidelines yet, but, you know, maybe someday. Sure. So when it comes to building the EyeBOX, which was your first product, what was that journey as you guys were building through the hardware? I think there's also an AI component as well. And then just, like, the overall process of getting that product through the FDA. Well, the initial FDA clearance was a de novo, so we actually created a new device classification within the FDA. We had to go through a whole de novo submission process to do that, and the pivotal study was actually on that early research device. Once we got that clearance, that's kind of when I joined the company and worked on, you know, productionizing it into a real device. Speaker 1 (05:12.482) We've gone through a lot of iterations of that physical design. Initially, it was this huge wheeled cart that you would push around the hospital. It had an arm on there that would articulate and could be used with people lying down in a hospital bed. We quickly realized that it was just too big. It was too cumbersome to move around, too expensive, too hard to ship, all those things. So we worked on making it smaller and smaller until we got to the device we have today, which you can easily carry around and reposition, you know, put in a case and transport, all that stuff. That's great.
In that, what is the core tech that enabled you to build both the benchtop version that you first started with and the one now? Can we explain or get a little detail on how it actually works? What is the secret sauce you guys uncovered? Well, it's all in the eye tracking. I would say we use a very high quality eye tracking camera, high speed, 500 frames per second. And you need that in order to detect those subtle changes that happen after a concussion or some sort of brain injury. A lot of those things you can't pick up clinically. Like, you can't just say, follow my finger, and watch whether their eyes are following it well enough, because it's all, like, subclinical, really nuanced changes. Speaker 2 (06:38.062) Yeah, so on that I was going to ask, what are you looking for in the biology, right? Is it jitter? Is it lag in the eye tracking? Like, what is the mechanism? When you have a concussion, your eye muscles or nerves just don't work as well? What's the, you know, the tell, I should say? Yeah, it's more complex than that. No two concussions are the same. Every concussion is a little bit different. There's actually, you know, potentially five, six, seven different subtypes or phenotypes of concussion. The industry is still sort of converging around what those definitions are. And, you know, some people don't even have vision changes after concussion, but most people do. Like, 85 percent of concussions do have some sort of vision deficit. A lot of it is just the control of the eyes; your brain and your nervous system just can't control your eyes.
And so those differences are what we're picking up. We have a lot of different metrics that go into the computations. It's really detailed, and probably beyond what I can describe right here, but, you know, there's different types of eye movements, saccades, fixations, all those types of things, and how they change is very nuanced. So we quantify that, put it into metrics, and use that for the, you know, the assessment. No, that's great. So you basically have a matrix of different things you can look for. One of them might be jitter, one of them might be lag, one of them might be fixation, one of them might be, you know, random movement. And that's coupled with the video that you're playing, right? Because you're not just tracking eyes, you're giving them a stimulus and recording the response to that. So you guys had to not only develop what you're measuring, but the thing that they track while you measure it. Speaker 1 (08:25.326) Yeah. Yeah. So the stimulus is about 220 seconds long, and it's an entertaining video that moves around the perimeter of the screen. And yeah, they watch that. It's funny, because people love taking the test, because the videos are fun. I think something that's also interesting about your guys' product: typically, when I was playing football, before the season would even start, we'd take a baseline test. That way, if we ever had an event, the physical therapists or trainers would come around and say, hey, all you guys go sit down and take this test. Basically, if you score differently after you take a big hit in a game, that's the leading indicator that you're probably concussed. Now, you guys had a different approach, where you actually don't need a baseline to make the assessment. I'm curious, that's pretty different. So how did you guys get past that protocol?
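As a sidebar for readers: the kind of eye-movement metrics Joel alludes to (separating fixations from saccades, then quantifying how they change) are commonly derived from raw gaze samples with a velocity threshold. This is a generic, illustrative sketch of that idea, not OcuLogica's proprietary algorithm; the 500 Hz rate matches the camera speed mentioned above, but the 30 deg/s threshold and the synthetic data are assumptions.

```python
import numpy as np

def classify_gaze(x_deg, y_deg, hz=500.0, saccade_thresh=30.0):
    """Label each gaze sample as saccade (True) or fixation (False)
    using a simple velocity threshold (the classic I-VT idea)."""
    vx = np.gradient(x_deg) * hz          # horizontal velocity, deg/s
    vy = np.gradient(y_deg) * hz          # vertical velocity, deg/s
    speed = np.hypot(vx, vy)
    return speed > saccade_thresh, speed

# Half a second of synthetic 500 Hz data: a steady fixation with tiny
# jitter, then an abrupt 10-degree jump (a saccade).
rng = np.random.default_rng(0)
n = 250
x = np.where(np.arange(n) < 200, 0.0, 10.0) + 0.005 * rng.standard_normal(n)
y = np.zeros(n)

is_saccade, speed = classify_gaze(x, y)
print("saccade samples:", int(is_saccade.sum()))
print("peak eye speed (deg/s):", round(float(speed.max())))
```

A real assessment would aggregate many such per-sample labels into summary metrics (fixation stability, saccade velocity profiles, tracking error against the stimulus), which is where the nuance Joel describes lives.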
And what allows you guys to do that without the baseline? Yeah, that's a really big, important thing for technology like this, because, you know, in an emergency room, if you come in after, I don't know, a car accident or a fall or even a sports injury, a lot of times you're not going to have that baseline. You guys were fortunate to do that baseline testing, but most concussions don't have a baseline score. So the way we, you know, created it and trained the prediction algorithms and went through our validation is we just didn't use a baseline. And essentially, our pivotal study, you know, the one we got our original FDA clearance on, was done in emergency rooms. I think we had five different sites, and four of them were emergency rooms. So it was a perfect population to validate in, where you're not going to have a baseline. We've also got a big age range; our clearance is from five to 67 years of age, which is really, really helpful for a tool like this, so that it can be used on almost anybody. Speaker 2 (10:22.488) No, that's awesome. I think the neat thing about that narrative is that you purposely went out to a place where you wouldn't have any training data, you didn't have a baseline to go against, and you were still able to use the clinical diagnoses the doctors were determining as the truth source, is the word I'll use, along the way. That's great. And so, you know, through developing this, what was the biggest challenge along the way? You have really complex hardware that has to all work together at fast frame rates, with decent amounts of data that you have to capture at one point. There's the software side, which is the, you know, algorithm that's detecting the thing and also playing the video, generating that video. And then there's the clinical side, and concussion diagnosis is a weird, hard place in medicine. And you have to blend all three.
Which one of those do you think was the most challenging for the journey? Geez. I mean, I think, you know, we always need more data. So that's a challenge, right? Getting clean data that you can apply, you know, generally is always challenging, and we have great research partners that help with that. The other thing is there's just a lot of noise in eye tracking data, so it's actually really difficult to parse out, you know, what are those nuanced changes between a control group and your case group, for example. So we've got a data science team that looks at that from every angle and tries to figure out, you know, what are the actual differences and what's noise. That's probably the most challenging thing from a development perspective, just wrangling that data in a really good way. The other thing from a technical perspective was, you know, just getting the optics right. Speaker 1 (12:13.806) Like I said, we've got a very high quality camera. It's got a low noise sensor, high frame rate. The optical lens is tuned for the infrared illumination that we use. And then the space constraints, too. It's sort of easy to get good optics when you have a lot of space for a big lens and all that stuff. But when you've got to compact that down into something really small, that's more difficult. And make it robust enough to survive shipping and setup, and stay in focus. Oh yes. No one realizes how sensitive optics are to focus. Yep, yeah, we've got a whole process around making sure the camera is in perfect focus, and we've got an algorithm that measures how in focus it is and whether it's focused at the right point. Yep, we had to put a lot of effort into that. Joel, I'm curious as well, in your experience building this product: it's a de novo, it's brand new tech, with a lot of different elements that are self-taught as well. And you're in the world of concussions, right?
Which is kind of a unique space to be in, obviously, with the movie Concussion and things like that. What about the product, or the company, or the space, or maybe all of them, got you? And what are you most passionate about within the space when it comes to the technology? Um, well, OcuLogica is in a really great position for me, a really good fit, because it's technology and it's growing a business and it's helping people. And so, you know, I've always sort of sought out new opportunities for creating a business and helping people in some way, and, along with that, technical challenges. So that's what's really, you know, attractive about OcuLogica for me. Speaker 1 (14:04.108) I love startup culture and I love building things. That's awesome. And on the building front, it sounds like you guys started really big, right? You had this really big product, and it seems like it's been a race to get this thing smaller, more dynamic, more able to be used on the field or in different scenarios. So what's the vision there, and the journey to continue to make this product more accessible to folks? Yeah, I mean, ultimately, the vision is to get this thing in every emergency room, every urgent care, every sports team, so that everyone that gets some sort of blow to the head can get assessed and know whether or not they have a concussion. They don't have to ask themselves or wonder, did I get a concussion or did I not? How should I treat this concussion? What should I do? Should I go back to playing, or should I go back to work? So that's our vision: to basically answer those questions for everybody that has some sort of head injury. And then, where are you guys at in that journey? You know, how portable is it? Who can use it and buy it today? And who are you trying to target in the next generation? Like, where are you at in that story arc?
Speaker 1 (15:17.91) Yeah, so right now the device is, like you said, about 12 pounds. So it's not super portable. It's not something you would take down to the field, but you could use it in, like, a locker room or a physical trainer's office or something like that, if they have the right training to use it and assess concussion. One thing right now is it's an aid in the diagnosis of concussion. So, you know, it's not a black and white answer. It's a data point that a physician can use to help assess whether or not this person actually has a concussion. We are aiming to make that better and better, so that people can have a lot more confidence in the answer of, do I have a concussion? Because there's a lot of gray area in how everyone in the industry assesses concussion. No one can actually even agree on what the definition of a true concussion is, or what that gold standard is. So that's really difficult, but we are making a lot of strides in that direction. Ultimately, we'd like it to be something that you can just carry around, down to the field, down to wherever, and assess people wherever they are. That's awesome. And what's the barrier technically, right? It's 12 pounds today. Let's call it semi-briefcase-size, right? It'll sit on a table and flip open. Is it power? Is it the optics? Is it just truly engineering the utilitarian version of the product that is small enough? Or is there one piece of the tech that's hard to shrink? The optics, the eye tracking, is really difficult to shrink and still maintain high quality. So we are working on a handheld version, and it's actually for cannabis impairment assessment on the roadside. It uses very similar technology and looks like binoculars, so it's a lot smaller. The challenge with getting the high quality data is, once you make those sensors a lot smaller, there's just a lot more noise. Speaker 1 (17:22.753) We tried out some eyeglasses that have sensors built into them.
It wasn't high enough quality data for what we needed to do. So we started engineering our own. It uses pretty similar technology, but it's all made a lot more compact. So that's certainly very challenging: to get it smaller and smaller while maintaining that high quality data. So, yeah, one of the questions I honestly have for you, Grant, being on the product development side, is what would be the process that you would go through to execute on that? What's the best practice in your experience working on a product like that? Yeah, I mean, the fun part that you guys have is, not only is it, I'm sure, some amount power-hungry, so batteries are always a size constraint, but I think the big one is that three-legged stool of cameras. Right? You have your sensor size, where the bigger the sensor, both in physical size per pixel and in number of pixels, can give you, you know, better low-light performance and faster response times, and you can synchronize them, things like that. Then you've got the optics, how big they physically are, to absorb X amount of light, right? There's only so much, you know, light intensity per square meter. So the bigger your optics, the more light you get, the more data you can get without noise. And then the last one is frame rate. Can I have enough compute on a mobile device that is going to give me that frame rate you were saying you need? And those are all the variables that go into a big, you know, blender. You've kind of got to figure out the right balance so the smoothie tastes good. Exactly. Yeah. And so I think that's the fun part. Yeah. Speaker 2 (18:57.71) The neat thing is you guys have enough data and are old enough as a company and a technology.
You know what you need, you know the data set that you need, and you can go iterate on frame rate, sensor size, light amount, optic size on a benchtop, right? Have it all on 80/20 on a bench and continue to iterate until you're as small as you want to be but still get that data. And then you go build a product around those core technological pillars, and everything else is just a wrapper, right? A lithium battery, a plastic case, an IR light, in the shape of the binoculars that you end up with. That's the fever dream. Was that a pretty similar path that you guys took to get there? Yeah, absolutely. Yep. There are definitely trade-offs when you make it more compact and smaller, and so we've been exploring with the frame rate in particular. On those embedded systems, you just don't have the power to do 500 frames per second at high resolutions like that. So we had to do some technical tricks with cropping and regions of interest and all those things to really get down there. Yeah, that's awesome. There's so many tricks of the trade: you can do it in optics, you can do it in processing, you can do it in storage, you can do it post-process, right? Are we doing interpolation to try and, you know, regain some of the data we lost? It's always, where do I want my complexity? And there's no clear answer until you start poking around and find out what's easy and hard to manipulate. Mm-hmm. Yep. Even in the software that processes those images, there's trade-offs. You know, you can get your noise really far down if you do a lot of post-processing and whatnot, but then you can't run it in real time. So there's a lot of trade-offs, and we've spent a lot of time tuning that to make it reliable, fast, and easy to use. Speaker 3 (20:45.774) The next question I have for you, Joel, is more around the company size. Like, how big is the company now?
We have about 10 employees, and we're pretty distributed around the country. I'm in Minnesota, and we do most of our, I guess, hard tech engineering here in Minnesota, because that's just so much easier in person. You know, you can work with the mechanical engineers and electrical engineers, 3D print things, look at it together, put stuff together in the garage, assemble it, see how it works, iterate on that. That's super hard to do when you're distributed. But on the other hand, software and data and stuff like that, it's much easier. It's always hard when you've got the electrical engineer, the embedded software engineer, and the mechanical engineer in the room, all pointing at each other: this problem's yours, right? The sensor's misaligned. No, the software's not reading it right. The electrical engineer's like, no, the data's on the lines. And, like, which one of us has got to go fix this thing? Nope, so much easier in person. Speaker 2 (21:43.694) How do you guys make your devices right now? Do you use a contract manufacturer? Do you get some parts subbed out and then do final assembly and ship? You know, how have you guys structured the delivery of what I'll call the physical good? Yeah, we've got a contract manufacturer here in Minnesota. And a lot of it is, you know, subassemblies; like, the computer comes from the computer manufacturer, so we don't build the computer itself. But then they do assembly, inspection, testing, and then they deliver to us. We do another inspection and then eventually ship it to the customer. That's awesome. So you guys are the ones doing final inspection and ship. Like, you guys are touching the product last; it's not a drop ship from a CM. Yep. No, that's awesome. And then, this is just me asking nerdy questions now: what computer-based system are you running on?
If you can share that, like, is it a Linux operating system? Is it that level? And also, what's the hardware you're using? So, like I mentioned, we've got the two different models that we're working on right now. The EyeBOX uses an Intel processor and runs Windows IoT as the OS. That's been a pretty good platform because, you know, we run most of the application in .NET, and it just works pretty well. The handheld one I was talking about has a Yocto Linux build that we're running, and that's got an ARM i.MX 8 processor in there. Speaker 2 (23:12.076) Yep, we've done a ton on the i.MX series and Yocto. It's so powerful the way you can basically revision it and version it yourself and control some of that, and the FDA really likes it when you're in control of the system. That's what we've found. How has Windows IoT been from a compliance and regulatory standpoint? That's just so fascinating to me, because that's an operating system that Microsoft is going to publish updates for way more frequently. How do you go through that upgrade path and revision, and what's controlled and not controlled in that process for you guys? I would say it hasn't really been an issue at all. It's worked out pretty well. Yeah, we just install updates as needed, and our software updates are controlled through our own, you know, deployment process. So we've got over-the-air software updates that install, but the customer can choose when to do it, and it's seamless for them. Awesome. So you guys have your software checkouts and your quality system, so that before you publish an update, you've run some amount of regression testing to prove that it's still going to maintain the functionality the FDA cares about, and you make sure you record that you tested it. Yeah, we've got a rigorous QA process, the whole QMS, everything. Yeah.
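A sidebar on the focus-scoring algorithm Joel mentioned earlier: a common generic sharpness metric (not necessarily the one the EyeBOX uses) is the variance of the image Laplacian. An in-focus image has strong local contrast, so the Laplacian response varies widely; defocus blurs that away. A minimal NumPy sketch, with a synthetic checkerboard standing in for a real camera frame:

```python
import numpy as np

def focus_score(img):
    """Variance-of-Laplacian focus metric: sharper images have more
    high-frequency content, so the Laplacian response varies more."""
    img = img.astype(float)
    # 4-neighbor discrete Laplacian via shifted views (no SciPy needed)
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return lap.var()

def blur(img):
    """Crude 3x3 box blur to simulate defocus."""
    img = img.astype(float)
    out = img.copy()
    h, w = img.shape
    out[1:-1, 1:-1] = sum(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

# A sharp checkerboard should score far higher than its defocused copy.
sharp = (np.indices((64, 64)).sum(axis=0) % 2) * 255.0
print("sharp scores higher:", focus_score(sharp) > focus_score(blur(sharp)))
```

A production autofocus check could compute a score like this over a calibration target and also verify the score peaks at the intended focal distance, which matches the "is it focused at the right point" part of Joel's description.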
No, I love that narrative. If you haven't done a regulated product, everyone thinks it's really scary, and it's not that scary. You've just got to write it all down: I did this thing on this date and it sucked, we scrapped that version, but I did it on this date. Versus everyone thinking, oh, the FDA's going to think we're bad at our jobs if we show all these bad tests. It's like, no, they want to see all the bad test data that led you to the good test data. Yeah, that narrative is, you know, eight-tenths of the battle. Speaker 1 (24:31.982) Write down everything. Speaker 1 (24:48.778) Exactly. Yeah. And for cybersecurity testing, for example, they want to see what bugs you fixed. If you didn't find anything, you probably didn't look hard enough, right? So they love to see that. It is that double-edged sword balance: if you show that you fixed too many bugs, it's like, man, you should have looked at this deeper the first time. But you're right, if you show zero bugs found, you know, month over month or year over year, it means you're probably not looking hard enough. Absolutely. Well, Joel, one of the last questions I always love to ask on the show, assuming that we have folks out here that are looking to get into the medical device space: maybe they're looking at a de novo, they're venturing into the world of hard tech, et cetera. What's some advice that you would have for founders in the medical hardware space that is reflective of your experience so far? Speaker 1 (25:36.401) Definitely, you know, customer testing. Making sure that the device, your product, is going to be useful for them. You know, we've gone through a lot of iterations. Like I said, we had that big device, did some testing, and it turns out it was not the right fit. We could have saved money and time by, you know, maybe doing a prototype first.
One of the things that I like to do from an engineering perspective is, I get this pink foam and cut out the shape of the actual device. It's just this hot-glue-and-pink-foam model, I guess. And I show it to people and say, hey, what do you think of this? How would you use it? How would you interact with it? So a lot of customer testing, and that's maybe on the usability side. But even on the clinical side, you know, if we're working on a new feature or a new model or something, we ask our researchers, our partners, you know, the industry: is this useful? If it has this performance level and it correlates with this clinical outcome, is that useful to you or is it not? And if it's not useful, then we'll iterate on that until we find something that is. So customer feedback, and just making sure that what you're building will actually solve the customer's needs, the market's need. I love the pink foam. We have the joke in the office that CAD, cardboard-aided design, is the very first step. Like duct tape and cardboard. Is this the right shape for them? Could you even put your head in this thing and hold still? You might nail the eye tracking and the processing and the battery life, but if the user can't hold their head straight in it, because it's the wrong angle or not adjustable enough, you've lost the market. And again, I think that's just such a powerful lesson for early stage founders. Everyone wants to race to the product and the technology, like, get to the finish line. I'm like, no, go show really early stage mockups to everyone you can.
Figma and FigJam for, like, UX click-throughs, where you run the app in the background and the user literally tells you what button they'd click on, and you scroll to the Figma screen that is the screen you were mocking up, you know, in PowerPoint or Figma. And for physical stuff, literally have the unit and say, hey, hit that button. It's not going to do anything, but then... Speaker 2 (27:50.604) you flip to the screen, right? You hold up the thing: like, this is the screen that would pop up, is this useful? And running those make-believe user stories, everyone feels silly, like we're all back in kindergarten playing house and playing kitchen in the Play Place. But it's really important to get that feedback of, I'd hate it if that's the thing that came up next, or, I can't hold my head here. Yeah, it helps accelerate the velocity a lot of times, because you don't want to build your whole product and then find there's a major thing wrong, right? If you iterate quickly on those design things, you can actually get the product out the door more quickly. And to that point, I mean, I really do think that you're exactly right. You need to talk to customers as frequently and as often as possible, regardless of your size as well; I think that can also be stated, too. Your ability to have conversations with customers does a number of things. One, if you're doing hardware, you get to know if it's going to be the right shape and feel, the usability of the product, whether it's software or hardware. So that's pretty consistent across the board. Secondly, obviously, having conversations with customers gives you more confidence and creates a better culture on the team around what we're messaging and how we're feeling about the product.
All of a sudden, investor conversations become a bit easier because you can show some early stage validation. In the world of product, creating things that people want and finding this mystical land of product-market fit is as simple as talking to your customers as often as possible. And I think it's just a great point for founders that are starting, or even if you're in growth stage, if you're SMB or even enterprise. I mean, still talking to your customers, the people who are really using it, is going to get you that much closer to success. Speaker 1 (29:27.148) One of the things that we do on that front is one of our engineers joins every demo. Every demo that our sales team does, there's an engineer on that call to hear firsthand from those customers, you know, what are their needs? How are they using it? What do they need to do? And that's helped create a lot of visibility for us. And I'm sure it fosters creativity and problem solving, because engineers want to make puzzles fit together. They're sitting there watching a user trying to use a thing and seeing where the puzzle piece doesn't fit. And it'll spark ideas in real time for them, which wouldn't be translated if just the sales rep came back home and said, well, this didn't work well. The engineers can ask for a bunch of detail of why not, and the sales rep will be like, well, you just need to make it better. And better is never good feedback. Yes. Speaker 1 (30:12.366) Yep, exactly. Awesome. Joel, thank you so much for being on the show. Everybody, this is the Hard Tech Podcast. I'm your host, DeAndre Herikos, with my co-host Grant Chapman. Joel, thanks for hanging out with us and spending the time. Absolutely, appreciate it. Speaker 3 (30:27.948) And tune in next week. Thank you, guys.