Speaker 3 (00:00.322) Welcome to the first season of the Hard Tech podcast. The Hard Tech podcast is about bringing together innovators, builders, investors and thought leaders all in the world of hard tech. In my background and starting software companies that I've scaled and exited, there's so much content out there for folks building in the software space and not as much in the hardware space. And that's exactly why the Hard Tech podcast exists. This guest is actually kind of interesting because while we at Glassboard focus on the bits or the Adams, engineering focus on the bits. It's a really in-depth conversation around what it means to have a connected device, working with both a software engineering team and a hardware engineering team to create the desired result and the outcome for a product that you're looking for. It's a fantastic conversation. Troy Kelly from eGeneric. We hope you enjoy this one. Thanks for having me. I'm just going to say upfront that we mentioned this earlier. I'm a newbie. This is the first time I've been in a podcast room like this with cameras and whatnot. But like we said, we're just here to chat. see what happens. But thanks for having me. Sure thing. Are there any podcasts that you listen to frequently? Yeah, there's quite a few actually. I'm a big fan. I've done very little like Microsoft development, but I love .NET Rocks. So that's a great one. I also became a pilot a couple of years ago. So there's several aviation podcasts that I listen to and really enjoy. Speaker 2 (01:32.162) The Hanselman's minutes, I think the guy from Microsoft, I think he's still with Microsoft actually. Scott Hanselman. Yeah, yeah, it's a tech podcast. good tech one. Have you ever listened to two Bob's? So two Bob's is for service firms. like what we, what you and I do. So Roman from SEP is the one who got me onto it. So got a shout out Roman giving credit for turning me onto it. They got two guys in the podcast are actually like marketing ad agencies, but same business model, right? We sell humans creative time and money, right? We exchange those things for each other. And the two Bob's podcast breaks down like certain really deep truths about Okay. Speaker 1 (02:11.818) running a service firm that's totally different than another industry. And half the time you're listening to like, yeah, I solved that problem five years ago. And then half the time you're listening, like I'm in this photo and I don't like it. And they're about to tell me what I've been doing wrong for 10 years. it's a really good one. It cuts really deep when it cuts. Like, Ooh, I should have, I should have seen this coming or I should have listened to this podcast three years ago and we would have avoided that one. Okay, I mean, I've already got out of this a ton of value. I'm going to check out two bobs. Yeah. Well, thanks so much for tuning in. So in this episode, we want to dive into AI and how AI is both having implications on the silicon side and also on the software side. I mean, there's a range of topics between, you know, how are enterprises introducing AI? And I think 12 months ago there was like a huge conversation around, well, it's hard for enterprises to implement AI because there's like this data problem. How do we solve the data problem in order to like implement AI? Troy, with your background and your experience at e-engineering, how have you seen the evolution even back in 2022? ChatGBT gets announced, everyone's logging in, it's changing the world. then fast forward X amount of months later, years later, how have you seen that evolve? where is the state of maybe enterprise AI, but more how you're seeing it apply to even what you guys are doing on a day-to-day basis? Yeah, I mean it's it's absolutely a game changer from a software development perspective But it is a tool that should be properly wielded You know, we I've spent a lot of time talking to our folks and kind of helping The folks at e-engineering like really appreciate like what the capabilities of the tools are while also reinforcing the fact that Speaker 2 (03:57.236) we're still the software development professionals, right? And this isn't quite garbage in garbage out yet. You can put great prompts in and still get garbage out. This is the weird effect of AI that's different than other tools that you and I have gained over the years. A lot of like simulation tools for garbage in garbage out, both for you guys, like software self-test and things like that. And for me and like FEA or CFD, you can build yourself these great simulations or tools, but if you feed a bad data or write a bad test, it's going to give you bad results and that's on you. Very rarely would you feed it the perfect set of examples in and get labs. Speaker 1 (04:31.918) bad things out. Right. And now with AI, is totally not the case. You have to check all the results in a way that is you are the professional standing behind the work at the end of the day. Yes, absolutely. You are the gatekeeper. The phrase that I keep that I'm saying more often is you're the boss, right? Like when you're working with AI, you're the boss. You're the gatekeeper. You need to make sure that the quality is there. I remember when I was first kind of experimenting, I was generating some, just some database access code, right? Like give me some Java code that will store this, you know, in the database. one particular order because that database wants this order versus the data where you're getting it from. the code was the poster child for SQL injection attack code, right? It was, it worked, but it was bad. was, and so, and things have gotten like a lot better, but we still have to have the ability to look for those things. And you know, these tools, I laugh because like Replet is one of these tools where but it was going to do bad things. Speaker 2 (05:43.478) you have people and I've so there's this vibe coding thing right like this whole idea of writing code without really knowing what's going on behind the scenes. It's so dangerous. Like for little fun things like games and stuff like that. Okay. Or the first MVP. Do I even want this software to exist? Right. That's a great use of VypCoding. Go use a no code, low code platform to get your idea in front of humans. But don't scale with that. Right. Right. Yeah. And it's kind of, we were talking earlier about somebody generating a, was it a bill of materials or something? And it looks great, you know, from the outside, but then you bring a subject matter expert in and they're like, what'd you say? This is, this is using a well pump. We're using an RV well pump for like this like really small volume medical device. Like we're going to do drug delivery at record rates. Like we're going to replace all of your red blood cells with insulin right now. Right. That was not the product we're using for just that was a good example. I thought it was a good fictitious one. But so, so interesting how, how confident these users of AI are coming to us with that they're farther along than their predecessors that our clients that are being on board with us have ever been. But this Dunning-Kruger effect of like No, no, that confidence is not well suited. Like you have 80 % more than your other, you know, two years ago you would have had, but the 20 % that you have that is bad is poison, it's poisoning the well. Right? You have to accept that most of this is subject to change because it was hallucinated. Speaker 3 (07:17.932) And I think in addition to that, it's also about knowing they might have 80 percent more, but how much do they know they have 80 percent more? It's like the text might be on the paper, but I have no idea what the text actually says. So in reality, they're about in the same spot as they were otherwise. It just looks like they're in a different place. They don't really know how to differentiate between, this I'll never forget. So my first company was a SaaS startup. And back in twenty twenty two, you we had our cheer boards all set up and whatnot. And. chat GPT comes out and I'm for the first time ever able to generate what looks like code. And so I have no access to our database. have no clue. We're on iOS, Android and a web app. And I'm thinking to myself, this is the key, right? So I'm working with a firm and I just remember going into chat GPT and just like inserting whatever the title of the JIRA ticket was into the prompt and saying, generate the code. And I just added it to all the JIRA tickets that the firm was. like working on. I'll never forget the CEO was like one of my friends' name, Sean. He like immediately slacks me. He's like delete everything. Like this means nothing. Yes. Yeah. So I think some of the advances like as particularly as we get some really cool agentic tools that a lot, I was actually just playing with this over the weekend with GitHub co-pilot and the agent mode and the ability to. give very detailed requirements, which I think that's where the skill comes, right? When you're writing these very detailed prompts saying, here's exactly how I want this code to look. Here's the libraries I want to use. Here's things that I want to do and what I don't want to do. Speaker 1 (08:53.716) Here's the data flow I need. Like it has to exist here and then be checked and then go to this place that's secure or safe or whatnot. And the agents, right, like this transition to agents where it can produce code in bulk, right? So kind of across the layers of your application and it can run commands and that sort of thing. But there's still a very valuable sort of human in the loop where you've got to sit there and you've got to use your brain. Exactly. Like, does this make sense? Drive the thing. Speaker 2 (09:26.534) And that kind of transitions over to where like we're thinking about how we help organizations use AI in solutions is that we were talking about this before, like we're super conservative about this. Like someone comes and says, I want to use AI that's going to be customer facing in a way that's not like a customer service bot or something like that. to search your repository and trying to find the section for your content. No, no, no, no. We're looking for much more practical use cases. So we're going to be working with one client where they get a bill of materials, for example, for shipping stuff, that sort of thing. And a bill of materials is very similar, their different vendors transmit them in different forms. Right. And one of them is like an actual scanned in paper PDF that you have to OCR in reality. Yeah. And they're like, takes so much time to like process these. Can we have an AI properly identify the document type and then help extract some data from this? I'm like, yes, that's a great one. But a human has to be there to verify the work. And that's where like, I see a lot of the gains is if there's something painful that you do manually and AI can help get you most of the way there. Speaker 2 (10:50.252) And now you're spending a fraction of the time reviewing the work instead of doing the grunt work. You win. Well, and ironically, if you have a human in the loop going, yes, no, yes, no, yes, no. And you're recording that data. You can use it to train your models in the future. This is the thing that like no one understands because there's two different kinds of AI. There's like LLMs that are a big statistical model of how language works. More or less, you you boil it down to the dumbest answer. It's a statistical bell curve of what word is the most likely one to come next. And then there's machine learning, which is totally different. And it's a big neural network of taking a bunch of variable inputs and predicting them out. based on prior data sets. If you zipper those together, you can make really powerful AI tools, but you need data to train the other one. So for us in like battery modeling, we're trying to model how a battery pack may respond to an input in an electric vehicle. A new chemistry comes along and we have all this drive data from how a traditional battery pack would respond, but this new chemistry has a different voltage curve or different response to power or braking. We only need a little bit of the data to say, hey, the new model works like this, go put it in the old neural net and it'll reconfigure it. We can get an estimate out. So that was something we were doing. This is what, 2013 and 2014 when we first started. So that was AI a decade ago, right? And that's the limits that it had. Now you could probably blend that together and tell an LLM to go write code, to go look in my database and pull all the data and make a new neural net and output it here. If you had to explain what, so agentic AI. I kind of know what it is. I don't exactly know what it is. If you had to kind of like describe it, like what, how would you describe it to someone? How does it work differently than maybe like a typical L and what, or a chat GPT prompt, for example. Speaker 2 (12:30.6) Yeah, for me, this is hilarious. there's there's another the guy who like created or co created Django, the Django framework, Simon Willis has excellent articles out there. But he wrote one, I think, where he went on a little bit of a rant about all the permutations of what it means to for for something to be an agent, right. And so I think there's a lot of different Terminology. Definitions, you know, for that. For me, it's when the agent is something that can kind of take actions on your behalf, hopefully, with a human in the loop to verify, but it's actually able to. So in the case of writing software, it's, okay, this is the command that you need to install some Python packages, for example. Do you want to run this? I look at the command. Yeah, that looks right. Okay. Run it. Or I'm working in the sandbox and I'm going to give you yes to everything. Don't ask me, but this is a very contained sandbox. Anything might crash itself, but we might let it dry. Yes. And that's the sandbox comment is awesome because I think for businesses that are working on agents, like, you know, like we have things like a cloud computer use and, whatever open AI's version that they, they call it where it can actually, it takes a series of snapshots of the screen and it'll actually move the mouse, run the keyboard. I'm like, please. Speaker 2 (14:04.238) that you really need a sandbox if you're going to play with that technology that is really locked down. And I know it's going to happen if it already hasn't where somebody is going to turn this loose on a desktop with a browser. They're going to go home for the evening and this thing is going to hallucinate and some weird thing. and go buy Bitcoin on the owner's behalf through their password manager and to chase wire transfer and imitate them for email to their banker. They can use their voice and language to trick the banker into giving the wire done. And the AI is like, well, this is what I was told to do. I already had to call my mom and be like, I'm me. None of the family is ever going to call you and ask you for money over the phone. Like you will get a call that sounds exactly like me. Do not like tell, tell them you're to call me back or whatever, because we're there. Like even this podcast, there's, there's going to be enough material to go run this through. What is it? 11 labs or whatever, know, however many seconds. And then now we're, our voices can be. Completely computer generated. That's how I found out my grandmother thought that I was the favorite grandson. Someone called her and said, is your favorite grandson and I need help. And she's like, Grant? And then she wired $3,000 to Mexico because she thought I was stuck in jail down there. But it was only three grand for me to find out that I was the favorite for her whole life. So I think it was money well spent. But no, the sandboxing versus the other side of that coin is just the illicit use of AI. Like what you just brought up is like, AI is not only enabling those of us trying to make the world cooler, better, faster, et cetera. It's enabling the negative side of the world of scammers or people trying to take advantage of someone else or time the markets faster than is allowed. And this is a huge gray area for everybody. don't think anyone's actively trying to legislate AI yet, because it's a tool, not a person. And it's going to be a weird outcome when the dust all settles here. Speaker 3 (16:05.58) Have you seen any like evolution, like security with AI? Um, yeah, I mean, it's a moving target, right? Like just the, the concept of prompt injection. And, um, somebody brought this up just the other day when I was talking to them, the, you know, the, these models go out and they train on websites and they were talking about how, it was Mike. It was Mike Kelly from Debt Town. Hey Mike. Uh, and, and so he, he was talking about how they're creating, they're putting data. but in the, the HTML that you can't see, right? But it's, it's nefarious type stuff, right? And, and the model doesn't. know that it's not human readable and that it's nefarious. Yeah. And so it's, it's, it is such a rapidly moving target. Like no one has a perfect answer in terms of how to prevent prompt injection and that sort of thing. And I think that's where, like, I feel like we have, some good, some good tools to bring to the shop for, some of this stuff, because we have, we, do a lot of testing and the whole idea of doing like evals. Speaker 2 (17:18.015) on something that is non-deterministic. I think I was listening to a podcast, I think it was .NET Rocks actually, where they were talking about, had someone on there talking about where you really need to write tests for a lot of these solutions, but they're not always gonna pass 100 % because of the non-deterministic nature of the models. But you need to understand what the risk profile is by actually testing the thing. over and over and over to really get a standard deviation. does that, does this, you know, make you feel good? And there is also that like standard deviation of like what can happen and then what controls can you lock down? This is that the two sides of software and this happens in firmware and in higher level software is that certain things might not be deterministic, depend on all the variable inputs that may get her all this stuff. Right. Yeah. Speaker 1 (18:07.662) but you can always limit what it can access, right? And this is what you and I were talking about. When you develop in a sandbox, you also need to develop production code to sandbox itself in some method. What can write to the database versus what can read to it? Not every user needs to do both, right? And controlling that. This is just like IP. Not everything can be protected by a patent. Sometimes you just got to keep it secret. And that's why, you know, in software you have to organize how things can access and change things. And if you report wrong data, that usually doesn't go really bad all the time. but being able to break someone else's data that you're collecting or managing is what gets really tricky. Yeah, that's so one of the features that's really cool that a lot of the model support now is function calling and this is kind of the whole It's there. It's become a little bit more standardized with something called MCP or the model context protocol Which is really cool allows the LLM to basically format Output that turns into a call to another system. But grant just like you said if you have functions that are accessing a database you you want to make sure that those are read only when as appropriate so that the model doesn't go off and do right start distra like, yes. things. Speaker 1 (19:21.39) Have you seen Silicon Valley that they look how fast it's deleting episode? Silicon Valley and HBO show is that perhaps they made a better compression algorithm and they released it to one of their users and there was a bug in it. Instead of compressing it, it deleted everything. They deleted things on their servers at such a pace, they set a record. And that's all I can think of with these AI tools. Like, look how fast it's deleting all of my data. I'll never recover again. And I think this is going back to the security, like the ability to have models that are going to create permutations of things, that's going to be a real thing. And I think even now, I have to believe that things like reCAPTCHA and those things that detect kind of these human movements, like the... It's a, they're on, they're on a limited time. 100 % are, yeah. I would agree. think I was reading some article recently where it's like keeping the human in the loop, but it's when agents can start like evolving on their own and like agents can work with agents. It's like, when I heard that I was, my brain kind of exploded and I was like, if AI is working on AI to make AI better than like, that, is that the singularity point? Speaker 2 (20:40.31) Yeah, this has been over a year ago, but Microsoft had come out with something called AutoGen, which is exactly that. You can set up agents that have different system prompts, and there's a broker that works to manage them. We did some simple use cases with, you're a tester, you're the developer, here's the thing you're building, and then there's the manager, and they would go back and forth. that definition of the agent where they had a sandbox to actually run code and observe test results. It was pretty cool to watch how that interaction goes down. But again, that's where I think on the business side, if we're constructing these agents, now it's getting exponentially more complex to make sure that the whole system of agents doesn't. do harm in some way, right? And it's tested. You think about testing the interactions of a half a dozen agents working together for a common goal. for a month 24 seven at the speed of whatever that Intel processor and video processor is cooking at. complexities goes way up. Speaker 3 (21:53.55) You brought up something that was interesting earlier and I just would maybe double click on a little bit around how do you properly prompt? Right. I think because the power of AI, you know, if you just say you're a sales expert and then give me an output. And if I typed in a very long, like detailed, you're a sales expert in X industry doing Y thing and had these were like, how, how would, how do you go about structuring that? And like, what is like your opinion on prompting and so on? My overall advice in the strategy that I've used is experiment a lot. But there are some great guides out there. So Anthropic, has the Claude models, they have a great prompting guide. OpenAI has a really good one. And there's a lot of resources out there to learn how to prompt. The book. that's a very approachable book about just generative AI. It's called Co-Intelligence. I cannot remember the author off the top of my head. He has some really good ideas in there about prompting. it's different depending on the context. If you're writing code, there's a lot of set up in terms of expectations and things like that that you're going to provide. even understanding like what input formats and how to provide data. Like a lot of the models can understand Markdown really well. And so if you're providing, you know, maybe a spec for something that you're going to do or some requirements or some sample data or something like that can be really helpful. to do the, the, you're the expert thing is great. Also while saying what you don't want it to do, cause the models. So these providers, the commercial providers get paid by the token. And a lot of times if you ask just an open ended question, you get volumes of stuff is, is of limited use. Right. And so if you come back and you say, or you start and you say, Hey, listen, Speaker 1 (23:57.283) which. Speaker 2 (24:07.522) don't generate any code right now. We're going to talk about the problem. And we're going to like talk about the steps and by the way, when we talk about steps, I don't want you to give me all the steps. Like, I'm like, I want to do step one, I want to figure out what that is. And then I want to test it and make sure that it's right. And then I want to iterate from there. And so, you know, the providers there, they make more money when there's lots of stuff that gets spit out. And structure it? Yes. Speaker 2 (24:37.058) but it's also just not useful, right? It's like, how would you want to interact with a subject matter expert human in front of you? kind of bring that to the table. Sure. You wouldn't want to ask someone a question and they go off on like a 15 minute dialogue. And so on step 15, you just like passed up one. That makes sense. Well, and think the other like analogy I've drawn for the current version of AI, both generative and agentic, is it's like rendering. If you ever like how video games are rendered or how like computer graphics are rendered, back in the day, we could only render very small, low resolution images that didn't have a lot of pixels. But if you zoomed out, you're looking at a rendered image from far away, it looks great. man, that is totally Toy Story, right? Toy Story looks great from far away. You zoom in, there's some, you you... you wish it was higher definition like today's graphics are. Whereas fast forward to today, you can zoom in on any video game in a single frame and literally read the like Haynes logo on the character's underpants. Like that's how zoom in you can get. And I feel like we're in the version of AI where we're all the way back in the 90s and we can only render so many pixels. So if you ask AI to do a very broad thing, it'll paint you a really accurate picture broadly. Be very careful starting to ask it to do very explicit things. And if you do, you need to crop the image and only render that explicit thing. That's back to your step one thing. Don't let it render too many pixels or it, know, for this isn't true, but it runs out of memory is what it feels like. Right. And then it starts hallucinating things. can't remember what was in the upper left corner by the time it gets lower, right. structuring your prompt and structuring your expectations of AI in this way is a good analogy. And as we get better models with better algorithms and faster hardware, and it can do more for prompt. Now we're going to see finer and finer resolution and you can get more and more of what I'm going call Speaker 1 (26:25.898) accuracy as you zoom in. Yeah. Is this like resonating with your experience with AI as well? generating an outline of something you want to do, whether that's running software or developing a hardware product or starting a company or finding the right birthday gift for your dad. AI is amazing at this super high level, like outline of the steps you should take. Do not then have it right down all the steps right now. It is not an expert yet to zoom in and get all those intricacies correct. Yeah. Speaker 2 (26:52.194) Yeah, you know, something else I've had a lot of success with here recently that I would recommend is like start with a very high, like just the idea and have a conversation with the LLM about how to approach that, that particular thing. And, that can really be helpful. Like one of the, one of the, the, the advantages of these tools is that I almost every day learn something that I I learned something new as part of this as maybe an option. But if you're doing like the design, maybe, maybe it's of an application or you're just doing something creative, start with maybe like, help me think through this and how, and how to think creatively about this problem. And then one of the things that you can do is you're like, okay, now we've, we've thought about this and I'm still the boss. Like we've come up with what I think is a really cool idea. now create a prompt for me that I can use to take this conversation forward. That has actually worked really well because you can kind of summarize everything that you've talked about and turn that into a prompt. And I even type in, this is something I'm going to carry this on in another conversation with an LLM. And it's going to, I'm to use Claude to do that. You know, might even specify what version you're using. That might get an interesting response from OpenAI's model, but yes. Speaker 1 (28:22.99) Well, but that's interesting because if someone has asked in the forums about the best way to prompt something like this, that actually might lead to good results. And yeah, the one topic I want to make sure we touch on here in AI before we run out of time is really how it's affecting engineering, both software and mechanical, like just engineering. I'm going to call early career progression. Senior and above engineer seniors and principals and leads and insert titles here. Anything above five years, six years of experience, AI is amazing for. It's just going to supercharge you. You already have the intuition to know when the AI is wrong. So it's wrong a bunch of times, but you can filter out the 20 or five or 30 % of the time is right and put it into your project and move faster. You can be wrong much faster than you could be wrong on your own. What I don't know is how do you become a senior engineer today when you start as a junior and you haven't had to break your fingers in the keyboard writing bad code and then airing out and you understanding the underlying fundamentals of writing software and writing good code or for myself. designing good plastic parts or designing good metal parts or for an electronics guy laying out the right circuit board. As these tools get better at taking a prompt and coming to an output, how does someone achieve senior status? Because this is blending into the same equation I had when we talking about what was remote work doing to the workforce. If you were a senior and above as an engineer, remote work makes you super effective in the short term because you can just go cook at your craft. But in the long term, might lose vision on the project and cohesiveness. That's why we're all back in person. But there was some short-term benefits to it. But when you were a green engineer getting hired at a college in a remote work environment, how do you get the mentoring and learn the intricacies of growing in your organization? think AI is doing that same function to like pure problem solving almost, or core craft maybe, is that the right word? Whether it's mechanical, electrical or software. How do you see that happening at e-engineering or from your perspective in software? Yeah, I was listening to a podcast or reading a book. can't remember where it was, but there's been this issue has existed in the medical industry for some time because there are robots that can perform surgeries. And as a result, the new surgeons aren't getting the same type of practice that the older surgeons have. so if their capacity is, if the robot's capacity is limited, Speaker 2 (30:45.908) or some, or it is like, and they have to step up and actually do the surgery that it could be a real challenge. And so what I've, I've been working with a couple different like junior developers who are, who are kind of getting in the space. And the main thing for me is this can be something that will accelerate, accelerate your learning in a way that was previously not possible. range of motion or something. Speaker 1 (31:10.835) what YouTube did for me. That's my answer. YouTube accelerated my engineering education faster than anything else in the internet did at the time. Absolutely. so, you know, that's my, my advice for writing software at least has always been write a lot of code, right? Because you got to make mistakes. And now I think things have changed where, you know, you, you have this amazing technology, but whether you're in school or you're, you're a junior developer trying to, you know, get that experience. If you're letting it do your homework for you and you don't really understand what's going on, that's not going to go well. Right. So you just, you need to, you need to be the boss, but now you've got a resource where you can explore lots of different, you can have, you can, you know, review my code. Let's talk about the performance aspects of this code or the security aspects of this code. There's a lot that you can do to accelerate your learning in a way that wasn't available. Right. I mean, like YouTube, for example, is it's, it's difficult. You, got to sit down and watch a whole YouTube video. Usually it's difficult to skip forward to the parts that really apply to you and that sort of thing, maybe. But, but now you, it's like all situationally kind of like dependent, right? You can go and fork your conversation any which way to explore a particular topic. So. I usually just say, if you love to learn and you're using this tool to accelerate your knowledge and make a lot of mistakes and things like that, exactly, speed it up. Steve Yegge, so this guy is, he worked at Google. He's worked at several different places. I think he works at a company called Sourcegraph now. It's an AI, you know, this is not a plug for the product. Speaker 1 (32:49.006) And just make them faster. Speaker 2 (33:05.344) Steve has some great articles. He started off with death of the junior developer. And then he, and then, and then he, a lot of people got angry that he wrote that. And then, so he, think he wrote the next article is like death of the lazy developer or something like that. And now I think there's even a third article and it, he, he talks about just that. Like you, if you're lazy and you let it do your work for you, that's not. You're going to miss all the fundamentals that actually make a senior developer a good developer. It's not just because they know the syntax better or other things. They know the underlying fundamentals of it in the syntax and all things that go in there so that their strategy is actually the important thing they're tuning. Yes. I think you've got like Benioff, you've got Zuckerberg. In 2025, they basically both announced that they're not hiring for like new developers in 2025. And I think to your question, it begs the question of like, think in order, regardless of the software, but hard development, whatever it may be, the skill that AI is accelerating, you basically just have to be better to get into it. because now the bar is simply higher because the access to be a junior developer is there. Because you can do more. this is Glassboard. I said this on other podcasts of ours even like Glassboard would not have existed at the capacity it is today in the 80s or 90s because I would have needed a much larger team. I couldn't have ever started with three or five people and had five clients. I would have had one client with three or five people and I would have had a team of a hundred to do five clients. Right. And you can do hardware development so much faster with the better CAD tools or simulation tools or 3D printers or CNC machines or Speaker 1 (34:49.794) or pick and place line that wasn't a million dollars, right? We had a pick and place line that's a five figure number. And I can now do so much more, so much faster with so much less capital. And my engineers are actually like, what they're expected to do in a day is insane. If you'd contrast that with an engineer from the nineties, how many products they're supposed to touch in a week and what their outcomes are and how fast they have to move. And this has been great because it's democratized engineering that those of us that are really talented can do way more. And those my clients that want to start companies and start new products don't have to be engineers and go hire way fewer engineers at way less total capital cost to go get a new product out, which ironically has made the competition in new products harder because it's easier to get there and it's more competition. So it's one of these like, I actually think it's generally good. We're getting more products that are more uniquely fit for more people that are cheaper than ever and are faster to develop than ever. And they're getting better and more data and all these things. and it's not taken away from the craft, right? The artists that are doing the industrial design or the engineers that are solving some intricate problem can just do it with less of their own time, do it with less of a team they're reliant on and be more part of the creative journey themselves, because their tools are getting better. And I think AI is doing that to software right now. mean, software used to be really hard to write in the 80s and 90s. There weren't the libraries and the tools and the standards and all this stuff. Also. Speaker 1 (36:14.05) Fast forward now, there's certain languages that are just really easy to write in because it prevents you from making a lot of computational mistakes with the words I'll use, right? And I think AI is going to further increase that. the neat thing for developers that are good, that aren't lazy, you just get to do way more, way faster. Yeah. Right. You maybe you would have needed a software team of 20 people to launch that product. Yeah. You might be able to do a 10 or five now, but those people need to be experts and use the craft. But now you have what 20 developers in the market for one company now. that's four additional, know, four X the companies that can compete and go do this thing. So for the end consumer, think you're going to get crazy good value for money in the long run. But I think leading up to there, we're going to wade through a lot of garbage because AI is generating garbage at a ferocious pace. Bad software is going to be written. Bad data is going to be presented and saved as articles that a human claim they wrote, but really they prompted AI to write this article. The next, I don't know. It moves fast. Maybe it won't be as long as five years, but I think for the next two to five years, we're in for a bumpy ride as far as content goes. Does that ring true for you? yeah. think, you know, there was, there's a lot of tools that have existed over the years that have allowed people that don't have a lot of technique, technical knowledge to create things. So Lotus notes comes to mind. yeah. Right. I mean, I, I can't, like I've been at so many companies where some really, interesting Lotus notes application, like complex Lotus notes applications have been created. And in the end, You still have to maintain it. You still have to think about how to enhance it and evolve it. And I think, yes, right now, like you have the ability to have this abstraction of a natural language interface that's creating something even more complex, which is the underlying just code. So I definitely think that we're going to be, I'm very optimistic about, about this, this sort of thing. I think the ROI, the ability to get ROI for companies that previously couldn't get there by Speaker 2 (38:18.05) but by investing because the cost is lower is great. But I also believe that we're going to have quite a few instances where people are going to come to us with something that is 70 or 80 % done. And what is done is maybe not going to be produced a level of quality. So there's going to be some messes to clean up and that sort of thing. But if it helps people innovate and get, like you said, that MVP, those early ideas going, we Speaker 2 (38:47.606) and they recognize that maybe what they produced is not the quality that's needed for the long haul, then could be good. Yeah, I think it'll be super interesting to see how this all boils down, right? The next couple of years will be a bumpy ride and then I can't wait for five or 10 years from now where it's just part of practice of the art and it's really good. And then we'll be worried about the next technology that's going to take us over. Hopefully by that time it's just, you know, real self-conscious AI that decides to either hold us hostage and make the world its or help us either way. Either way, it's certainly not my problem. I guess I've got one final question to pivot a little bit away from, from AI just around. think we get to Indianapolis service firm leaders, you know, on a podcast. Just curious and like from your respective Troy and Grant as well when it comes to building maybe in Indy, but it's just service businesses in general. Like you guys have been around for quite some time. And so what do you feel like, if someone was just starting a service business today or sorry, when the last year, like what are the core things that makes a service business successful. And I guess what has made you guys successful over the last, think, what is it 25 years? Yeah. Yeah. We're, trying to figure out what we're going to do for our 25th anniversary this year. I mean, for me, it really comes down to customer satisfaction and delivery, right? I think it's, it's, it's so much fun to start a project, right? Starting something new is always fun, but like getting it across the finish line and delivering that value and Speaker 2 (40:23.5) You know, when you're in professional services and you you're providing services to a client, like things don't go perfectly. People are going to more human. make mistakes. And I think it's part of it is how do we handle those situations? How do we work through them? How do we build trust? And and also like the the the mentality of staying focused on delivering value. We're all geeks, Like it's, some of the stuff is so much fun. one more feature. How cool would it be if the death of every good project. Right. Like if I only like I could build a framework that we could build upon and this would scale like no, please no. So like it's it's just we got we we we can have fun with the technology, but we got to keep the end goal in mind and and and deliver deliver the value high quality, you know. And I think it's in the other aspect is kind of internal, too, is people that work well together, know, teams where people that everyone's willing to help. You know, you kind of have this with at least with our teams, you know, you kind of have this self organization that happens where you put these individuals on a team and they all have different strengths and weaknesses. And where where the team members come together to support one another, you know, I'm I'm Speaker 2 (42:02.166) I grew up playing basketball and that sort of thing, right? Like, so it's the teams that gel and anticipate each other's moves on the court and things like that, those are the ones that really, you know, win. And it's the same, that same kind of mentality is just having a really, a well-functioning team is huge. And that comes down to relationships and treating each other with respect and being professional. And the ironic part is the best gelling teams are the meanest to each other on the surface, but they all actually love each other and take bullets for each other. The more a nerd centric organization, this just my experience, both at Purdue, professionally here and other places, that if they're actively rubbing each other in front of clients, that team is so confident their actual ability to deliver, they're the team you want. You want the team that is so self-deprecating amongst each other that they couldn't possibly say this unless they truly were just the best. And that's one of the things I've seen. What you really hit on there as far as like that delivery side is it's one of the benefits we get from our startup clients. know, small and medium businesses are usually cash flowing their next development enterprises. Some magician in their executive team has greenlit funding to do this next thing. So the money's coming in startup land. Like you ship or you die. Like we have raised our A round. We are getting no more money. We have to make revenue before we die. And that pressure of ship or die actually makes engineers better. Most people think engineers do best given infinite time, money and resources. We actually do our best work when we're given just the right amount of constraints and we know the one knob we can turn. We're gonna find the ultimate solution to that equation, right? Every new variable you add makes an engineer's job more complex, right? And hitting that maximum value proposition point harder. Clients that come with a clear vision, a clear budget, a clear set of knowns, and they let you know, here's the variables we're willing to give on and you... Speaker 1 (44:00.268) you fix this one for us and let me know if we can move B but really don't want to and let us then suggest solutions that lead to those outcomes. And I think that you nailed it perfectly that like it's all about delivery because it's ship or die. Yeah, yep. Everybody, this is the Hard Tech Podcast. Troy, thanks so much for joining us. Tune in next week. Thanks, y'all. Thank you.