Production horror stories with Dan Neciu === Paul: [00:00:00] Hi there, and welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket provides AI-first session replay and analytics, which surface UX and technical issues impacting user experiences. Start understanding where your users are struggling and try it for free at logrocket.com today. We're actually going to be talking a little bit about struggling users and finding issues, because we have Dan Neciu on with us today. Dan is a technical co-founder; on LinkedIn he says he's a staff engineer, working at CareerOS. And we're going to talk about, as you say, Dan, production horror stories. Welcome to the podcast. Dan: Hi Paul, it's nice to be here. Thank you for having me. Paul: So recently you gave a talk and you went over bugs, right? Or, as you like to call them, the bugs bunnies, which was a great name. So I guess we're going to talk about some bugs, what they mean, and how you can avoid them when you're developing software, either for yourself or for your company.
I guess to kick it off: what do you [00:01:00] think are some of the strongest patterns that you see with bugs? Is it something sweeping that takes down all the servers? Or is it something that you don't notice? Where do you find the most impactful bugs happening, in these analyses and conversations you're having? Dan: Yeah, of course the most problematic are the ones that take down everything, like we saw last week or two weeks ago, when CrowdStrike took down half the internet with an issue. We don't want that, for sure, and how much money was lost? Probably in the billions. But for me as an engineer, the most problematic issues are the things you cannot find, or the things you cannot know. You investigate, for example, why 3 percent of the orders are not being completed in an e-commerce application, and you just cannot find the reason. You spend weeks analyzing, trying different things, deleting code, adding code, monitoring everywhere, just to [00:02:00] see where this problem can be. So this, for me, is the hard part: the detective work that you have to do on a daily basis just to find some of the issues that are happening in production. Paul: In your talk, you mentioned some of the detective work that you've had to do for these use cases, and lessons learned. As an engineer myself, I'm like, oh well, we put in monitoring, we put in Datadog, or insert Splunk or whatever you want to use. Why didn't I catch it? So I'd love to hear some of your takeaways: things you learned, why it happened, and maybe why I, as an engineer, am missing it, even though I put in the Grafana, I put in whatever. Why did I miss it? Where are we going wrong?
Dan: Yeah, I think the biggest issue is the way we architect our code. We don't really code in isolation. You write a component, especially in frontend, you add tests to it, you make sure it's really isolated and nothing can go wrong. But then a new requirement comes, and someone adds something else, [00:03:00] someone adds something else, and all sorts of business requirements start to leak inside the component. Then it gets complicated and complicated, and it talks to other components in different ways. And after months and months of development, with many people adding more and more stuff, things happen, and things slip through the cracks of your unit tests or your end-to-end tests. And that's why things get to production. Even if you have monitoring software, it catches the errors, but not until it's too late; you don't prevent them. So I think a really good practice is to keep your code very, very isolated and follow best practices for design patterns. Paul: So you're saying it really comes down to the architecture at the front of the pipe. Dan: For sure. And I think keeping it simple is always better. Paul: If you're working with an existing stack, you're joining a company, and you're pushing out features, maybe you don't have complete [00:04:00] control over the architecture, and you're trying to work with what you've got. So there are some great examples that you mentioned. I'm thinking about the $100,000 Netlify bill. There are YouTube videos about that too; I know Prime has a video on it. That's a funny story. How can we avoid situations like that? What were your takeaways?
Dan: So, for the bills that pile up when you're not paying attention, of course, you always have to take traffic into account. For me, even when you're building a static website and you're thinking, ah, I'm just going to host it on a CDN or whatever, there are people out there who just want to see the world burn. They just want to create chaos, and sometimes as a joke. They do it for fun, and they don't realize that there's a person there who built this, and it can actually harm them, like a hundred-thousand-dollar Netlify bill can do to you. So you always have to take that into account and monitor your traffic. Have alerts in place every time there's a [00:05:00] spike. A lot of these software-as-a-service tools don't give you the option to have limits in place. Netlify doesn't have an option that says, hey, if it goes to a hundred dollars, stop the website. They don't allow that, so you have to do it yourself. You have to build it yourself, and just make sure you monitor your services every time. Paul: Do you feel like traffic is one of the biggest attack surfaces because it gets pushed to the side? Like, oh yeah, we'll rate limit, it's whatever. Do you feel like that's a large surface area that's underestimated? Dan: Yeah, for sure. And you can have firewall services that can protect you. You have all these captchas that can basically detect if all of the traffic is coming from one IP, or from things in the same area, and then just cut them off. You can do this, and for sure that will protect you. But it is one of the vulnerabilities we have, [00:06:00] because it's the one place where everything is open to the internet and anyone can access it.
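Dan's "detect traffic coming from one IP and cut it off" can be approximated with a per-client fixed-window counter. In practice you'd lean on a CDN or WAF rule rather than rolling your own; the limit and window values below are arbitrary illustration values, and the class is a sketch of the idea, not a production rate limiter.

```typescript
// Fixed-window rate limiter keyed by client IP. A client gets `limit`
// requests per `windowMs` milliseconds; anything beyond that is refused.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request from `ip` at time `now` (ms) is allowed.
  allow(ip: string, now: number): boolean {
    const entry = this.counts.get(ip);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New client, or the previous window expired: start a fresh window.
      this.counts.set(ip, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

The point of keying by IP is exactly what Dan describes: one abusive source gets cut off while everyone else's traffic flows normally.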
Unlike other services in your application: if you build microservices or similar, they can be in a closed network and cannot be reached by outside forces. Paul: Let's talk a little bit about UI, because I know you love CSS, Dan. There was a great example you gave about a button. The button was fine. The button was working. But it killed some of the sales. Can you talk to us a little bit about that, and the takeaway about how you would catch it? Because in a unit test, people test: okay, is the button clickable? Does it work? What happened, and how can we avoid a situation like that? Dan: Yeah, it's a pretty funny story. Funny now, not funny then. We added an extra loading state to a button. When something was happening on the page, we wanted the users to feel like we're working, the server is working, we're fetching some data. Pretty basic stuff. So when this was happening, the button had a loading state and a little spinner on it, [00:07:00] which had a nice animation. The problem was that this button was used everywhere in the app, but one place in particular had a different framework behind it. In the checkout page, we weren't using the two-way data binding framework that we had in the other side of the app. So when the loading state was applied to the button, it was never removed. The button kept on loading, and of course the user saw that it was loading, saw that the page was working on something, and waited, and waited, and waited. And when the loading didn't go away, they got frustrated, and it's like, well, let me refresh. Because of this, orders were lost. Paul: You're like, what if it double charges me if I clicked it and it wasn't ready? I don't know, I don't want to click it, right? Okay.
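The failure mode Dan describes can be reduced to a few lines: a loading flag that is set before an async call and only cleared by framework machinery that one page happens to lack. The class names and structure here are an illustrative sketch, not the actual code from that app; the fix is to make the reset unconditional with `finally`.

```typescript
// Minimal model of a shared submit button with a loading state.
class SubmitButton {
  loading = false;

  async submit(action: () => Promise<void>): Promise<void> {
    this.loading = true;
    try {
      await action();
    } finally {
      // Without this, a page missing the two-way binding (or an action
      // that throws) leaves the spinner on forever, which is exactly
      // the stuck-checkout bug in the story.
      this.loading = false;
    }
  }
}
```

Clearing the flag in `finally` means no framework, success path, or error path has to remember to do it.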
Dan: It scared a lot of people. The problem was that it didn't scare enough people for us to notice it immediately. So we released it, and of course we didn't catch it. The thing is, users didn't click on it, but not enough to trigger an [00:08:00] alert and make us think, okay, we're losing sales, what's happening? It just lost a little bit, a little bit, every day. And we only figured it out at the end of the month, when we compared this year's month with last year's month. Then we dug a little bit deeper and saw that the checkout events were not the same as in previous months, and we correlated that with the release and found out what the issue was. Paul: So it was via detective work of looking at the code diff itself. Wow. Okay. Dan: Well, the loading issue was not that easy to find. It didn't happen every time. It only happened when you changed the payment. So if you had a payment and then you changed to another payment, then this issue would happen. So of course no QA caught it. No end-to-end test caught it, because the button was working, like I said. It was really hard to find, so we just had to track analytics events, like order completed and added to checkout, and see the difference: okay, [00:09:00] why doesn't the checkout complete, where did it dip, and when did it start? And then, like you said, we went commit by commit and found the one that could actually affect the checkout. Paul: Looking back, do you think that was something that should have been remedied at, like we talked about, the top of the pipe, the architecture, the way you're thinking about it? Or is this something that is more, hey, you should use tracking whenever you can, and you should think about the tracking? What's your takeaway there?
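A leak this small only shows up in aggregates, which is why the month-over-month comparison caught it. A minimal sketch of the check Dan describes, with made-up event names and a made-up tolerance: compute conversion from "checkout started" to "order completed" and flag a release if it dipped.

```typescript
type AnalyticsEvent = { name: string };

// Share of started checkouts that actually completed.
function conversionRate(events: AnalyticsEvent[]): number {
  const started = events.filter(e => e.name === "checkout_started").length;
  const completed = events.filter(e => e.name === "order_completed").length;
  return started === 0 ? 0 : completed / started;
}

// Flag a regression if conversion dropped by more than `tolerance`
// (relative), e.g. 0.05 means "alert on a dip of more than 5%".
function regressed(before: number, after: number, tolerance = 0.05): boolean {
  return after < before * (1 - tolerance);
}
```

Run per release window rather than per day and the "a little bit every day" pattern becomes visible instead of hiding inside normal noise.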
Dan: A big thing we never think about is how huge CSS is in every front-end application today. We assume that frontend is React or Vue or Angular, but all of these have one thing in common: the styles that we apply to our pages. And if one of these styles has a problem, for example if it hides the button completely and people cannot click it, or what happened in my case, it can create really, really [00:10:00] big issues. And these issues are not caught by unit testing; maybe by end-to-end testing. Thankfully, now we have visual tests that can actually verify that components are behaving as they should. But again, this was a real edge case. And it did happen to me twice. Paul: Twice, in terms of a half-working checkout that was sneaky. Dan: Exactly, yeah. Once with this button, and another time, again, with the payments, the selecting of payment methods. We had a library that was used for UI. I don't remember the name; it was in Vue. Anyway, we decided to remove it so we could reduce our bundle size. And we didn't realize it, but a key piece of its CSS was being used in the checkout form when you selected payments. So that didn't work anymore. Now, this time we did catch it immediately. It went out, and within [00:11:00] two hours people stopped buying. This was for a food delivery app, so it had a lot of new users: 60 percent didn't have a payment method added. So when they tried to add one or change it, it didn't work. We saw immediately how orders went down, and we reverted everything. Paul: So that was more of a refactor, I guess you could say. Dan: Yeah. Paul: How do you suggest people handle refactors responsibly, given this experience and other experiences you've had?
Dan: I think the most important thing to do is to have tests in place. I wouldn't even try to do a refactor, especially on checkout or on authentication, without proper unit tests, end-to-end tests, and integration tests, especially if you have a really big project. Secondly, I would do the refactoring gradually. I wouldn't do a big project that takes three, four, six months and then [00:12:00] just roll it out like crazy. I would do it gradually, a little bit at a time, and test it out in production. Always do deploys to only 10 percent of your audience, or even smaller, just to test: rolling deploys. Also have feature toggles in place. This is another thing that I really like to do. When I'm doing a really big refactor, I keep the old version. Don't delete it; build something new and have a feature toggle that tests the old version against the new version, especially with tracking in place, so you can see that everything is behaving normally regardless of version. Paul: So you have two versions running live, and you can cut between them or revert back. And do you consider this a must for whatever projects you're building and refactoring moving forward? Because here's one thing, Dan: people listening to this are going to go, that's great, Dan, it sounds good, but do I really need to do that? Dan: I would [00:13:00] say for sure if you're refactoring the checkout, like... Paul: Okay. Dan: If it's payments, yes. Take another month. Estimate more and do this, just to be careful. Of course, if it's features in other parts of the app, I would not do it every time. It takes a lot of time, but if you have the setup already, it doesn't hurt. But normally, I understand. I'm in a startup now, so I understand the need for speed.
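The toggle-plus-gradual-rollout setup Dan describes usually combines two pieces: a flag that keeps both code paths alive, and a deterministic way to put the same user in the same bucket every request. The hash below (FNV-1a) and the function names are a toy sketch of that idea, not a real feature-flag product's assignment scheme.

```typescript
// Deterministic bucket in 0..99 for a user ID, using the FNV-1a hash.
// The same ID always lands in the same bucket, so a user doesn't flip
// between old and new checkout on every page load.
function bucket(userId: string): number {
  let h = 2166136261;
  for (const ch of userId) {
    h ^= ch.charCodeAt(0);
    h = Math.imul(h, 16777619);
  }
  return (h >>> 0) % 100;
}

// Feature toggle: rolloutPercent = 10 sends roughly 10% of users to the
// new path, and setting it to 0 is the instant revert to the old version.
function useNewCheckout(userId: string, rolloutPercent: number): boolean {
  return bucket(userId) < rolloutPercent;
}
```

The revert story is the whole point: dropping `rolloutPercent` to zero is a config change, not a redeploy.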
So people don't have time to write a hundred tests before starting to write the code. I get that. But for sure, if you're touching login, authentication, or checkout, you need tests in place and you need at least a fail-safe. There's this concept of two-way-door decisions: you don't want to make an architectural decision where you've gone through the door and you cannot go back. So this is a really good practice when you're deploying, refactoring, or touching something sensitive. Paul: Always having the ability to go back, essentially. Dan: Yeah, and [00:14:00] test it gradually, like I said. Paul: Yeah, I feel like we've said the word testing ten times in the past ten minutes, which is fine, but it makes me want to ask: how has your testing methodology, we could call it your architecture of writing tests, changed over time? Because I know when I first started writing tests, or tried test-driven development, my tests were trash compared to where they are now. They're probably still trash, but hey, they've changed. So how have yours changed over time? Dan: So I've been writing code for about 13 years after college, not counting college. At the beginning, I was very, very anti-testing. I felt like tests were a waste of time. Why should we do it? You're always testing the happy path anyway, so QA can handle it. I always felt that, until I got a sense of reality. It hit me the first time I saw the value of them: we were supposed to migrate a project from one framework to another.
And [00:15:00] thankfully it had thousands of tests. Thousands. I didn't write those tests, but moving this project from one framework to another, relying on tests that pass and fail, and just getting everything to green, was incredible. The project was so big that you couldn't manually cover all the edge cases; there were different users, personas. It was huge, huge, huge. And just having 70,000 unit tests that ran and verified that every component you're writing is correct and does what needs to be done really made me a believer in unit tests and end-to-end tests. And then, of course, technology evolved. Unit tests are really, really fast now. They can run in parallel. It doesn't take minutes; it's seconds, and bam, you see whether you have a problem or not. It's the same with end-to-end tests. They used to be really, really slow, and my laptop would be on fire if I wanted to [00:16:00] run them. But now, thanks to containers and all these tools for writing end-to-end tests, like Cypress or Playwright, everything works really well. So I'm really glad for how technology has evolved, and how easy writing end-to-end tests, integration tests, or unit tests is right now, compared to how hard it was seven years ago, when it was super hard to mock everything. So you actually feel empowered now, and testing is easy, especially with AI. You can just give it the code, and ChatGPT will write you a hundred unit tests. Paul: That's one thing it's really good at. Yeah. So you've grown more affectionate of testing, it's become easier to test, and you encourage people to test more. How do you think about the tests that you write? Are you more of an end-to-end person, or more of a unit test person?
You could say it depends, it changes project to project, but more importantly, focus on how [00:17:00] 13 years of development have shaped it. Dan: I'm doing frontend and backend, but I like the frontend part more. So when I do that, of course, I like writing components, and I like using Storybook and seeing the components come to life without being part of the project yet, without having any logic. And now I really like writing visual tests that click on your component and make sure it behaves as it should in different states. If it's loading, if it's not loading, if it's big, if it's not big, depending on all the prop combinations, make sure that your component actually looks good on every device you can check: for iOS, for Android. It's really cool. There are a lot of libraries out there that do this; Storybook has some visual testing integrated into it. It's really nice, so I'm a big fan of this recently. Before, [00:18:00] I wasn't that much of an end-to-end test guy, because it seemed like they took a lot of time and complicated the project too much. They were really hard to maintain, and you're really dependent on the backend. So what do you do? Do you mock the backend or not? A lot of questions and no easy answers. More recently, I also like contract testing. This is a testing practice between the frontend and the backend, with a library in the middle that makes sure the backend is always sending what it's supposed to be sending and doesn't break the frontend. I found it to be really nice, but again, a little tricky to maintain. But when you have one service that is sending data to a hundred microservices that also send data, and everything happens in a gateway, this becomes really critical at scale.
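The prop-combination sweep Dan describes for visual tests can be enumerated mechanically: every loading state times every size times every platform. The helper below is my own sketch of that enumeration, not Storybook's API; in a real setup each resulting state would become a story or snapshot.

```typescript
// Cartesian product of prop axes: start from one empty state and extend
// it axis by axis, so {loading: 2 values} x {size: 2 values} yields 4 states.
function states(
  axes: Record<string, readonly unknown[]>,
): Record<string, unknown>[] {
  return Object.entries(axes).reduce<Record<string, unknown>[]>(
    (acc, [prop, values]) =>
      acc.flatMap(partial => values.map(value => ({ ...partial, [prop]: value }))),
    [{}],
  );
}
```

For a button with loading, size, and platform axes of two values each, that is eight snapshots, which is exactly how the stuck-spinner class of bug gets caught before release: one of those eight renders looks wrong.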
Paul: One thing we do like doing on this podcast is [00:19:00] name-dropping particular technologies and frameworks. You did mention Storybook; you like using Storybook for the visual testing. What else do you like to use? Or maybe you haven't used it, but you're interested in it, other folks are talking about it, just so people listening can take a look at what they might use for visual tests, for the end-to-end tests, or for the contract testing that you mentioned. Dan: So for contract testing, I really like this framework, or library, called Pact, spelled P-A-C-T, which basically creates a pact between different endpoints. For end-to-end testing, I love using Playwright recently, because Cypress, I don't know, has gotten kind of stale in the industry a little bit, and Playwright is the new kid on the block. It's owned by Microsoft, and it seems like they're trying really new things, and there are really cool people in the industry who work on it. Other technologies that I like to use: [00:20:00] Vite and Vitest are really cool. Paul: Yeah, that one's really popular. Dan: Yeah, for sure. Jest used to be okay for unit testing, but Vitest is super fast. Everything works really nicely. Paul: Yeah, having the tests be fast is huge, because even if you have the test written, I've seen people not even run it because it takes, like, two minutes, God forbid. Dan: Yeah, I know. And I used to do this as well when we had too many tests: you just push the code and let the CI/CD pipeline see whether it works or not, and trust GitHub to see if your tests pass.
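The contract testing Dan describes, which Pact implements at full scale with pact files, a broker, and provider verification, boils down to one idea: the consumer pins down the shape it expects, and the provider's real responses are checked against it. This toy validator is only that core idea, not the Pact API.

```typescript
// A contract maps each required field to the primitive type it must have.
type Contract = Record<string, "string" | "number" | "boolean">;

// Check a provider response against the consumer's contract. An empty
// result means the backend still sends what the frontend expects; each
// entry describes one breaking change.
function contractViolations(
  response: Record<string, unknown>,
  contract: Contract,
): string[] {
  const violations: string[] = [];
  for (const [field, expectedType] of Object.entries(contract)) {
    const actualType = typeof response[field];
    if (actualType !== expectedType) {
      violations.push(`${field}: expected ${expectedType}, got ${actualType}`);
    }
  }
  return violations;
}
```

Run this in the provider's CI against every consumer's contract and a renamed or retyped field fails the backend build, instead of failing the frontend in production.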
Paul: Do you find, at this stage of your career and in leadership, that you're looking at this pie chart of how much time is spent on new features versus tests? Has it finally settled into a balance, or are you still pushing, like, I want to test more, I want to figure out more ways to sniff out the bugs [00:21:00] before they happen? Dan: Yeah, I'm going to say it depends, but... Paul: It depends. That's fine. That's, you know... Dan: No, no, no. Like I said, I'm in a startup now, so for sure I push for tests when they're needed: if it's a big feature, if it's a sensitive feature, a feature that is going to be used by multiple... It all depends on how often we think that feature is going to change. If we think, okay, we're going to build this in two weeks and then it's over, we move on to the next thing, maybe that doesn't need specific testing. But if we're going to have a feature that's going to be improved over and over, for sure we're going to push for testing there, because that will save us a lot of trouble when we want to figure out what this feature does. Testing, in the end, is also good as documentation, not just to make sure that your bugs don't go to production. Paul: And what about network stuff, Dan? Because, you know, it's always DNS. That's the moniker: the problem is always [00:22:00] DNS. The bug is always DNS. And you did mention in your talk even your own microservices taking down your own application because they're so chatty. Are there easy ways to test for that? Because when you think about a unit test, how do you replicate that data?
Theo Brown recently put out a video where he was talking about an article where somebody wrote, "I only test in production, here's why." And it had to do not only with the data that you're playing with, but with the networking, which is so core to how these things crop up. So how do you defend against that? Does that get roped into this whole testing plan? Dan: I think when you reach a certain level where you have multiple microservices and the communication is critical, you start doing penetration testing at each service level. You test each microservice and make sure it correctly handles a specific load, and when you reach that point and you test for heavy load, you also have fallbacks in place: what happens if it's overloaded? And I really like this article, [00:23:00] or talk, from, I think, ten years ago, where Netflix was using this Chaos Monkey approach. They just introduced faults into all their microservices randomly to see what the system did, because they had 5,000 microservices or something like that. So what happens when one randomly goes down? What do you do? What happens when there's a bug in seven microservices at the same time? This practice of introducing chaos into your workload prepares you for the worst thing that can happen. Paul: And this chaos that they injected into their microservices, was this something they did in an end-to-end isolated environment, or did they do it somewhat in production, in a sectioned-off deployment or something, to actually sniff it out? So you don't get a checkout button that bleeds 30 percent of the time.
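The Chaos Monkey idea Dan references can be sketched in miniature: wrap a service call so it fails with a configurable probability, then verify the caller's fallback path actually works. The failure rate, fallback, and injectable `random` here are illustrative choices for a sketch, not Netflix's implementation.

```typescript
// Chaos wrapper: with probability `failureRate`, skip the real call and
// take the degraded path instead, so the fallback gets exercised in tests
// rather than discovered during a real outage. `random` is injectable so
// the fault injection is deterministic under test.
async function withChaos<T>(
  call: () => Promise<T>,
  fallback: () => T,
  failureRate: number,
  random: () => number = Math.random,
): Promise<T> {
  if (random() < failureRate) {
    // Simulated outage of the dependency.
    return fallback();
  }
  return call();
}
```

Set the rate to 1 in a staging run and every caller without a working fallback surfaces immediately, which is the point of the exercise.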
Dan: I don't remember exactly. I think they did it not in production, but in their staging environment. I don't remember exactly whether they had continuous [00:24:00] integration or continuous deployment set up, but I think the common practice nowadays, of course, is to have a different setup where you can actually break things without worrying. Even when I was at the food delivery company, we had a beta environment that was used by internal people, all the employees, 5,000 people. And it was used for two weeks before we deployed to production. That way, a lot was caught by our employees, and the employees had benefits for using that app. They had no delivery fee, or 20 per month to order for free from there, and people used it. And we actually caught a lot of bugs that would have gone to production because of it. Paul: It's almost like a forgiving production environment, because everybody's cool. Dan: It's a little forgiving. Not so forgiving when someone from the C-level complains about something and everyone acts like the house is on fire and you have people screaming around, but still. Paul: But still, it's very useful. [00:25:00] That's cool to hear about: an incentivized test program for your employees. Dan: Yes. Paul: You should do that for your employees. Dan: We do something similar. We have a feature in our app which is sort of like a chat box, and once every two weeks, instead of using Slack, which we normally use, we use our internal tool, to make sure everything is working correctly, that all the features we want are working correctly. Which is really nice. Paul: So, Dan, we are running up on time here, which is unfortunate, because I would love to ask about test-driven development and what you think of that. Should the tests come before? Should they come after?
I would love a really quick response to that. To wrap up our conversation, could you tell everybody where the word "bug" comes from? Because this is something I remember hearing only once before, back in school, but it's quite a funny story, and I love the way you explained it. Dan: Yeah, I also love it. When I was thinking about bugs, I was thinking, God, the issues in production are called bugs because bugs are scary or gross or weird. But actually, the true story behind it [00:26:00] was that in the 40s, at, I think it was Stanford or Harvard or somewhere, they had this big computer, a Mark II, one of the first computers, which they were using to calculate something or another, and it wasn't working correctly. They tried to fix the software to find out what was happening, and they couldn't find the reason, until they started opening up the computer and saw a bug that was actually stuck to the electrical circuit. And of course, the name stuck, and it was made popular by Dr. Grace Hopper, who was instrumental in creating the COBOL programming language. She made it mainstream, basically. And for, I don't know, nearly 80 years, it's still going strong. I think even after a hundred years, people will still use "bugs" in software development. Paul: Well, Dan, thank you so much for your time coming in and chatting with us. If people want to learn more about what you do, whether that be relating to the testing and bug-speak that we're doing right now or not, what [00:27:00] are your socials? Do you blog, and where can folks find more? Dan: You can find me on my website, neciudan.dev. I'm also very active on LinkedIn and Twitter, or X as it's called now. You can find me at @neciudan everywhere, more or less. If you have any questions, just hit me up.
I'm actually pretty online most of the time. Paul: And just for everybody listening, Dan's name, Neciu, is spelled N-E-C-I-U. If you're trying to look it up, don't use C-H. It's N-E-C-I-U. Dan: Thank you for that. Paul: Yeah. Well, Dan, thank you again for coming on. It's been a pleasure. Dan: Thank you for having me, Paul.