MIKE: Hello, and welcome to another episode of the Acima Development Podcast. I'm Mike, and I'm hosting again today. With me, I have Kyle from our platform engineering team, who often joins us, and Matt from...well, you do a bunch of things, Matt [laughs]. He's a leader, I'll say that, and he's a director here at Acima who's involved in a lot of things. We've also got Ryan, who's a delivery manager here at Acima and should have some great input. I'd like to defer introducing the topic and start with a story. Oh boy, how long ago was this? Actually, I don't remember how long ago it was. I [laughs] don't even remember which company it was at [laughs], whether it was at Acima or not. I think it was about 12 years ago, so probably prior to Acima. I was interviewing somebody, and I thought I'd give him a coding test because it was a little hard to tell whether he knew what he was talking about. I asked him to write some Ruby code that would output the first 10 numbers of the Fibonacci sequence. He said, "Okay," and just got quiet for a bit. This was a remote interview, by the way; I think he was overseas. I waited a couple of minutes. He came back and dropped some code, and there it was, and it worked. But it was weird. It was, like, weird code [laughs]. There's stuff that's idiomatically normal for the language you're writing in, and this was Ruby code that barely looked like Ruby. It looked like somebody had pulled it out of some other piece of code that was doing something else and then tweaked it. It barely fit. It was just really strange. And you don't usually write really strange code when you're on the spot. You're going to write something really simple. You might have a couple of quirks, but you're not going to produce something that's just really weird. So, I finished our interview, and then I Googled some of the code, because it was weird enough to be unique, and it came up immediately. He had just copied and pasted something he'd found online. Not only did he not get hired, but the contract shop he came from took a real ding because of it. There were some bad vibes thereafter [chuckles] between us and the shop, because they'd sent us somebody who just cheated on the interview. And that was before all of the AI tools we've got now [chuckles], before ChatGPT. Think about all the changes that have happened over the past 15 years or so. This was prior to all of that, and he still managed to cheat. Well, he didn't get away with it [laughs], but he still tried to cheat the interview. And that introduces our topic today. We're going to be talking about how to do interviewing in a world where technology makes it very easy for people to pull out answers quickly and hard to verify them. This is broadly applicable even outside of engineering; think about schools trying to do testing. It's a general problem. We're going to talk specifically about the software engineering world, because that's what we do: how to do interviews broadly, but especially, how do you do interviews when it's really easy to cheat? I've told a story about [chuckles] somebody doing that sort of thing. Do any of you have similar stories or experiences?
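For context, the exercise Mike describes has a famously simple idiomatic answer. A minimal Ruby sketch (one of many reasonable variants) looks something like this, which is exactly why code that barely resembled Ruby stood out:

    # Print the first 10 Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34.
    a, b = 0, 1
    10.times do
      puts a
      a, b = b, a + b
    end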
MATT: Well, I don't know if I have a story. However, prior to this call, I was conducting an interview. And something I think is really important, especially these days when a lot of people work remote and interviews happen virtually, is to make sure their camera's on. And ask questions that might not be as easy to Google and that require some conversation. A demonstration of understanding of concepts, I think, is important. These days you have to dig a little deeper and ask the whys, not just the hows, on a lot of these technical questions. MIKE: I've had other interviews where I asked somebody broad technical questions about architecture, and they answered really fast. They talked about extracting services, for example, to clean up code and decouple things. They answered really well and quickly on all of them, but their answers were brief: the right answer, delivered quickly, and then they never elaborated. I'm thinking of one example where a candidate did that repeatedly, all the right answers, one after the other, like, "Okay, they seem like a pretty good candidate." Then, when we actually employed them, they didn't seem to do anything [laughs]. Apparently, they were really good at either Googling the question or had researched interview questions beforehand, so they knew what to say, but they didn't have the deep knowledge. They were able to give quick answers. So, one thing I've learned is a red flag, and you've touched on it, Matt: you've got to ask questions that let them elaborate. Talk about that "Why?" That conversation is hard to duplicate. You can't fake it in the same way. If you can come up with a tool that will do that, well, then you've got a tool that can maybe do the job, but we're not there yet. MATT: Or if I have my phone with ChatGPT listening in conversational mode, it's going to respond as fast as I could, right? And that makes it a little tricky. By the way, I think I was present in that same interview with you. MIKE: [laughs]. You may have been. RYAN: So, I guess the question here is, are you guys against interviewees using tools like Google, LLMs, and agents to answer your questions? MATT: Well, I want people to have an understanding of what they're being asked. I use tools all the time to help me with my work, and that's just the way of the world these days, right? If tools are available, let's use them. But I need to understand what I'm doing, or I'm not going to properly solve the next problem that comes along. And these tools are not a replacement for a human being. All of us have probably used Copilot at some point, or Cody, or something similar, right? That code needs review, and it doesn't always solve your problem the way it should be solved. So, yes, I support using tools, but if I'm hiring someone, I want them to understand the problems they're trying to solve. RYAN: And I think that's the crucial point we want to discuss here, because ChatGPT or any of the LLM models, what do they do? They go and surf the internet, right? So, when you get your answer back, the model doesn't know whether that's the right answer or not. It just knows it's an example it found somewhere on the internet, right? And the person who put that together had to understand what they were putting together.
So, when Mike said, "Well, this guy found that answer on the internet, and it worked," why does it matter if it's strange? If he understands it and can explain why, does it matter at that point that it's strange? MIKE: Well, you hit on something important. If he had told me, "Well, I'm going to Google something and then talk about it," that would have changed the whole conversation, and we might have even hired the guy. That's not what happened. He passed the work off as his own. MATT: Integrity is important. MIKE: Yeah. So, there's deception there. I expect people to Google for their job [laughs]. I think we've talked about that here before. It's a key aspect of engineering; it's hard to do engineering without Google. I remember the first time I used it. My boss told me, "Go use Google." RYAN: It's going to change the nature of how we do interviews because there's no way we can...even with the camera on, and even if we ask questions that require interaction, there's no way to prevent them from looking up information on a side screen we can't see. I mean, you can have a big enough screen with multiple windows open, do it right there, and get away with it pretty easily. But I think the nature of the interview has to change. Instead of asking, "Hey, go write me a function that generates Fibonacci, or do a binary search," or something like that...because that kind of stuff is not useful, right? You can say, "Hey, I have this problem here. How do you solve it for me? And feel free to use the tools available out there. You can use ChatGPT or whatever. But you need to be able to explain why you put that solution together." I think the nature of the interview has to be like that, instead of asking what or how to do some basic stuff. We have to move beyond that type of interview. MATT: I agree with that. You know, I'm not a real big fan of code challenges anyway, but maybe it's as easy as, "Explain to me in pseudocode how you would solve this problem," right? And I'm not going to give you some academic computer science problem, because most of the time those aren't really applicable in the real world unless you're doing low-level stuff. A real-world problem. RYAN: Yeah, yeah, yeah. MIKE: I had the same thought about pseudocode as I was preparing for this session. Pseudocode goes a long way because you can't usually Google it [laughs]. But even -- RYAN: But you can put your thought process in there, and you can see it forming. And even though it doesn't compile, compiling might give the interviewee anxiety, like, oh, the code doesn't compile [chuckles], and then they just drift away from the main problem they're trying to solve. MIKE: Which is what you're after, right? You're trying to see how people think about the problem. MATT: Yes. MIKE: The solution itself is almost irrelevant [laughs]. MATT: It's the approach. I wholeheartedly agree. And I think there's a soft skill involved here as well, and that is that the more you interview people, the more you can read people, right? The more human interaction you have, the more you can read them. If I see someone on camera taking a second to answer a question, I can see if the gears are turning in their head, versus whether they're trying to type something on a keyboard quickly, or vice versa, right?
If someone's thinking something through, for the most part, you can pick up on that. But we're getting into a whole new world in the way we have to do things these days. KYLE: I have to tack on to what Matt was saying there. When I've been interviewing, I haven't necessarily cared what level a person is at, because regardless of level, it's about how much they can learn. That's what I focus on in interviews: does this person seem like someone I can teach, someone who can pick up on what I'm trying to tell them? Naturally, I'd give a junior more leeway. But like we've said, do we really care about solving these generic coding challenges, or handing them a marker and telling them to go write on the whiteboard, even on camera? I want to see what your thought process is. Can you solve this? And I almost feel like we can still use those coding examples, or whatever it is we want to use. Even historically, before AI, I would give someone homework in the sense of, "Hey, go spin up an EC2 instance, and give me a script to do that." Then, I would judge based on the quality of that script. How did they do it? Did they use Terraform? Did they use Pulumi? How repeatable is it, or did they just give me a set of manual instructions? What was the thought process behind it, and how far along are they? I was judging on those criteria because, even in today's world, you can ask AI, "Well, how do I spin up an EC2 instance?" and it's going to give you answers, right? But are those the answers you want to give during an interview? So, I think we need to take that into consideration, too, because these generic answers aren't what we're all going to be wanting. And even taking issues we're currently running into as a team and saying, "Hey, this is what I'm actually facing. If you were on my team, how would you solve this?" and seeing how they think through that would be helpful, too. MATT: One of the things, and Mike's probably sick of hearing me say this because I sound like a broken record in all of our meetings: more important to me than the ability to write code are the ability to communicate and the ability to solve problems. If you have those two things, the code can be taught. I don't see that as even being a big hurdle. But those are the things I'm really looking for, and AI just can't fake them. KYLE: Well, and in today's world, right? I mean, code can be taught. At some point, will code even be needed? That troubleshooting and the ability to learn and communicate are going to be the top skills we're looking for. MIKE: I think they already were, and this just increases it, right? KYLE: Yeah, makes it more prevalent, right? MIKE: Yeah, because you're removing some of the esoteric knowledge that was, to some degree, a waste of time [chuckles]. If you have to spend a long time learning the language, that may not be a good indication that you're using the right tools for the job. Instead, you want somebody who can use the best human skills, their ingenuity, their creativity, to come up with an effective solution, and then make tools available that make it easy to put that solution into practice.
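As a concrete illustration of the homework Kyle describes, here is a minimal sketch of one scripted answer, assuming the aws-sdk-ec2 Ruby gem. The region, AMI ID, and tag values below are placeholders, and a candidate might just as reasonably reach for Terraform or Pulumi; Kyle's point is repeatability, code over console clicks:

    # Hypothetical sketch: launch a single EC2 instance with the aws-sdk-ec2 gem.
    # Region, AMI ID, and tag values below are placeholders, not real values.
    require 'aws-sdk-ec2'

    ec2 = Aws::EC2::Resource.new(region: 'us-east-1')

    instance = ec2.create_instances(
      image_id: 'ami-0123456789abcdef0', # placeholder AMI
      instance_type: 't3.micro',
      min_count: 1,
      max_count: 1,
      tag_specifications: [{
        resource_type: 'instance',
        tags: [{ key: 'Name', value: 'interview-homework' }]
      }]
    ).first

    # Block until AWS reports the instance as running, then report its ID.
    instance.wait_until_running
    puts "Launched #{instance.id}"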
KYLE: There was an engineer I used to work with at a previous company who I always thought had an interesting mentality when he was doing interviews for his team, and this was way before AI. We were a Java shop, and I was in one of his interviews, and I asked him, "Why didn't you have any Java-related questions? Why aren't we worried about whether these people know Java?" And he said, "If I'm looking for a Java developer, that's just it: they're a developer. I'm looking for an engineer. I'm looking for somebody who can use any language, so I don't care how much Java they know." And that's always stuck with me. I understand there are situations where you want a Ruby pro or a Java pro, but there are situations where you want an engineer, someone who can use any tool, and you want to vet for that. MATT: I want those more than a specialist in any language. KYLE: Yeah, exactly. MIKE: Yeah. In the end, our job [chuckles] isn't to write code; our job is to solve problems. And the difference matters. MATT: It does. I don't need to be able to write code to make software do something, right? I need to know how to solve the problem we need to solve, and I can use code assistants to help me write the code if it's in a new language. Yes, I need an understanding of software engineering and design patterns and all of those important things. But anyone who is or wants to be a software engineer should really focus on being language agnostic, because you should be using the right tools for the job at the right time. MIKE: I don't want to give the wrong impression that you shouldn't get good at using your tools. MATT: Oh, absolutely not. MIKE: I agree with what you're saying. But one thing I've heard said is that you should get really good at at least one language, and I agree with that. It's a mark of professionalism [laughs]. If you care about the tools you use, you'll learn at least one of them really, really well. And they do differ; it does matter some. Ryan, who's here with us, is really good at functional languages, which tend to approach a problem in a different way than more procedural languages. Likewise, if I were hiring for Java and interviewing somebody completely unfamiliar with the object-oriented aspects of a language, I'd [chuckles] know there was going to be a lot more time required for them to learn the tool. So, I do think there's something to be said for knowing something, but I think that kind of expertise will probably decline somewhat in importance over the coming years as the ability to automate a lot of that work increases. MATT: Someone can be good at a language, and you see this really often in the Ruby community because, for years, all the bootcamps were Ruby, right? People know how to do things in Ruby, but they don't understand the language and what's happening behind the scenes and under the hood. So, to your point, Mike, knowing a language really well and being an expert at it means understanding what's going on and why you're doing the things you're doing. I agree with that 100%. KYLE: This goes back to a previous episode we did on learning languages. I can't remember the specific topic, but it was exactly that: when you get good at one language, it makes learning new languages that much easier.
And that will show in an interview. RYAN: I have a story about interviewing. I'm a big fan of Elon Musk [laughs]. Some people don't like him; it doesn't matter. The way the guy approaches interviews, and his work ethic, is just something I'm a big fan of. So, I follow him closely and listen to a lot of the things he posts. One of the things he said about how you weed out the people who just know how to talk really well in an interview from the people who actually did the work on a specific project is to ask them about the details of that project. He was like, "Tell me about some code you worked on in the past," and then drill into it, because the people who actually worked on it get excited about that kind of stuff. They'll go into details you didn't even care to know, and that tells him whether this person actually worked on something, or just read about it, or just heard about it. Because at a certain level, you can talk your way through a lot of these things without knowing the details, right? But when you have to actually explain the details of a particular implementation, it shows whether you really worked on that problem. So, one of the guys I interviewed in the past kept going on and on about how performance tuning was one of his favorite hobbies. It wasn't even his main job; his main job was as a developer, but he would spend his own time digging into code and fine-tuning it. And I thought, that's an interesting subject, so I started drilling into it. It was an ASP.NET web application running on Windows Server, not even on .NET Core. When I started drilling into what he'd done, he started explaining things most people wouldn't even know. He went into how he'd go to the web server and change the thread pool settings: by default, you get a pool of 100 threads, and he'd raise it to 1,000 on the web server, because he only got one dev machine to test his code on and wanted to run high-concurrency code, [inaudible 22:19] high throughput. He was like, "I figured out how to get into the config at the root and change the thread pool to allow 1,000 threads by default." And he'd push the CPU up to 100%, maxing out the processor on that server. He'd just go and do that kind of stuff. When you hear that, the enthusiasm behind the story, the excitement, is a telltale sign that he actually worked on it and knows what he's doing. I mean, of course, I hired that guy [laughter]. Nobody just goes into a web server and changes the thread pools. And when you work on Windows, the worker thread pool and the IO thread pool are different. People who haven't worked at that level don't understand what that means: one handles the incoming requests, and the other handles all the IO processing behind the scenes. But they are two different thread pools, and he could actually explain all of that. I was like, "Wow, okay, you impressed me [laughs]." That's the kind of tactic I've been using in interviews: ask that kind of question and drill into it.
And if they don't give me enough information, I'll just mark it down as, you know, he didn't really work on it. MIKE: That's actually become my favorite interview question as well: "Tell me about a project you're proud of, and go into detail about what was involved." It's interesting hearing what people are proud of, and then hearing the details. If they're like, "Oh, yeah, I don't remember it very well..." RYAN: You didn't work on it [laughter]. I would make that assumption. If you don't remember, you didn't work on it. Because if you actually spent days and nights trying to make it work, you remember why it didn't work or why it worked [laughs]. MIKE: Absolutely. MATT: Days and nights. RYAN: They might not remember it at the code level, but they know exactly what the problem was, how they solved it, and how they figured out what the problem was. Most of the time, you just have a problem; you don't know the root cause. Spending the time to figure out the root cause differentiates the people who have no problem putting in the time to solve a problem they're interested in from somebody who just shows up, puts in their hours, and goes home -- MIKE: Yeah, you just talked about that. I can think of a clever solution I was proud of that I implemented over 20 years ago [chuckles], and I couldn't tell you at all what the code did. I remember this because, a few years later, the company reached out to me, and I hadn't worked there in years. Like, "Do you remember how this worked [chuckles]?" It was in Java, and I don't think they had any Java engineers on staff, so they didn't know what was going on. They found in the commit log who wrote it, tracked me down somehow, and said, "Hey, can you help us with this?" And I talked them through it, because I still could. When I looked at the code, I didn't recognize it at all; I hadn't seen it in years and had no familiarity with it. But I remembered exactly how the solution worked. And if somebody can't do that, I think you're right: they weren't engaged, or, more likely, they weren't really the one making those decisions. MATT: They weren't interested in making a difference. And those are the kind of people we want, right? People who come in, want to make a difference, want to be innovative, and think about those tough problems. The gratification you get out of solving really tough problems for a company, even if you don't get compensated for it, is still huge. It's a big deal, and it's something to be proud of. MIKE: Yeah. We could probably all think of solutions we're really proud of, even from a long time ago. RYAN: That's why we got into tech in the first place, right? MIKE: [laughs] Yeah. RYAN: It's those kinds of little wins that keep us going [laughs]. MIKE: We all probably have some significant other in our lives who's seen us reach that moment where we solve the problem, whether it's debugging [laughs], or you finally get there. And they know [laughs]. They've seen it a few times [laughs]. MATT: Absolutely possessed, or obsessive in the process. You haven't been to sleep for two days [laughs]. RYAN: Oh man, this was a fun problem that I had to deal with.
So, I was a .NET programmer for a long time [laughs], so a lot of my fun stories come from the .NET days. I think by the time I got to Haskell, things just got easier because of Haskell [laughs]. It's crazy because when you get to Haskell, it gets boring because things just work [laughs]. I think that's -- MATT: Said no one ever, Ryan. [laughter] RYAN: I was like -- MIKE: It's true. It just takes longer to get to the working. RYAN: Yeah. Once it works, it just runs, right? You don't have to do anything. So, one day, we released these brand-new features. I used to work for an analytics company where we'd bring in customer survey data and run all kinds of analysis on it. There were all these different math equations we had to run on the data, and one guy decided that instead of using the data being passed through the pipeline, he would clone it so he could run calculations without affecting the original data going through. It sounds like a good idea, because there were a lot of calculations to run on that data. So, every single piece of data going in got replicated two or three times, based on the equations we had to run on it, and then, at the end, he'd return the result and change the initial object, right? Just by copying it three or four times, it tripled or quadrupled the amount of memory we used on the web server [laughs]. And we had to roll it back on release night [inaudible 28:57] because it had been merged, and we didn't have the kind of rollback plan we have here, where we just deploy the previous build; it was merged into the code when it was built. So, we had to undo what he did and release a new version on release night. All of a sudden, the server would just get marked down. Everything ground to a halt when we started running automation tests, because what automation tests do is run hundreds of thousands of requests through the application [laughs]. The server just exploded in the middle of the night, on release night. Oh, that was a fun problem. We got it figured out by the time everyone realized it was the memory consumption. And then, when we looked at the codebase, it was the copying. Windows and .NET didn't do a very good job of memory management there: when you make a copy of an instance, it doesn't reuse memory; it allocates a whole brand-new copy. By the time we got through it, it was three days' worth of a death march to roll back this code and retest it [laughs]. Then, when I went from .NET into Haskell, I wondered, how does that work with immutability, right? Because when you change something, you practically have to make a copy of it. You can't just change the object that was passed in; the language doesn't allow it. You have to make a copy. And I'd had a nightmare with exactly that problem, so I was like, how does that work here? But it turns out Haskell has this thing where it reuses memory behind the scenes without compromising the immutability of the data. The data coming in, you can't change it. But when you change a copy, it still references the old data under the hood, so it doesn't make a full copy of it [laughs]. MIKE: Nice. MATT: There's that passion we were talking about. [laughter] MIKE: Yes. RYAN: I just took a look at [inaudible 30:56], product right?
The way it sucked in that entire XML and just parsed out a piece of it lazily, that's the power of Haskell right there. It just got me excited. And the thing about Haskell, when it works like that, is we hardly saw any problems anymore, right? We kept rolling out [inaudible 31:14] product, and the XML...look at the Grand [inaudible 31:18] XML. It was huge, but it didn't bring down the server at all. The memory consumption was so small. But, anyway, that was a fun problem to solve [laughs]. MIKE: As Matt was saying, there's the passion. You care about that. You love the tools you're using. There are probably things you don't like about the tool, too, but there are things you just absolutely love, and you can't help but want to talk about them. I don't know why it took me so long to hear it, but some time in the last year I heard the joke, "So, how do you know somebody's going to run a marathon? They'll tell you." MATT: Oh, they'll tell you. RYAN: Yep. [laughter] MATT: Burning Man, they'll tell you. MIKE: [laughs] It's the same deal, right? If you care about it that much, you just can't help yourself. You're going to talk about it. And those are the kinds of questions that I think are evergreen. They're going to keep working ten years from now, because what you're really exploring is somebody's humanity, and that's a powerful thing. RYAN: It's a bragging right. When you solve a problem, it feels good, and the company benefits from it. It feels good. In the younger years of my life [laughs], I didn't care. I'd sleep in the office if I had to [laughs]. KYLE: I was analyzing Ryan's stories as he went along and thinking about how many questions I wouldn't have to ask him if I were interviewing him, right? Because now I know that Ryan can troubleshoot. I know he has a really good, in-depth understanding of C#. He's translated that over to another language and made it applicable there. So many of the different aspects of an interview that we look for were solved with one question: "Tell us about a time you solved a really big problem and how you felt about it." MATT: Memory management. [crosstalk 33:26] You know, in the interview I just did, I had a list of about nine questions I was going to ask. I asked my first question, and by answering it, he answered all nine of the questions I was going to ask throughout the interview. Because he was very passionate about what he did, he demonstrated his knowledge of it. And I didn't have to ask him any technical questions, because it was really clear he had a great understanding of technology and the technology he needed to use. He talked about memory leaks. He talked about resource management. He talked about interfacing with APIs, all of these things I was going to ask him about. My first question was one of the questions Mike likes to ask: "Tell me about what you've been doing and what you've done in your career that you're really excited about." And he just went off. About half an hour later, I said, "I don't have any more questions." [laughter] MIKE: I found another question that goes the same direction but flips it: "Tell me about a time something you did failed." [laughter] RYAN: It's kind of like [inaudible 34:56] break production [laughs]. You need to break production to earn that [inaudible 35:02] [laughter].
MIKE: And I ask that one: "Tell me about a time you broke production." If they say, "I haven't," you know you don't hire them, because either they haven't been in the industry very long, or they're lying. [laughter] MATT: Yep. Anyone with any amount of experience has brought down production, and we all wear that badge [laughter]. It's when you do it often that it becomes a problem. RYAN: Right. With the same scar over and over again. [laughter] MIKE: That question's great because, again, is somebody going to Google, or get on ChatGPT, to find examples of how badly they failed [laughs]? You can't make a shallow replica of it, right? You have to go into your human experience. And you can hear about the problem-solving skills. A lot of times, if they're not talking about who they collaborated with, that's suspicious, right? You broke it alone. You solved it alone. Why was nobody else involved? There's a lot that comes up. MATT: Yeah, and if you ask them, "Tell me about a time you brought down production," and part of that response isn't how they fixed production, that's also a red flag. MIKE: Yes [laughs]. RYAN: Troubleshooting is a skill that can be taught; you just have to learn it over time through the work you've done. There are people who have that instinct. You can hear them go through the problem step by step. And there are people who are like, "I don't know where to start." You've got to start somewhere. I mean, they can fix the problem if you tell them what it is, but they just don't know how to start. MATT: Yep. And starting somewhere is better than not starting at all. RYAN: Right. So, back to the AI topic with interviews: the definition of a junior dev is going to change a lot, right? We no longer need to ask people for the definition of, you know, something anymore, right [laughs]? Use ChatGPT and [inaudible 37:23] [laughs]. MIKE: Yeah, it expands your brain. RYAN: Right [laughs]? Crazy. MIKE: And there are technologies that have been doing that for millennia. Writing expanded our brains because it allowed us to expand our memories. Suddenly your memory could last not just short-term but for years, even across generations, and across geographic distance: you can send a book somewhere. So, writing expanded our brains and has had a dramatic influence on human culture. Then think about the printing press, which expanded that dramatically, and computing. The rise of computing has allowed people to expand their mental capacity, the same process, I think. Now we get to expand our functional intelligence. Some of the things for which we used to rely on amassing a deep well of knowledge are less important, and we can apply our problem-solving skills more quickly. That's a big deal, right? That's a big deal. MATT: Yeah, what we do isn't going away. It's just evolving. MIKE: Right. It's interesting you say the definition of a junior engineer changes. That's true. They don't have to have as much of the specifics, like you said. They don't even have to know it that well, because they can use tools to answer, "How do I do this?" It'll show you right away. But the problem-solving skill, that's something you have to practice. RYAN: Yeah. I don't know if you guys have done this before, but I would print out an exception stack and have somebody read it and tell me what they think.
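For illustration, here is a hypothetical Rails-style exception stack of the sort Ryan describes; every file, class, and method name in it is invented:

    NoMethodError (undefined method `total' for nil:NilClass):
      app/services/invoice_calculator.rb:42:in `subtotal'
      app/services/invoice_calculator.rb:18:in `call'
      app/controllers/invoices_controller.rb:27:in `create'

The skill being tested: walk from the error message down to the first application frame, invoice_calculator.rb line 42, and start reasoning about what was nil and why.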
I've done that before in an interview because if you can't follow that mumbo-jumbo blob of text to find out where things broke, you're not quite at the mid or senior level yet. If you're mid-level, I expect you to at least know how to read an exception stack and figure out where things broke [laughs]. MATT: I love looking at Grafana logs every day, Ryan. [laughter] RYAN: Well, that's the thing, though. When you see a problem, what do you do first, right? You hear, "I would go look at the logs," or, "I would go look at the exception that was returned." Where do they start? That's important. MIKE: I've done something similar by just showing a file. If you're doing a remote interview, pull up a file in your code, right? And say, "What does this do?" RYAN: Yeah. You can -- MIKE: And it goes a long way. Go ahead. RYAN: Yeah, you can read between the lines, and most of the time you can guess 90% of the function, unless it's a trick question. MATT: Well, if you understand code, right? MIKE: Exactly. MATT: And it weeds people out really quickly. I've seen Mike do it. I mean, we've been in so many interviews together, I probably can't even count them. But some of the answers you see with that question are really surprising. And it's a great question because it shows whether someone can follow code and understand where things are going, what the dependencies are, what's coming in and out, some key things you just have to understand. MIKE: And the questions they ask are sometimes more important than what they tell you. MATT: Almost always. KYLE: It makes me think back to an interview I had several years ago now, where somebody did that. They threw code in front of me and said, "Tell me what's wrong here." And I looked at it, and I was just like, "I have no idea." I looked a little closer, and finally I said, "I don't know that there's anything wrong with the code, but this comment doesn't make sense." They'd commented the code. And he said, "Oh, that's what I was looking for." He'd thrown code in front of me, and he was basically making sure I was paying attention to all aspects of the coding window. And I was just like, okay [laughs]. He was literally looking for whether I would read the comment, read the code, and determine if they actually fit together. Stuff like that can be helpful, too. MIKE: Nice. We've covered a lot of ground here on what matters in interviews. We've talked about not just doing a coding challenge, because a lot of the time it doesn't get to the kind of information you're looking for. And if you do, sometimes it makes sense to talk about bigger problems and do it in pseudocode, at a higher level where people talk through their thinking. Ask why rather than what. MATT: Some advice for those of you out there listening who may not have a lot of experience with this: be honest, be transparent, and ask questions. Those are the things that are going to get you hired. MIKE: And that's the main thing we've ended up focusing on here, right? Ask people to talk about what engages them, and then let them do the interview [laughs]. Let the person you're interviewing speak for themselves and reveal what they care about and what they're good at. Those kinds of probing questions work and will continue to work for a long time. I think it's been a great session.
Hopefully, you can take this with you in your own interviews. Until next time on the Acima Development Podcast.