MIKE: Hello and welcome to another episode of the Acima Development Podcast. I'm Mike. I'm hosting again today. I've got a great panel with me here today. I've got Justin, Will, Dave, and Kyle. And we are going to talk about the future, and let me start by going back to the past. So, a hundred and fifty years ago-ish, we'll say, more or less [laughs], almost everybody had a farm. It's just what you did. Maybe I could go back a little further, maybe 200 years. But if you go back within a reasonable time period, there were people in cities who had businesses, industries. It's not that industry didn't exist, but it was very much on a...basically, a home scale, right? You couldn't really have factories because nothing had been automated to that point, and they didn't have machines to make factories work. So, you might have a lot of people working together in the same building, but that's about as far as you got in terms of a large-scale industry. And, again, almost everybody worked on farms. And if you had said to somebody in that time, “Well, you know, in the near future, nobody's going to be working on farms, and [chuckles] you're all going to be living in the city. And a lot of people are just going to have jobs like standing at a counter and feeding you food,” they would laugh in your face. It was just patently ridiculous on its face. There's absolutely no way that would happen. And then, a few changes to enable automation led to an industrial revolution and World Wars, [laughs] and to the point now where somewhere between one and two percent of people actually work on farms. And the economy has just completely transformed. To take that a little bit further into the future, when I graduated from high school and was looking forward to my future career, you know, what career I might take, I never considered web development because that career didn't exist [laughter]. I couldn't have wanted to work that job because there was no web to work for, you know, there was no popular internet. I couldn't have even conceived of it because nobody had. Maybe a few innovators who thought, oh, maybe these early browsers will accomplish something. Yeah, it just plain didn't exist. And so, most [chuckles] of my career has been in a field that didn't exist when I graduated from high school, which brings us to today. If you look at areas of exponential growth, things seemed normal for a while, and then, suddenly, they're different. And exponential growth is really hard for us to reason about because we tend to think linearly. You think linear growth, you know, it'll happen gradually over time, but that's not how exponential growth works. And technological change tends to be exponential when you have people in a large group together collaborating. You can go even further back, go to the printing press, which enabled the rise of culture and modern innovation and science because people could share information with each other. And the internet has provided a similar sort of growth where we can get access to information so quickly. It enables exponential growth in technology. And there are new technologies, like AI, that have promised to change software engineering. And the thing about this also is it's really hard to predict what it's going to look like in the future, you know, because as you follow a trajectory, unexpected things happen, right? You have tipping effects that change things. 
I really wouldn't have thought in 2005 that, in a few years, we would have a phone that would make us all just retreat to ourselves, teenagers would learn to stop writing [laughter], and a total change in society where people would quit sharing the same reality because they watch different news sources. Like, this is going to happen from a phone in the next decade? What? And these things are really hard to predict, but they happen. So, we are going to go out on a limb today and start doing some speculation, and we're going to be wrong. But [laughs] -- DAVE: We hope. We hope. MIKE: [laughs] There are some things that we're probably going to see, and this is a chance to talk about what we see coming up in the future. And there's some value that comes from this because we can start planning some trajectories for what we might want to be doing in the next few years. Because there are some things I think we can see, and then there are some things we're not going to see, and we're going to go back, and we're going to laugh at ourselves [laughs]. But it's a chance to lay something down and give ourselves something to laugh at five years from now when we realize we were totally wrong, or maybe see that we saw some things. That's my lead-in. So, I could ask a leading question. First of all, any immediate thoughts? But then following that up with a question, what do you think is going to be most impactful from these new technologies? I'm going to talk specifically about AI because that seems to be the big thing right now. Five years ago, everybody thought it was crypto. It's a thing, but it's not ‘the’ thing, right [laughs]? And -- DAVE: That's a separate discussion. MIKE: That is a separate discussion. Probably not going to be relevant here because it probably isn't relevant. That's not the [inaudible 05:30] thing, but this AI thing seems like it might be, even though that's been going on since, like, the ‘50s. DAVE: I'll give the metaphor that I gave in the pre-show, which is that a year ago, people were asking me, well, programmers, “Is AI going to take over my job?” And I'm like, no, there's no way AI can understand at the high orchestration level. You can refactor a function, but you can't hold up the whole app. And a year later, like, I slowly changed over a year. I'm like, ooh, the junior programmers are in trouble, but the seniors are okay. And then, I'm like, oh, the seniors might be struggling as well. And the metaphor that I gave at the top of the call or in the pre-show was it's 1970, and we are welders watching the car industry wheel welding robots off of the truck, and things are going to change. And one of the welders at the company is going to become the robot manager who is in charge of orchestrating everything and keeping everything running, and everyone else is going to get jobs in construction. That's what my dad did when the mine shut down in the 1980s. He'd been doing deconstruction on rock underground, and he's like, well, I can go dig stuff up and tear it up above ground. But he had to pivot because the uranium industry went away, and our industry isn't going away. Like, everybody still needs their web page. Everybody...well, okay, it's 2025. Everybody needs their podcast, and it's still happening, but I genuinely think we're going to see a lot of industry change. WILL: Okay. So, I’ll say, all right, I'm hearing you. And what I have seen personally, like, I'm not a [inaudible 07:04] right? Like, I'm a technologist. I'm a future-looking dude. That's why I'm still in the business. 
And I've used ChatGPT...not ChatGPT, I've used Copilot. Copilot is the thing I've used. I have used it to great success patching up things that I find tedious and unpleasant, and you know I'm talking about unit tests. And I love it. And I love it. I love it. I could say like, "Listen, I want a test that does this, and this, and this." And it will get my mocks right. And it'll get my imports right. It'll write out the thing. And it will give me what is roughly analogous to, like, if I had a very energetic remote developer on Fiverr that was putting something out, and they didn't particularly care about the quality. They just wanted their 5 bucks. And I could take that, and I can give you something good because I know what I'm doing. And I could do this myself; I just don't want to. And I’ve found that to be very useful: a healthy 20% productivity bump, if I'm being generous and optimistic, which is my nature. I'm getting more done faster. I am not sweating. It seems like maybe at the start of the year, start of last year, let's say, right, 2024, that's where you were sitting. 2025, you're like, okay, I don't know, right, talk to me about what changed and what you're seeing, in specific terms. DAVE: I resonate with what you just said about Copilot making things easy. There's an xkcd comic, number 1168, I just Googled it, which, basically, they're sitting over a bomb, and the screen says, "In order to disarm the bomb, enter a tar command with all the switches correct from memory on the first try." And he looks up and he says, "I'm so sorry." [laughter] And nobody knows which switches tar takes, right? I do. I do. I read that, and I'm like, zxvf to unzip and...anyway, yeah, that's me. WILL: Yeah, David. Yeah. DAVE: That's me. Like, I got it. And what I use Copilot all the time for now is give me the Rails generate command to alter this table, add a unique index, because you can do this on the command line, right? You can do Rails generate migration, and then you can say, add deleted_at to customers or add extra columns to customers. And then, you can type space, and you can do deleted_at colon timestamp colon index. If you're going to say customer is going to belong to widget, you can say widget ID colon references. And that literally tells the database to set up a foreign key, and it's going to build the model and build it up. But I can't ever remember the CLI syntax. I just remember that it exists, and it's super convenient. And if you know the change that you want fits in that little DSL, it's so much faster to just write the command, hit enter, look at the file just real quick to double check it, and then run the migration, and you're done, especially when you're, like, greenfield development, right? It's real fast. And that is what I'm loving. I can sit it down, or I can say, “Give me the command to get the URL of the homepage of the project that I'm currently in the directory of in Git. I want to ask Git, git remote show origin," that sort of thing. And I love that Copilot comes back and tells me things because I tore Git all the way down to the bolts in 2010. And I wrote all these scripts to help me manage how Git works on Git version 0.7. There's a lot of development that's happened since then. And git remote show origin was the only way to get the URL, the fetch URL and push URL, and you could massage that and turn it into the homepage. 
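The commands Dave is describing look roughly like the following. This is a minimal sketch, assuming a recent Rails app with a customers table; the migration names and the widget association are illustrative, not from a real project:

    # Add a deleted_at column to customers, with an index on it
    bin/rails generate migration AddDeletedAtToCustomers deleted_at:datetime:index

    # Add a widget reference (widget_id column plus foreign key) to customers
    bin/rails generate migration AddWidgetToCustomers widget:references

    # Ask the remote server for everything it knows (goes over the network, so it's slow)
    git remote show origin

    # Read the fetch URL straight out of the local git config (instant)
    git remote get-url origin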
And git remote show will contact the origin and say, "Give me all your data or give me the information and give me the name of every branch because I'm going to have to build the remote's origin thing for this guy." So, it's like a four-second thing on a big project, on a big server. And I was passing along something, and I asked the AI, "How do I do this da, da, da?" And it mentioned, “Oh yeah, by the way, you want to use git remote," and it was like, "get-url," or something like that. And I'm like, that's a thing? It is a thing. It only gets the URL, and it's instant because it reads your git config. And I'm like, I needed this 10 years ago, but I solved it back then, and I never circled back because it worked. I love Copilot. It knows the switches, and it knows the latest thing. WILL: Yeah, okay, sure. MIKE: But what makes you change your opinion? What makes you change your opinion to think, wow, this is going to do more than giving me Git commands; this is going to be able to architect an app and build it out? What makes you think that we're going to go to that next level? Because there's a substantial jump between those two. DAVE: Oh, absolutely. WILL: I'm just going to say, give Mike's answer first, and that's fine. But then when we're done doing [inaudible 12:17] now we can do what separates the real pros and the seniors from the other stuff, which is not greenfield. This field is brown and dirty. There are barrels poking out of the ground. Nobody knows what's in them. There are rocks. There are stones. There's rebar poking up, like -- DAVE: And it's not hygienic either. It’s got [inaudible 12:43] MIKE: [laughs] WILL: [inaudible 12:43] The children must play. And I'm in here to make this into a preschool [laughs]. JUSTIN: So, I got a couple of thoughts on that. One is that the context windows for all of these tools, you know, the LLM tools, have expanded to the point where you can get an entire project into the context window. And that allows you to do, you know, a number of things, one of which is, like, it can figure out where to make the changes that you want. Like, hey, I want to move this widget 30 pixels to the right on every page. Or you can craft your query like that, and it will give you what it thinks that you should do. So, that's getting better. But having said that, I read a really cool article, probably two or three weeks ago, about how, to your point, Will, LLMs right now get you 70% of the way, and it's that last 30% that shows whether or not you are, like, a senior engineer or not. And it'll give you a project and everything, but resolving all the little issues that pop up because of hallucinations, which are hard to track down, that’s still -- DAVE: Recognizing it's a hallucination at all, yeah. JUSTIN: Yeah, and handling the edge cases. So, that, I think, is where we are currently, at least where I'm at currently with the tools. And getting there, you know, here we are 2 years into this revolution, and where are we going to be 5 years into it from now? 70% today, and in 2 years, and then, in 5 years, have we leveled off? Or are we going to continue to see steady growth? I don't know. If we see any growth at all, that'll go from 70 to 80 to 85, and that last little bit I'm hesitant to say we'll ever get there. But the people who are doing the work are going to get more and more productive, hopefully. WILL: One, are we seeing AI development accelerate so that we're seeing more progress faster, or are we seeing plateauing? 
Because I'll be honest with you, what I'm seeing personally is a plateauing and a refining of all the output. DAVE: I have some data for you. WILL: I'm not seeing exponentially...well, I mean, -- JUSTIN: We went from here to here really, really fast. But now it seems like it's just barely increasing. DAVE: Yeah. So, it’s a lot -- WILL: It feels like it’s slowing down, but I could be wrong. And the second question, and this is just really specific, is like, how many thousand lines of code of context can I get into an LLM? Because, I mean, really? My understanding of the LLM is that it's an exponential growth in complexity as your context size increases because there isn't a semantic understanding of what all this stuff means per se, per se, right? It doesn't translate into the symbols like we do. And so, you know, just, like, 2000 lines, 3000 lines, 10,000 lines, that's not, like, a linear model increase. Does anybody know what that context is? MIKE: Yeah. So, Transformers, which is the tech that's been used for all of the LLMs, it was a technology that came out, was it 10 years ago or something? And the way that those work mathematically, the cost does blow up: it grows quadratically with the context length. However, however, there are workarounds. Those are moving forward quickly because you can take groups of things and compress them and put them together. And that's really how our brains work anyway, right? We think about things hierarchically. And by approaching it hierarchically, you can actually address larger and larger windows. And now they're into millions of tokens. So, yeah, much longer than just a few thousand. DAVE: Yeah, ChatGPT, I just asked it, “What's your context?” And it's like, “4K.” And I'm like, really? I was playing with Mistral last night, and it's, like, 130k. What are you doing? But that was in the last 12 months. When I was first going out to Hugging Face last February, and it's like 2024, so a year ago, 8K was big. And the LLaMA-30b had just dropped, and everyone was freaking out about it. And now they’re like, 405b, I mean, they're just getting bigger and bigger. So, I said logarithmic, but it's the same thing as the exponential. We're just on the bad side of it, which is that if you want to double the things you can think about, or if you want AI to go up, like, 1%, you've got to double its capacity. And if you want to go up another 1%, you've got to double that capacity. So, we're on the bad side of that. But what we're seeing is, like what Mike said, specific breakthroughs. Like, somebody will attack a specific problem. In the last 12 months, they've solved...in image generation, they've solved the hand problem much better. It still happens a little bit, you know, seven fingers, or three thumbs, or the fingers are on backwards, too many arms, that sort of thing, good times. But I'm not seeing that anymore. I went out and built a character for a comic. There's an AI that will let you generate, like, graphic novel-type stuff, and you can develop characters. And it had trouble generating the same face two images in a row. You could do four in a row, but they would all look similar in different poses, but your next slot, completely different, all re-randomized. They've solved that. You can literally just say, “I like this one. This is the face. Please make everything look like that.” So, I genuinely think this is coming. The general trend line is, we just keep adding more and more and more and more. 
And I think, eventually, we're going to get to a point where the width is big enough and the speed at which it tries and tries again to basically cheat...you can have twice as much AI if you just run it twice, right? And the same idea. Cross-attention stuff. This blew my mind. I watched this a couple of months ago. The way they made it so that you could say, make me an image of, you know, a whiskey bottle made out of yarn, right, is while they were doing all the Stable Diffusion, like the pixels blurred, now resharpen, now fix it, they were doing that. At the same time, a completely different text-based AI was being told this image has a girl in a red sweater. This image has an angry woman in it. This image has a dog that is happy. And the genius move is somebody said, let's tie these two together. And so Stable Diffusion when it was blurring and sharpening was also being fed, this is what the English...and, like you said, AIs don't think the way we do. It was not, this tells you what the image looks like. No, it's like, this is the statistical probability of the words that you might hear in sequence correlated to pixels in sequence over here. But the end result, once you've thrown 24 trillion tokens at it, is that you can say, "Show me a whiskey bottle that’s, you know..." And you can tell it crazy, crazy things. And if it knows... “Show me a whiskey bottle with irritable bowel syndrome.” You're going to get an image if you do that. MIKE: [laughs] DAVE: I haven't tried it. I think it'll be funny if I do, right? That kind of thing. And it'll do it. And so, it's a trend line. I genuinely think...I guess what I'm saying is I am startled by the amount of things that are now trivially easy to do today that I was certain a year ago we wouldn't be able to do. Are we going to be out of a job? No, absolutely not. There's going to be people managing the AI. Until the robot platforms come out, humanity has a chance, but we'll be the operators running stuff. But yeah, we're going to lose stuff. Real quick tangent, my father-in-law was a Renaissance man, machine operator, machinist. He built engines in his garage as a hobby, like, literally gasoline engines that were...they would hold pressure, spark, flame. I mean, he would machine things to extremely tight tolerances. He made his own bolts on one project, literally turning them on the lathe. And he had the entire skill set from woodworking to metalwork to engineering to metallurgy. Like, he knew how to temper steel to different properties, like, he knew the full stack. And when I need something built, I go to Amazon, and I order something from China. And the Renaissance people are going away; people who worked in their garage building engines as a hobby don't exist anymore. And they were shade tree...you could find one on every street 30 years ago, and now they're dying out. And I think that's what we're going to see. People like us, the crusty old farts who actually know what transistor-transistor logic is, we're going to be surplus to requirements. The next generation of programmers are not going to be able to...they are going to detect hallucinations like we do, but they're going to do it on a completely different basis. They're not going to come to it from knowing everything that the software is built on. They're going to know it by recognizing patterns. And like, oh, yeah, last time I told you to do this, it didn't work. Or, oh, you told me this, but I need to test it, and it'll just be a habit, and it will go along. 
And then, somebody will write an agent to do it, too. JUSTIN: Go ahead, Kyle. KYLE: I was just going to say, even without AI, though, we're hitting that right now. Juniors that I've met don't necessarily know how a computer works. I mean, we're always having to adapt and expand. And that's where I'm listening to some of this, and is it something that will take our jobs and we'll be replaced? And that's where I have a hard time thinking. Well, no, not unless you let it. And what I mean by that is we're all going to adapt. We're all going to be doing it differently. Programmers, you know, they were in C, or C++, or any of those languages that were not memory safe. Those are not popular languages now. We have languages that handle that...who a few years ago would have used Stack Overflow. We have Stack Overflow. That has sped things up. I think things are just going -- JUSTIN: Actually, Stack Overflow is going away, but anyway. KYLE: Well, I'm saying, the progression, right? But I'm just saying, if you adapt to the changes, the world does move faster, but the demand really has not decreased. It's just increased. Like, do things faster, do more. And so, I don't know that the jobs that we have today will exist, but they'll be different and plentiful. DAVE: That's the way, yes, -- WILL: Well, and -- DAVE: The job you have right now is going to go away no matter what. You're not working on the same stuff today that you were two years ago. We're not going to lose our jobs and starve and die because of AI; just things are changing, right? I don't build WordPress blogs in PHP anymore. I don't solder anymore. KYLE: Thank God. DAVE: My job has moved on. Those jobs have gone away, but I am still employed and employable, and that will never change until I die, hopefully. WILL: Well, I mean, until you fall further and further behind the [inaudible 23:58]. I mean, it is a situation where if we all get 20% more efficient, then some of us are going to, I mean, the weak gazelles in the herd are going to get culled out, and we're in the middle of that. JUSTIN: I just have a question for the team. I mean, we're here in a podcast, a development podcast. What can developers do to not be forced out with the next wave of things? Ideally, we all want jobs, and this wave of more efficiency means that maybe there'll be fewer jobs. But, ideally, we want to know how to use those tools and be one of the people that is advancing and using it rather than being run over by the car. MIKE: So, something that was said a minute ago, I think it was Kyle: most people aren't writing in C anymore. But it doesn't mean people quit coding. Most people aren't writing in Assembly languages anymore either, which is what...people certainly aren't using punch cards [chuckles] very much to program a computer. There might be somebody out there that has to run a legacy machine. It could exist [chuckles], but it's very limited. But we're all still coding, but the nature of coding may change, and is changing. A couple of years ago, I read an ad...there was an ad for Anthropic, a company that's gotten really prominent in the last couple of years, where they were asking for a prompt engineer. And I think they were offering a $300,000-a-year salary, and they couldn't find anybody. DAVE: Oh wow. MIKE: Because it didn't exist. They couldn't find anybody who had experience doing that because nobody had done that yet. But, a year from now, two years from now? 
So, here's a prediction that some people can hold me to and see if I'm totally foolish. Five years from now, if you can't be good at prompting an LLM, I think you're out of a job because I think that becomes the new language. DAVE: In this industry, yeah. MIKE: In this industry. I think we may even have new programming languages that arise that are designed to be friendly to AI rather than friendly to whatever the language constraints are right now, and they'll compile down to whatever. That will fundamentally change what we're doing. And it may look a lot more like product management, where we're gathering requirements and expressing them really, really well, like a good product manager does. It may look a lot more like product management than writing loops, right? DAVE: I have a corollary prediction which is going to be really, really interesting. Source code is a thing that you run through a compiler to talk to the computer. But it's written to be read and understood by a human, which means good programmers write source code that other humans can read and can interact with. I would not be surprised in the next five years if the prompt becomes a version of source code, and if we actually start seeing people who start to say, “This prompt and that prompt, this one is more efficient, but this one is more robust. And oh, and by the way, this one, a human can't understand it. They need an agent to do it. So, it's actually bad. Let's use the other one.” You're going to see people...you need to write a prompt so that a human can understand it. The AI will turn it into transforms. WILL: I'm a hard, hard, hard no on that. I think natural human language, English, spoken English, is absolutely terrible, like, atrociously bad for the kind of precise interoperability, for the type of precise design requirements. It is atrocious for that. And we have tried to develop sort of specific language around, like, legal contracts, and it's wretched. It's like we have...we spend billions every year arguing about the specifics of legal contracts, which are not, I mean, they're not on the same level of sort of specificity and complication sort of like source code is. We couldn't do it. So, I think, I actually think source code for your modern high-level languages like Ruby is actually an excellent example of something that is source code but is very readable on a human level. I think we'll see something a lot like that. DAVE: I agree. I think a prompt language might be coming, yeah. MIKE: Yeah, now, that's exactly what I was suggesting. I think we're going to have a language that's up at the next level. We don't write machine...we don't write Assembly code. Most of us don't write compiled, a lot of us don't anyway, compiled languages anymore; we do interpreted languages. Put another layer on top of that. Now we've got the prompt language, which is just...will have to be exactly as precise as what the lower-level languages were. But it's a specification of what you want from that project. And the people who are very detail-oriented, just like today, the people who are very detail-oriented, and are able to communicate with precision what exactly they want the system to do will be successful. Because, exactly like you said, Will, if it's not precise, it's going to be wrong. And the need for precision is not going to go away, I don't think. JUSTIN: Okay, so I want to walk through, like, a typical workflow five years from now. 
So -- DAVE: Before we jump into that, can I -- JUSTIN: Let's put our imagination caps on and suppose we're working for a web retail company, who could it be, who is selling...they're creating a new form to gather data, or something like that. I don't know. They're doing a know-your-customer data entry, okay? So, you have the product manager, and then you have the engineer. And they're sitting down together, and they both have their laptops in front of them, or maybe their headsets. And the product manager says, I want to collect, you know, first name, last name, date of birth, driver's license number, SSN. And the other hidden person in the room is the AI, the Figma AI, so suppose Figma is listening, and Figma is listening to that. And what happens next? I imagine that Figma will be listening to this, and in real time Figma has, up in front of it, the web page with their company's UI. WILL: Design template, right? And one of you has got to style that, right? JUSTIN: And these things appear on the screen. [crosstalk 30:47] And these fields appear on the screen as you go along. And the engineer is not doing anything at this point. DAVE: He's steering the UI, though, in real-time, right? Like, make the social security field bigger, make this field required. JUSTIN: Yeah, exactly. And so, it's like, hey, make this field required, a little star shows up next to it. So, in real-time, they're having this conversation, and these things show up on the Figma UI with the company's contextual UI as all part of it. So, they go back and forth with the engineer and the product designer or the product owner. They go back and forth until they reach something that looks good. What happens next? WILL: Then we're talking about it. It's like, okay, this is what I need. Then you've got the senior engineer, right? And the senior engineer has a good understanding of the topology, right? The various microservices that need to get looped in, the sort of ops resources that need to be allocated. And so, he is going to talk through the architecture, right? He's going to talk through the architecture and be like, okay, I need to go, you know, the last [inaudible 31:52] JUSTIN: So, is Copilot listening? WILL: Yeah, yeah. Let me finish [laughs]. I need to go out to Galactus and get date of birth so we can send everybody a birthday card. And then, I need to go to thus and such service that says, okay, this is the thing. And all the while, boop boop, boop, I need to go here and get this. I need to go here and get this. I need to go here and get this. And the AI is saying, “Okay, you need this thing.” And then, I'm going to create an epic for this sub-team. It's like, okay, boom, I need this from this. It's going to take the meeting transcript and an understanding of the thing. It goes to the sub-team for that engineer. And then, you've got a, I would say, probably an asynchronous kind of a thing where it's like, okay, we need this. It looks like this. This is how this thing goes for you. There's still going to be people involved in the steps. But all of the incessant meetings and sync-ups and stuff like that, it knows what the org chart is. It knows what the subsystems are. It knows who the person in charge of those things is. And then, you have the follow-up meeting where, like, okay, let's say I'm Galactus, right? And I need the customer information, and then it describes that. And I say, like, okay, this is it. 
This is the...I think a lot of it is going to be just people are going to look a lot more like...they're going to look a lot more like technical program and product managers, where you're the engineering manager for the AI. But you're going to code, and you're going to need to code because there's still going to need to be a body who can say, "This is good, bad, or indifferent," you know what I mean? JUSTIN: I mean, you've got to have somebody who knows it. But when you're at the start of the project, you may not need somebody who is actually coding because you can go in. You have the OpenAI, sorry, OpenAPI specs for all the services you currently have, and they could be represented graphically somehow. And you could, as you're chatting, Copilot is going in and giving you the list of those things. And you say, "Hey, I want that one," and it gets dragged over. And you say, "Hey, I want that service," and it gets dragged over. And so, these connections that happen, I mean, theoretically, the code between the front end and the back end and then the back end into another back end, all of that could be done automatically. You do have to have the engineer to make sure that it’s making sense. But it'll go in. It'll check the parameters that exist already. And then, you'll decide, oh, we don't have a data store for this yet. Go make, based on my GitHub template, go make a new project that has a service that stores these in our standard data store and that will follow all our standards and everything else. This is sounding very hypothetical, right? It's crazy. Maybe not 5 years, but maybe 10. DAVE: You haven't said anything that I don't think we could build right now with a tiger team, legitimately. JUSTIN: Great. DAVE: I genuinely don't feel...To Will's point a minute ago, you were saying because it's fuzzy and we need precision, it's never going to happen. And I see the same problem you do, but I arrive at the opposite conclusion, which is that AI's superpower is that you can give it a typo. You can be imprecise. You can be sloppy. And it's actually smart enough to not just untangle what you said but infer things three leaps away from it, and I think that's going to be a superpower. I've been playing with storytelling LLMs, and they really, like, Mistral is not going to generate JSON. It can't do it. It can't even keep quotes lined up in English text, but it can tell a compelling emotional story, that kind of thing. And it's really, really good at making inferences. This is the last thing I'll say about [inaudible 36:06], and I’ll let you guys take the floor. But the prompt engineer...the stuff that I'm doing right now is I will write a story where a husband and wife are arguing, and he gets mad, and he wants to walk away. And she's like, “You can't walk away from me.” She grabs him by the arm and pulls him back. And I'm like, no, she would not do that. That's wrong for the character. So, I have to tell the AI, “She's not going to grab him,” and then regenerate. “You can't walk away, grabs him by the arm.” I'm like, why did you ignore that? “Well, the narrative tension needed to escalate.” I'm like, oh, okay. And you end up going to the wife and then changing. Oh, I see she's got vindictive as the first trait on her personality. I'm going to move that down three and move supportive up to the top. Regenerate it. And all of a sudden, it's a completely different story. She's like, "Don't walk away from me. I need you to understand me." And it's a completely different tone to it. 
You can give the AI 7500 tokens of this is a young boy, and he's got some interesting exotic powers, and he's gone to a school to learn. But you can take a very small AI with very low context, and you can say, "This guy is Harry Potter," and it just got 10,000 tokens' worth of inferences out of that. That's what prompt engineering is, I think...it's not headed there. It's there right now, at least in the fiction category. And I think when you marry that to Copilot, it's going to be terrifying because you'll have sloppy input. You'll accept slop but emit precision. That's going to be a good day. WILL: I feel like one thing that I think will really...I think a quantum leap in terms of AI's ability to generate software is the ability to sort of...so, you have a self-training model. I build these things, right? And it's like, well, does it render the webpage, or does it not? So, one of my superpowers...this is something that I do all the time because, sadly, I have progressed to a point in my career where people give me just the worst, just the worst jobs. I get nothing but the absolute worst jobs, worst jobs [laughter]. And it's like one of my models, right? This is a simple model, and I got it when I was developing really wretched stuff, embedded system device drivers, which are hell on earth to debug. But this is just a simple intellectual way to make nearly anything work, and you just debug from the known to the unknown. And what I'll do is I'll have some component. I don't even want to tell you what I had to work on today. It would curl your hair. It would be terrible. But I had this front-end monstrosity. It wasn't working. And I'm like, okay, listen, we're just going to gut it. We're going to gut it down. I'm going to tear this thing down to the studs, and it's going to print, “Hello, World.” That's literally all it did, like, print “Hello World,” and I got that working. And then, I started adding things in brick by brick by brick. DAVE: You got a working piece. WILL: Until I found the thing that was broken, right? An LLM could do that, an LLM could absolutely do that. And then, you have the model, which is training itself. I'm training myself. It's like, okay, I'll try this. I'll try this. I'll try this. I'll try this. And we could find ourselves in a situation where a lot of my job is just wretched, ditch-digging work where I'll get up into the muck and just stay there until it goes. Well, an AI could do that job, and I could just be like, this thing doesn't compile. Why doesn't it compile? Make it compile. And the AI could be like, okay, I'll just tear it down to the studs, and I'll add things back in line by line until it works. And that's a slow process, but you could just be like, all right, I'm going to bed. Call me back in the morning. Let me see what you found. And I'll say another thing. You would need this thing. But one of the things I'm grappling with right now at my large, big box retailer, which is not Rent-A-Center. It's not Rent-A-Center. I bet it's better than this [inaudible 40:30] DAVE: So, you're working for a competitor? How dare you? WILL: Yeah, anyway. So, I'm [inaudible 40:34] at a big box retailer with a lot of legacy problems. And one of the things that I'm grappling with right now is migrations, where you're migrating one bad service to another thing, right? And so, you migrate it over. 
You run into real problems, real serious issues when you don't have the operational discipline to finish one migration before starting the next migration; then your problems compound in a truly brutal way. Well, those migrations are extraordinarily painful and the source of a lot of brownfield misery. But if you had an AI that could generate products and iterate itself and say, “Okay, here's the old one, and here's the new one. Are they the same? No? Fix it.” And that's sort of when you have the template, what you could do is you could execute these massive migrations automagically, close to automagically. Then you could really start to...and it is a dumb ditch-digging miserable process, but that's the kind of thing where you could accelerate development tremendously. MIKE: So, you're talking about what sounds like reinforcement learning from human feedback. DAVE: There's an AI listening to this podcast a year from now, as it's ingesting it, and it's going, I wonder if I need to be thinking about how the whole system hangs together [laughter]. MIKE: Reinforcement learning from human feedback is what allows ChatGPT to work. They train it, and it's all based on terabytes of data, right? And then, they show it to a human: here's what I think you meant. And they say, “No, you can go this way,” and it learns from that. And they do that through iterations. So, it practices generating the text and then gets human feedback, and based on that, over a number of iterations, it improves to better match what humans want. You're talking about taking one system, matching it to another one, and it gets it wrong. It gets some feedback on that and then keeps iterating to get it right. There exist algorithms today to help with that. Also, some of the newer algorithms that exist are no longer bound by a fixed amount of calculation. That is, it comes up with a chain of reasoning where it comes up with the next step, and then the next step, and the next step, which has an indefinite number of steps, right? DAVE: Oh wow. MIKE: And it will generate based on that approach. The latest OpenAI models, for example, are built that way. And that changes the way that these problems get approached to be closer to the kind of human-driven algorithm that you're describing. So, I think that we...again, on this one, I'm not making a prediction, because I think it's really hard to say whether that really will be the key. But there is a chance, because there are hundreds of billions of dollars being poured into this, there is a chance with all of that mental energy, all of that compute going into it, that somebody is going to find the algorithm that will continue this, because they're continuing right now to find new algorithms that push it forward. JUSTIN: Yeah, we're so early in this. DAVE: And where we focus our attention, it goes fast. Like, the grass is greener where you water it, and that's really, really true right now in the AI side of things. They're taking programmers that are complaining about their jobs and saying, “What are they complaining about? How can we help with this?” I love TDD because it's a completely different way of testing. It needs a different name because people think, oh, I'm just going to take the tests that are at the end, and I'm just going to write them at the beginning. It's not like that. It's a completely different discipline entirely. It's a completely different experience. But most programmers don't do that; they prefer to test after, and there are all kinds of anti-patterns that start to happen as a result of this. 
And the Copilot stuff, and Gemini, and stuff that's out there is optimized to solve that pain. It will write your crappy after-tests for you, but nobody's working on the TDD thing. So, hang on, I just had a business idea. I got to write something down [laughter]. MIKE: So, we've talked a lot about what we see things looking like in five years, higher-level languages relying on the AI to do a lot of the grunt work. Potentially, developing some of the processes we use to deal with ugly code, with code that is in bad shape, in a bad environment, bad all-around situation, industry, you name it, it'll help you with those things. And if we do, are jobs going away? Yes? No? It seems like, generally, the answer is no. We still need people building stuff. But our expertise will have to change. DAVE: Where do we need people right now? That's going to shift. We're just going to be sidling side to side to side to side. And a hundred years from now, only 1% of us will be farmers. And they'll be producing the food for the world, but in AI, yeah. WILL: Yeah, I don't worry about myself so much just because I've been doing this for a long time, and I know a lot of things that I don't even know how I know them. I just know them. I just know. And I don't need to know a language. You could drop me in a Go project tomorrow, and if I had Copilot or something like that, I'd be fine. I'd be fine. It's like, how do I set it? Oh, that's the goroutine? Okay, that's fine. How do I do a loop? How do I do a thread? Okay. How do I make my API calls? Okay. I'm done. Moving, moving. Chop, chop, chop. No problem. I worry more about the junior developers that are coming online right now because I know how I learned it, and that sort of, like, old-growth forest, free-rein kind of a thing is not going to be tolerated. I feel like we're eating our seed corn to a large degree in this industry right now. And yeah, Justin, go ahead. JUSTIN: Yeah, so this hits close to home because my oldest child is graduating from university a year from now with a computer science degree. And chatting with him, the degree that he got from Utah State University, which is at the northern end of Utah, was very much a classic computer science degree. It had more in common with my computer science degree than with, perhaps, the prompt engineering that you see. So, he's deep in algorithms. He's deep in the classics. And he used C++ of all things, memory allocation, all that, you know, close-to-the-metal type of stuff, not terribly close to the metal. He's not writing in Fortran or things like that. But at the same time, I'm chatting with him and trying to figure out, okay, what are you going to come out and be able to do? And he's having trouble right now getting an internship, whereas a couple of years ago, he'd be snapped up right away. And not only that, he's, like, at the top of his class. He's keeping his scholarship, so he's got, like, a 3.85 GPA, which is hard to do as a computer science major. But yeah, we're trying to get him an internship, and I'm worried about him getting his first job, so... MIKE: I have a nephew in the exact same situation. He graduated six, eight months ago, still doesn't have a job. DAVE: I think we have another podcast topic there because nobody wants to hire the core thing. They're looking for people that are amazing on their side hustles. Side hustle is the wrong word, the other things, right? You don't hire somebody who meets the job requirement and says, "I guess I can do the thing you want to do." 
You hire the person who comes in and says, "Yes, I can do that. Also, I care about this, and I have a degree in, you know, I have a Juris Doctor because I went to law school, or I have this other thing, or I've worked in finance." That's what you hire for. And when you're very, very young, the side hustle is completely different. You want to hire the kid who stays late without being asked or cleans up his desk or her desk without being asked to. Or one of the best personality tests I've ever seen...I was in church, and one of the senior church officials was standing there, and there was a gum wrapper on the floor of the chapel. And the senior church rep went over and bent down. And I'm in a church where hierarchy is kind of important. Somebody that high, you're not supposed to make that person pick up litter. And he did. And everyone in the room went, “Oh wow.” And that's not in the job req for the thing he was doing, right? But that's the difference between getting this job and having a career that will go. KYLE: There was something else that Justin brought up, too, and that was the C++. Now, I went as a computer engineer, so I did C++ as well, but I'm aware that other universities do C++, but others will do C# or Java, which is one step higher, I guess. JUSTIN: That was in there, too, but yeah. KYLE: But, yeah, no, the point that I'm trying to make here is that we might need to, I mean, this might be an institutional problem, too, because a lot of this is theory. They're teaching theory-based learning, and it's all algorithms and stuff like that. We're not actually teaching engineers coding. We're not teaching them prompting and stuff like that. So, it might be a problem at the fundamental, like, schooling level before they can get the jobs. MIKE: This is not a new concern. For years, people said, “Well, we need to give people more practical, hands-on stuff, trade school stuff for engineers rather than the theory.” And I might push back on that. I think it was Will who said we're eating our own seed corn. I think that's true. And I think that's closer to the problem because, honestly, people who understand what's going on and can understand what happens when the prompt doesn't give you your answer are going to be incredibly valuable, I think, in the coming years. I think that we need to have that pipeline of people who actually get the things that are happening at a lower level. And they're not going to be that useful right up front. They're going to have to take some time to learn it. They're going to have to learn those skills, but you need both if you want to be really successful. I do think that there is a shortage right now in the industry of companies that are willing to invest in their future. There's some uncertainty. I think it also has to do with the change in inflation. When you get different interest rates, it changes where the companies want to invest. We had really low interest rates for a long time, so companies would invest in their people. Interest rates go up, and they say, "Oh, we can make money other ways," so they drop all the people. DAVE: [inaudible 51:43] operational, yeah. MIKE: I think that's a lot of the problem in the computer science industry right now. It's just that a lot of big tech is firing because they can make money through investment. However,...go ahead. WILL: I mean, I'd say having gone pretty far past a regular old bachelor's degree, there are a lot of professional fields where you're not working with a bachelor's degree. You don't work as a doctor. 
You don't work as a psychologist. You don't work in all kinds of things without post-bachelor's education. KYLE: Post engineering. WILL: [inaudible 52:21] is a master's degree minimum field to be in. You don't know shit with a bachelor's degree. I'm sorry, you just don't. It's just too much. You can be smart, and you can be trained. And most people with a bachelor's degree and a good GPA are smart, and hardworking, and motivated, and absolutely can be trained. And after they get that sort of finishing school, which we've been dumping off on industry, right? Historically, like, the first...it's just no. The first year out of a bachelor's you kind of suck, and you do what you can, and we give you jobs that you can take on, but we're investing in you with the expectation that you will pay us back later. A lot of managers have exploited that thing, and they underpay their people when they start producing, and then that's kind of a self-defeating cycle. But that was how it was always supposed to...how it was always intended to work. And I think that's just how it's going to go. But, I mean, honestly, we've already seen them come for the boot campers. First, they came for the boot campers, and I said nothing, for I was not a boot camper. Then they came for the undergraduate interns, and I said nothing. That was not [laughs], you know. And then, they came for the senior engineers, and there was no one left [laughs]. MIKE: No one left because the AIs are already doing everything else [chuckles]. WILL: Listen, you know, if I'm that guy...I've got to make it...well, no, that's not true. I've got to make it another 20 years, man [laughs]. I don't know, maybe I'll be the guy shutting the lights off, you know, walking out the door. But I'm consoled by the fact that having seen the incredible leaps and bounds the industry has taken over my career, they just want more. The more we give them, the more they want, the more we give them, the hungrier they get. [inaudible 54:30] DAVE: This stuff is cool. WILL: Yeah, this stuff is cool. And it's an arms race. It's an arms race, right? Oh, I have a hundred engineers. I double their productivity, right? Now I'm taking ground on my competitor. So, I've got features they don't have. I'm in markets that they're not in. I'm doing better things, right? And then, your competitor has to hire another hundred engineers to keep up because they just want more and more and more and more and more. I think that, if we want to get really, really negative and pessimistic, that is something to keep in mind. You know, Rent-A-Center, I don't know who you guys...who's you guys’ number one competitor? You guys got a competitor who you’re worried about. MIKE: There's a couple of them. WILL: What's that? MIKE: Oh, there's a couple of them. I guess we could name them [laughs]. There's -- WILL: Well, no, whoever it is. If you guys do something better, well, then they've got to do it, too. And if they do something better than you, you've got to get...and so, it's one of those things where engineering departments, they're pretty durable. Head count is pretty durable. DAVE: Is it though? Because the AI is taking over call centers. We're seeing call centers get gutted by, like, 90%. And that's right below the interns, and then they're going to come for the interns. The key thing, though, is that if AIs take over the interns, we're no longer taking interns and producing junior programmers and that pipeline of improving people. 
And that goes back to...a lot of the Renaissance people, I think, are going to go away, but also, we're going to get shot in the foot. We're going to have to find a way to backfill those basic skills or to obviate them so that you don't need them. KYLE: And does that kind of go back to what Will was pointing out? Does that then mean that future engineers will need a master's or a doctorate degree in order to get into the field? DAVE: I don't know. MIKE: There may be a cycle. There may be a time when we get greater credentials for a while in this kind of arms race. [inaudible 56:36] I'm good. Eventually, you're going to run out of people. They're going to run out of the junior engineers. And I think it's possible that those requirements come back down, and you get closer to the boot camps where, wow, we need to actually backfill these people. We need people who know what they're doing. These tools work, but we need some people to run them. The pendulum could swing the other way. DAVE: And all the people that are not going to be working in programming in 15 years are going to be working at something else because their programming needs are going to be solved. So, they're going to have an opportunity to do other things. We couldn't have been programmers 100 years ago, even if we had the computers, because we were too busy farming because we were starving. When you say they're coming for my job, that is the same sentence as they freed me up to do some really cool stuff later. But if you don't know what that cool stuff is, it's a negative statement. Like, they came for my job. KYLE: I mean, the automated checkout stands were going to take over, right? DAVE: Yeah. KYLE: And they haven't fully taken over. People are pushing back on those even, so we don't know where they're going to go. The other thing that we haven't really pointed out, too, is, I don't think we're seeing much of it yet, but policies and regulations. Are those coming, or are we just going to let this go, you know, unyielded? MIKE: It depends on the political -- WILL: All I could say is, to my mind, it is absolutely stunning. So, my wife works in healthcare, right? She works in healthcare. She's a therapist. And it is stunning, the lack of regulation and the power that just a regular, old software developer, high school diploma, maybe [chuckles], is just rocking out on the street. I'm not trying to say, like, you have to have this, and this, and this to be good at your job, or anything of the sort. But there is nothing, nothing, nothing whatsoever [inaudible 58:44] you know. People are doing this stuff in India, where it's just like, is this guy even accountable to the United States' criminal code? MIKE: Right. [laughs] DAVE: Nope. We know the answer to that one. Yeah, that's actually a good point. WILL: Maybe, maybe not, you know. DAVE: It's something I was going to say at the top of the call, but it's relevant. The problem that I think AI hasn't solved, and is a long way away from solving, is being trustworthy for a mission-critical, low-resolution task. I do not want an AI in my pacemaker trying to decide or hallucinate when it should send that pulse. I don't want an AI in my CPAP. There is an AI in my CPAP, but it's not deciding valvular. It's looking at the entire night going, we're coming up on the time when he breathes easier. Let's take the pressure down so that we're not doing it for him. It is in there. But the mission-critical part of this, right, where you've got machines that will dispense insulin, and if it's the wrong dose, you die. 
That kind of mission-critical stuff, I would not toss to a sketchy LLM without a whole bunch of agent LLMs going, is this reasonable? And even then, I'd still want somebody doing post-mortems on the failures. WILL: But even on that point...so, I've worked tangentially in certified software, doing static analysis tools for making sure that your avionics controllers and stuff like that are going to do the job that they need to do. And I think AI can contribute tremendously to the effectiveness of that stuff. DAVE: Oh, absolutely. WILL: Like, through better static analysis tools, through more robust and durable test suites, just, like, really, really, really, really testing the ever-living snot out of all of this stuff and letting people do better work. And yeah, okay, they could generate some code, but we have to run it through, and we have to test it. And you're having smart people be more effective, be more efficient. And I think that's fine. But, man, I don't know. I think, David, I think you might have sold me, I don't know. Because as I'm walking through the problems, and I'm saying like, okay, this is what I'm grappling with, right? And I hadn't spent a lot of time thinking about how I would build a tool, or a Transformer, like, tooling around shaping this problem into something that the AI could mill down. MIKE: Well, that's actually probably a good place to end this, talking about avionics. Let's bring this in for a landing. DAVE: I like it. MIKE: We asked some questions: where are things going to go? I think we kind of all have a lot of agreement that they're going to change a lot. We don't know exactly what that change is going to look like, but there are some suggestions as to the direction things are going: more effective tools, allowing us to do some things more effectively, and maybe even fundamentally transforming our industry. And there's absolutely a need to stay on top of it. That's never changed. It's been the case all along in this industry, but perhaps even more so now. And it may go in crazy ways we haven't thought of, you know, it's not going to be subtle changes. It may be pretty dramatic ones. That's not necessarily something to be scared of. It's potentially something to be excited about, but we need to be prepared and watching. Keep your finger in the wind: which way is this going? Or else you might end up in trouble. Thank you. Hopefully, listeners, you've gotten something from this and have some ideas to chew on as to where to take your career. And until next time on the Acima Development Podcast.