LaunchPod AI - Nan Yu ===

[00:00:00] Nan: What was so tedious, in terms of the user experience and the friction involved, that you used to just not do it at all? The value you were getting from it was zero, and by applying AI you end up making the cost negligible. And the result isn't that you've saved some money. The result is that the thing you weren't doing at all before, now you're doing it a huge amount. You were getting zero value from this process that theoretically could have existed, but now you're getting almost unbounded value, because you can just keep running it on repeat, just turn on the faucet and it's gonna run. That new value discovery is where the real opportunity is.

Jeff: Welcome to LaunchPod AI, the show from LogRocket where we sit down with top product and digital leaders to talk about real, practical ways they're using AI with their teams to move faster and be smarter. Today we're joined again by Nan Yu, VP of Product at Linear. I'll be honest, this conversation skewed to the future almost immediately, but there are few better than Nan to talk about what AI means for the future of product. He shares why you're wrong to think of AI as a way to reduce costs and why the real value is in enabling new work that was previously [00:01:00] impossible, the three must-haves he's deduced from the most successful AI products for producing consistently great user outcomes, and the behind-the-scenes details of Linear's new agent interaction guidelines. So here's our episode with Nan Yu.

Jeff: All right, what's up, Nan? How you doing, man? Glad to have you back on the show.

Nan: Yeah, good to be back, Jeff.

Jeff: I think you're a guy who generally needs no introduction in the circles of people who are probably gonna listen to this show. You lead product over at Linear, you're doing cool stuff on AI, you guys launched a cool set of guidelines for agent interaction, and you've generally been at the forefront for people who are really into product management. If you don't know Linear, you're not one of the cool kids, I'd say.
Jeff: But yeah, let's just dive into it. You laid out this view that, at heart, a lot of AI tools are just kind of harnesses. Can you maybe explain that for a sec?

Nan: Yeah, yeah, for sure. I think we're all very accustomed to interacting with models in a direct way now, right? All of the chat interfaces where [00:02:00] you're typing in stuff from scratch, prompting models and having them do stuff for you. But if you look at the tools that have been adopted, even the chat interfaces, that's essentially a sort of UI harness in order to utilize the underlying models. So you can think of that chat interface as a harness, and you can think of something like Cursor as a harness, right? There's all sorts of prompting and context engineering and automations and stuff being run underneath the covers every single time you say anything into Cursor. And you can think of harnesses as that embedded set of instructions, but also some sort of baked-in streams of context and data. So for something like Cursor, it would be, hey, you have your whole code base; that's part of the fundamental set of context that it's constantly examining and working over in order to achieve the result that you want. And if you switch gears and look at something else that has very good adoption, like Granola, it's a very similar story: there's an underlying data stream, there's an automated set of processes it goes through, there's almost a rigid set of instructions it utilizes in order to set correct expectations, so you know what's gonna come out the other [00:03:00] end of it. And you know that in the middle of that there's a model doing a lot of work, but it's really all of the context harnessing around the model that makes this an interesting app and an interesting workflow.

Jeff: Yeah, I think we've all had the experience, probably early on, of "let's just try to go straight at it and say, hey, do this thing." And like you said, there's no context around it, there's no direction, there's nothing to it, and the models kind of suck at just being intelligent. You got into a bit of this, but that seems to weigh into how Linear thought about what AI means in this world: how do you build great product with AI, and how do you think about incorporating it?

Nan: Even if you zoom out a little bit and think about what the process of developing an AI-powered application is...

Jeff: I think that's what everyone's trying to figure out right now.

Nan: Yeah, for sure. I think it starts off with you playing around inside a general chatbot kind of interface. You can use Claude or ChatGPT or whatever, and you have a bunch of context in front of you, in the form of documents and canned prompts and all this kind of stuff, and you're copy-pasting up a storm, [00:04:00] step by step trying stuff to get yourself a result. And in order to have any hope of a good, scalable outcome, you have to have a single example of a good result. You can prompt and reprompt and mess around with stuff all you want, but you have to get it to a point where you can say, look, it's possible to go from whatever my input sources are into the fully transformed version of the output.
And when you're messing around in a chatbot, that could just be a JSON output or something like that. But okay, now that you have that golden path traced out, the next question is, how do I make this repeatable? Because I think a lot of the disappointment people have trying to deploy AI at scale in organizations is that it's just not repeatable. Someone might have an interesting result once or twice, or someone with really good technique knows what to do, but it's so tedious, and there's a huge skill ramp you need everyone to go through, so deployment is extremely uneven. Some people get a lot of value out of it, but that's gonna follow a power law: a couple of people are gonna be super great at it, and then the median result is just gonna be fairly disappointing. So when you're building [00:05:00] apps that are AI powered, what you really wanna do is say, okay, how do I get the median outcome to be really good? And that's all about building the correct harness, so that all your user is doing is validating outputs and pointing it at the right data sources to begin with.

Jeff: I think we've all seen the LinkedIn posts around "this is the magic thing I set up in 97 steps that I was able to get rid of my entire XYZ team with and save $28 trillion or $20 million or whatever," and then they'll send it to you. I've tried a few of them and just played around with them, and generally the output is not even good at best, often. I'm sure it works for them; it's optimized for exactly the thing that one person at one team does. But building the product is: how do you make that work across a lot of different people and a lot of different contexts, and account for all that? And make sure you have the harness where you're picking the right models, giving it the right input, like you said, giving it the right JSON formats if needed, or specifying the JSON outputs to move to the next step. There's a million little steps that you gotta be really, really dialed on to make it great.

Nan: And it's real engineering work, right? I think [00:06:00] that's the other part of it. People kind of hand-wave the stuff and say, "oh look, I set up this 95-step automation process and it totally works," except it's brittle. It's not durable to changes in the environment, and you can't reproduce it across multiple people. And there's this big story people tell around cost savings. I think cost savings is kind of the most uninteresting story by itself, because cost savings is: okay, you were spending a hundred bucks and now you're spending 10. Okay, you saved $90. Great, but then what's next? Ninety bucks is not gonna change the world. What you need to do is ask the question: what was so tedious, what was so bad in terms of the user experience and the friction involved, that you used to just not do it at all? The value you were getting from it was zero. And by applying AI in a well-harnessed manner, you end up making the cost negligible, like free-ish. And the result isn't that you've saved some money. The result is that the thing you weren't doing at all before, now you're doing it a huge amount.
You were getting zero value from this process that theoretically could have existed, but now you're getting almost [00:07:00] unbounded value, because you can just keep running it on repeat, just turn on the faucet and it's gonna run. That sort of new value discovery is where I think the real opportunity is.

Jeff: This is something that's been near and dear to our hearts at LogRocket, 'cause we build session replay software. The good part is that when something breaks, you can see it, you can look at it, you can figure out why it broke, you can see what happened to the user. The bad news is, even if you have a couple million sessions a month, or even a couple tens of thousands, there's no world where you're gonna watch all of that. Not even close. I've said this since I got here in 2018: watching sessions is kind of a stupid use of time. But we worked really hard to build basically the agent that can watch the sessions for you and tell you what's important, and now you can get value out of it. You have 10 million sessions, and you can figure out what's going on in the 10 million and what you actually need to look at. Like you said, you didn't do it before 'cause it just wasn't worth the time to sit there and watch hours and hours and hours of sessions. I was playing around with it and had a query that watched 24 hours of replay in five seconds for me and found some interesting things about conversion and friction. That's the level of gain we're talking about here, that people should be looking for. Not, like you said, "can I save 90 bucks off a hundred dollars of spend?"

Nan: There's two aspects of this that I think are underappreciated. One of them is consistency. Even [00:08:00] if you say, oh, I have an unlimited budget, I'm gonna hire an army of people to watch videos, to watch replays, and report back, those hundred people you hire, those thousand people you hire, they're all gonna have a different point of view on what it means to have a user interaction or whatever it is. They're gonna come back with reports, but the reports are not gonna be internally consistent.

Jeff: Humans are humans.

Nan: Yeah. There's so much variation across each person. So even if you could bear the cost, and we're talking about stuff where you wouldn't bear the cost, but even if you could, the results wouldn't be very good. They would just have no internal consistency. If you have a model doing all of this work, it's gonna run through the same prompt every single time, it's gonna be based on the same base model, and if you tune the temperature to the right level, you're gonna have apples to apples across multiple session replays or whatever it is you're looking for. So the kind of scalability you get out of it wasn't really even achievable before, even if you had unlimited money to throw at it. And I think the other way to think about why this is a powerful and interesting thing to do is: computers, are they intelligent? Yeah, now they're intelligent. But more important than that, they're diligent. They were always really diligent. They were [00:09:00] always able to have infinite patience and always pay attention and do the same thing a thousand times or whatever it is. They're truly superhuman in terms of their diligence. But the thing that harnessing lets you do is apply the sort of diligence that computers innately come with. They can look at a session replay the moment it's finished, every time, no matter what time zone it's from, in a fully consistent manner. That's a different kind of promise than you're able to make with a human workforce trying to do the same thing.
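To make the "harness" idea and the consistency point concrete, here is a minimal sketch, not Linear's or LogRocket's implementation. The prompt, fields, and the `call_model` helper are all invented stand-ins for whatever model API and session data you actually have; the point is only that the instructions, temperature, and context injection are fixed by the harness, so every item gets identical treatment.

```python
# Minimal sketch of a "harness": fixed instructions, fixed temperature, and a
# baked-in context source, so every item gets the exact same treatment.
# `call_model` is a hypothetical stand-in for whatever LLM client you use.
from dataclasses import dataclass

SYSTEM_PROMPT = """You are reviewing a product session summary.
Return JSON with keys: "friction_points" (list of strings) and "severity" (1-5)."""

TEMPERATURE = 0.2  # kept low and constant so results stay comparable run to run


@dataclass
class SessionSummary:
    session_id: str
    transcript: str  # pre-extracted events/text for one session


def call_model(system: str, user: str, temperature: float) -> str:
    """Placeholder for a real model call (OpenAI, Anthropic, a local model, etc.)."""
    raise NotImplementedError


def analyze_session(session: SessionSummary) -> str:
    # The user never writes a prompt; the harness injects the same instructions
    # and the same kind of context every single time.
    user_message = f"Session {session.session_id}:\n{session.transcript}"
    return call_model(SYSTEM_PROMPT, user_message, TEMPERATURE)


def analyze_all(sessions: list[SessionSummary]) -> dict[str, str]:
    # "Turn on the faucet": the same analysis runs over every session,
    # which is the internal consistency a human army can't give you.
    return {s.session_id: analyze_session(s) for s in sessions}
```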
Jeff: And that applies to a lot of places, right? You can start to understand, say maybe on your guys' side, for tickets: this is this kind of ticket, or this is describing this kind of issue, or this is about this feature set. Computers were, like you said, infinitely diligent. They'd sit there, they had more patience than anyone could ever hope to have, but they were pretty linear and deterministic. Now, if you do it right, you're operating in a non-deterministic way, but you can close the bounds a bit on how wide that non-determinism goes. You're not always getting the same answer, but generally, with the same inputs, you'll get about the same answer, and you can categorize things and do work that was just untenable before. Now you can marry that diligence [00:10:00] with determinism, or kind of medium determinism if you do it right, because where you get completely wild answers, diligence is not useful. But I think that brings up an interesting point, which is: it used to be that you coded something and it was A to B to C to D. There was a workflow, it was immutable, it was what it was until you recoded it. Now, with this much wider area of where you might land from any input, it feels like at some level the discipline you practice, and I guess a lot of the teams you serve too, the goalposts kind of change. You can ship something and the UI looks great and it looks fine, but, you brought this up the other day, it's mostly just a chat box, you can just put it there. It's what comes out the other end that matters. So the job isn't done when you ship anymore. The job is now: did you do the right thing? Did you build it well? And that takes time to prove.

Nan: Yeah. I think it's hard. We haven't even developed the language around this, right? We can say these general things like, oh, what is the quality of the response...

Jeff: Yeah, and that's totally what I'm gonna blame my lack of complete sentences on right there. It's just that we haven't developed a language around it, not that I completely didn't get it together.

Nan: Yeah, I mean, in all honesty, we haven't, right? Because [00:11:00] the whole concept of shipping software has changed. Software used to be primarily about the data model and the UI. If you nailed the data model and you had the right UI, then that was basically the ceiling of what you could hope for. And now, well, the data model is a blob of text and the UI is just text output, or a very simple input-output kind of setup. None of that is very complicated; there's no new invention happening there, really. It's just a matter of whether the right text comes out. I talk to my marketing team about this all the time. They're like, hey, when can we market this thing that we're developing? I'm like, look, I don't know. We're done with the UI, we're done with hooking up all the data and things like that, so we're getting results, but we have to first learn whether the results are any good, and understand when the results are good and when they're not. And that part of it is hard to even discuss. You can talk about evals and things like that, but evals only get you so far. A big part of this is trying to run it against a lot of test data, and canary deployments and things like that, and trying to figure out, are people actually getting good use out of this?
Jeff: That actually brings up a good question. How have you guys tackled that? Because, like you said, you can [00:12:00] ship UI all you want, you can have the wrapper of the feature. This has been a thing we ran into hard. For you guys, there's probably a whole process there as well. You guys have talked publicly about how you use your own features before you launch them, but obviously there's more to it than just using your own stuff, 'cause there's a wide world out there. What does that look like?

Nan: It's a combination of techniques. Honestly, it looks a lot like how a lot of consumer applications are managed, where you have to get qualitative feedback from people who are using it: how do you feel about it, when does it make you feel good, when does it make you feel bad? There are more statistical methods where you define some kind of performance goals and you see what conversion rates and things like that are. So you have to look for moments where there's some signal of feedback from the user. And it can't be "oh, gimme a thumbs up, thumbs down." It can't be that, because no one opts into that stuff; that's not good enough. There has to be some signal about, hey, is the result that I'm providing you actually providing value? You have to instrument something that tells you that, and you might have to get creative with it. So we have a thing, for example, that helps you write project updates, and the things we can analyze [00:13:00] are things like: is the engagement on the updates that were assisted as good as on the ones that were written completely organically? What's the edit distance when we offer a suggestion? Are people accepting it verbatim, or are they going in and editing it, and what's the right edit distance? Because if people are just copy-pasting, then who knows if they like it or not. But if they're going in and tweaking little bits, like "yeah, that's mostly right, I'm just gonna tweak it a little," okay, that's kind of what we're looking for. So there are these kinds of behaviors you have to tease out of the system to really give an honest evaluation of, am I actually providing value in this instance?

Jeff: There's a guy, Des, who has done a lot of events around the country with us and has been on the podcast. He called it "tweak time," which I thought was a really interesting way of looking at it: how do you measure whether the output is right? And the more I've dove into the subject, the more you start to realize there are a lot of things where zero is a bad answer, and too high is obviously a really bad answer. Like you said, if they're going in and rewriting the whole thing, that's terrible. But if they're just copy-pasting, it's one of two things: either you got it dead-on perfect, which just seems unlikely, because people have taste, people have their own way they wanna write, or they're just copy-pasting and moving on, and it seems more likely it's that. One thing I've seen, and that we've used here a bit, has just been working deeply with customers and asking them: is this good? How much are you finding is wrong? Do you trust it?
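For a concrete picture of the "edit distance" / "tweak time" signal described here, this is a purely illustrative sketch, not Linear's actual instrumentation. It uses Python's standard-library difflib similarity ratio as a cheap stand-in for a true edit-distance metric, and the thresholds and example strings are invented.

```python
# Illustrative sketch: compare the AI-suggested text with what the user actually
# shipped, and bucket the behavior. difflib's ratio() is a similarity score
# (1.0 = identical), used here as a stand-in for edit distance; thresholds are made up.
from difflib import SequenceMatcher


def similarity(suggested: str, final: str) -> float:
    return SequenceMatcher(None, suggested, final).ratio()


def classify_edit_behavior(suggested: str, final: str) -> str:
    score = similarity(suggested, final)
    if score >= 0.98:
        # Accepted verbatim: ambiguous signal (either a perfect fit, or rubber-stamped).
        return "verbatim"
    if score >= 0.80:
        # Lightly tweaked: "mostly right, I'll adjust a little" is the sweet spot.
        return "tweaked"
    # Heavily rewritten or discarded: the suggestion probably wasn't useful.
    return "rewritten"


if __name__ == "__main__":
    suggested = "Shipped the new onboarding flow; activation is up 4% week over week."
    final = "Shipped the new onboarding flow; activation is up 4% week over week, mostly from mobile."
    print(classify_edit_behavior(suggested, final))  # -> "tweaked" with these thresholds
```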
Nan: I think the other thing, and again it's very much like consumer products, is that you kind of have to watch people use it a little bit, see what they do with it, how they interact with it, and how they work around the quirks and intricacies and things like that. Because there used to be this really cheap way to measure if something was good, which is engagement. Like, hey, I have a blogging platform; how many words did you write is a good measure of how much value you're putting into it, or how many posts did you make. How many, how many, how many. Well, "how many" isn't good anymore, right? You can generate infinite amounts of stuff with AI and LLMs, so you can't do the cheap thing and say "how many." You kind of have to watch people do stuff: okay, are they just shoving a bunch of slop into the world and calling it a day without really thinking about it, or are they actually utilizing your tools to create valuable work? And I think that's something where, [00:15:00] again, we're kind of at the beginning of being able to measure that stuff. But on the topic of session replays, observing people use it directly is a useful way to do it as well.

Jeff: I've spent innumerable hours over the past several months just talking to people and walking through it as they've gotten new feature sets into their deployment: how is it, show me how you use it, what's problematic, what has the output been? Or just, walk through it, show me how you use it, don't even think I'm here. I think nothing's ever gonna replace that. You bring up, I think, the other thing people need to think about when it comes to building AI tools. At some level we're seeing, yes, UIs are simplifying, you just have the chat box or something like that. But also, in a lot of places, the work is kind of disappearing: there's no more proof of work, no more visualization of these things happening. Are you worried at all that when it starts to be too automatic or too magical, people start to take it for granted?

Nan: It's been interesting to me to see how fast we've done the speed run from "claim everything is AI-powered" to "no, you can't even talk about that anymore because it's so table stakes."

Jeff: Like less than a year.

Nan: Yeah, yeah. It was hyper fast. And so I think that, look, at the end of the day, if you're [00:16:00] providing value for people in terms of the output, they'll buy your service. I did this thought experiment about, let's say you were trying to create some kind of lead-gen tool, for sales or something like that. We used to think about it as, oh yeah, here's an entry field that you could type stuff into, and maybe you can connect it to some sources for enrichment, like LinkedIn or something like that. But ultimately you want leads to show up. So if you turn on a tool and you say "go," and then leads start coming in and they're good leads, sure, there's all sorts of magic happening, but at the end of the day you just care about having the result. You're paying for the result. You know that thing they say about how people don't buy quarter-inch drill bits, they buy quarter-inch holes in their wall? It's true. What's way better than going to the hardware store, buying the drill bit, putting it in my drill, and doing the work? It's "hey, it'd be nice if there were a hole right there," and then it just showed up. If I could do that, that's way superior. And I think we're having this moment in software where that's basically what's happening.
Jeff: I think last time we talked, you said one of the things Linear really looks at is developer engagement, 'cause you wanna be the tool that engineers are engaged with, a core part of their day-to-day existence. And this kind of evaporates that at some level, right? How [00:17:00] do you think about that mission, or that guiding principle, alongside the fact that a lot of what AI is doing is removing the need for people to actually physically, or digitally, engage with a lot of these tools?

Nan: This relates to what I said earlier about how there's no cheap way to measure value anymore. You can't just do counting stats and hope that that's enough. You have to go a step further in understanding value, or you have to think about it and use different proxies. So when we think about how often people use Linear and what they do with it, it's not like more time spent in the tool is better for us. That's generally not what we want. These days it's a little bit different: engineers used to be working in their IDEs primarily, but now they might be prompting agents instead, even from Linear. That'll happen too. But it's not, "hey, we released a feature for dashboards and I need people to make a ton of dashboards and spend their life in dashboards" or anything like that. That's not the right way to think about the sort of value that's being gained from it. You have to back up further and say, hey, what's the natural cadence to use this? What's the concentration we expect in [00:18:00] organizations in terms of getting value from these things? And think a little bit critically about that, and understand that as long as they're getting the result they want, and as long as there's regular engagement with your tool, that's probably a good starting point. You don't have to try to over-optimize and pump up numbers, because pumping up numbers is really easy now.

Jeff: If you're enjoying LaunchPod, the best way you can show support is simple: follow the show so you never miss an episode, leave us a quick review to help more product leaders find it, and share this episode with a friend or colleague who'd get value from it. Every follow, review, and share really helps us grow.

Jeff: Two things I want to make sure we still cover. You guys put out these agent interaction guidelines, which I thought were really neat. It was a really cool set of things, because I think it's important for people to keep in mind. I won't read through the whole thing, but the one that really stuck with me, and I think I saw you post about it, is that an agent can't be held accountable. What brought you guys to put this out? There are six really tight guidelines here. Like you said, an agent can't be held accountable, but it also gets behavioral, right? An agent should disclose that it's an agent if you're interacting with it, and all these kinds of things. What was the thought behind this?
Nan: The impetus of it was that we opened up Linear as a platform [00:19:00] for agents to live and operate within, and we went into it sort of naively thinking, hey, you could just treat agents like other users. That would be a really easy on-ramp for people: you could interact with them through text the same way you interact with a coworker, and that's a very appealing story to think about. Then we started seeing a lot of developers and vendors building agents and having them live inside of Linear, and you quickly understand that there's a new set of problems everyone has to solve, that everyone's running into the same kinds of UX barriers and the same kinds of frictions and uncertainties. So we wanted to write something that says, look, this is what we've been learning from working with a lot of different developers, so if you're writing an agent for the first time and trying to intuit how it should behave, here's a set of principles to start with. And the last one, I think the most important one, is that an agent can't be held accountable. If you let an agent run loose in your system and it does a bunch of stuff that you don't like, there's no one to blame but yourself. You can't blame the agent; the agent doesn't understand responsibility. And I think that's really important to keep in mind [00:20:00] when thinking about what kinds of abilities, and what kinds of restraints, to put on these kinds of agentic systems.

Jeff: The onus is on you, right? At some level you're responsible if you create the agent and it goes haywire; you can't blame something else. But it's definitely a good doc. Like I said, I don't wanna spoil the whole thing, 'cause people should go check it out. There are good thoughts in there too if you are building with AI: read through it, because if you're trying to build agents into your product, it gives a couple of interesting points on how you should think about these things inhabiting your product. For instance, the idea that the agent should inhabit your platform natively. I think we've all seen the bolt-on thing that doesn't really do anything useful. That one seems more like a product tip than the same level as "you can't blame an agent"; it's more "build well" than a safety or blame thing.

Nan: I think some of these items, including that last one, we've tried to make a native aspect of the platform that we're building. So for the accountability one, we started off with this concept that you can assign an issue to an agent; it would just [00:21:00] inhabit the assignee spot in the same way that a person would. And we shifted towards a concept we call delegation, where the person that's responsible is you. It's Jeff, it's Nan. Even if you've asked an agent to do the work, it's just your sidekick. It shows up on the ticket, but it shows up as a secondary assignee, so to speak. That way you can understand where you're gonna be looking in terms of finding the result, but you are the primary person who is responsible for the completion of that particular issue.
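To illustrate the shape of that delegation idea, here is a hypothetical sketch; this is not Linear's actual data model or API, and every field and name is invented. The only point it encodes is the guideline itself: accountability stays with a human assignee, and an agent can only be attached as a delegate.

```python
# Hypothetical sketch of "delegation" (not Linear's actual schema): the
# accountable assignee is always a human, and an agent is only ever a delegate.
from dataclasses import dataclass
from typing import Optional


@dataclass
class User:
    id: str
    name: str
    is_agent: bool = False  # agents are flagged so the UI can disclose what they are


@dataclass
class Issue:
    id: str
    title: str
    assignee: Optional[User] = None   # the accountable human
    delegate: Optional[User] = None   # the agent doing the legwork, if any

    def delegate_to(self, agent: User) -> None:
        # Accountability never transfers: delegation requires a human assignee.
        if not agent.is_agent:
            raise ValueError("delegate_to expects an agent")
        if self.assignee is None or self.assignee.is_agent:
            raise ValueError("an issue needs a human assignee before delegating")
        self.delegate = agent


if __name__ == "__main__":
    jeff = User(id="u1", name="Jeff")
    coding_agent = User(id="a1", name="Coding Agent", is_agent=True)
    issue = Issue(id="LIN-123", title="Adjust padding on settings screen", assignee=jeff)
    issue.delegate_to(coding_agent)
    print(issue.assignee.name, "is accountable;", issue.delegate.name, "does the work")
```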
Jeff: That's a really interesting output of this general guideline, because we had someone on from a medical tech company, and what they had done is built basically the Zoom summarizer, but for doctor-patient interactions. It took all of that and did it in a HIPAA-safe way, made sure not to pass certain data, and summarized the appointment so the doctors didn't have to take notes while you were talking and could pay more attention. It recorded the details of what you as a patient said, so the doctor had a good record, but it also took note of prescriptions the doctor wanted to give and the kind of treatment [00:22:00] plans and things like that, to ensure they were applied correctly. But at the end, none of that could be entered into the actual system of record without the doctor approving it. The doctor had to look at the notes and say, yes, that's accurate. The doctor, thank goodness, had to look at the prescription and say, yes, that's correct.

Nan: These kinds of things result in changes in people's behavior, and also in a change in emphasis on what people's jobs actually are. With engineers, so much of your work now is reviewing generated code. So being a good code reviewer is now such a core skill for engineers who wanna use the latest tools, who want to be on the cutting edge and be the most productive and things like that. That was never the emphasis; the emphasis was always on generating original code, right? Hey, here's a problem, write the code for the solution, or whatever. But that's just not even a facsimile of how people's jobs get done anymore. I don't think we've caught up socially. I don't think interview processes have caught up, except maybe at very early-stage startups. And I don't think we've, as a community, really acknowledged this yet. We're starting to see it happen very quickly.

Jeff: What does this actually mean as more and more code is being produced by [00:23:00] these kinds of adjunct tools or copilots, depending on your process? It seems like there are companies, especially newer ones that were very small when this stuff happened, that were able to really adopt it; it's built in, and they're just flying along. I can't imagine it's the same at some of these bigger companies, not even because they're a little older or slower, but just because there's so much context in some of these huge code bases and older legacy tools. I don't think it's the same thing there, where engineers are just reviewing, because to write the code, the agents don't have the context window, though maybe now they're just starting to. Do you think we're gonna see a difference there, or a stratification of how people operate?

Nan: I think there's a convergence, in all honesty, because no one's really expecting these things to hold the entire codebase in context.
So it's all about, how is it able to use code search tools? How is it able to selectively think about certain files and forget other ones? Those sorts of more dynamic techniques are naturally either part of the harness or part of the fundamental behavior of the agent itself. So I think there are a lot of those kinds of things which mitigate that [00:24:00] pure context window issue. That said, there are certain patterns which are much easier for AI to reason about. If you have more direct implementations and you're using libraries that are fairly common and in the training set, versus you have a ton of weird interactions that only the one genius at your company understands, well, if you're in one of those situations, that's always gonna be hard. That's hard for people to deal with, and it's especially hard for coding agents as well.

Jeff: One more thing for the product team that we've seen is just this move to being able to go a lot faster. The time it takes to write a thousand lines of code used to be quite a while. Now you can put together a framework or an early idea for something, so the cost of failing, or the cost of failing at an early idea, is way, way lower. How have you guys looked at that, and how has that affected things over at Linear, whatever you wanna call it: vibe coding, prototyping, any implementation of a rough idea?

Nan: It's affected it in two specific ways. The first one is you don't have to distract engineers anymore. It used to be, "hey Jeff, I thought the code was supposed to work like this, but the behavior seems a little different. What's going on in there? Do we have the right case statements?" I don't have to ask you that question. I can ask a coding agent basically the same question and say, tell me exactly how this is supposed to work, 'cause I'm seeing this thing. And it'll tell you, here's what the code does here, it seems like it's doing this on purpose, or, no, it seems like there's a bug in this area because there's a missing case statement or something like that. That's something it can verify for you, and I didn't have to take somebody out of their flow state and have them do a little side quest and think about it. And small changes, trivial changes, hey, let's change some padding here, let's update some copy, those kinds of things our designers are just doing directly now. You can call that vibe coding, you can call it whatever you want. They're not even opening up an IDE or Cursor or whatever; they're literally in Linear, mentioning one of our coding agents, it could be Cursor or Codegen or Charlie or whoever, and they'll say, hey, here's what I wanna do on this screen, change the padding here, here's the color. And they can specify to their heart's content; they can be real nitpicky about stuff. And again, infinite patience: the designer can [00:26:00] be super precise and no detail will be missed. So I think one big way you're increasing productivity is that you're just not distracting your core developers, and also you're getting stuff done and getting value in a way you couldn't before.
And then there's the other thing we were talking about: reducing risk. You can try new stuff. A designer can go in there and say, hey, I wanna try this completely divergent idea, just make me a prototype for that, right in the current code base. Let's deploy a canary branch and just see how it feels. You could have asked an engineer to do that before, you could have, but they'll kind of give you a side-eye. They'll be like, okay, you really wanna try this? It's gonna take me a few hours, and is that really what we wanna do? I've got other things to do, man. And then you might've said, okay, yeah, forget about it, because maybe there's only a 10% chance it would've worked out, and you're like, look, I'm not gonna waste a day of somebody's time for a 10% good outcome. But in the good outcome case, it might be really good, so you might wanna try that. Now you just can, you can try it. And then if it feels promising, you can say, hey, look, this feels pretty promising, and if we make a couple tweaks I think we can actually take this to production. That's a much more fun conversation to have with a senior developer than "can you try [00:27:00] my harebrained idea?"

Jeff: Yeah, because that's the thing, right? If you have ten 10% ideas, one of 'em is probably a hit, if they all have high payoff. But in the past, running ten 10% ideas was probably a great way to get an engineering leader, or worse, to come down on you pretty hard.

Nan: Yeah. Yeah.

Jeff: Engineering time's expensive, or it was. But when the cost of new ideas is low, try everything, see what you can do, try the wacky idea that has a low chance but high payoff. And I think we'll see some pretty cool stuff coming out as a result.

Nan: I think this is the dynamic we were talking about, right? It's not something that you were doing already and we made it cheaper. It's something that you weren't even gonna do in the first place; you're getting value where there wasn't value before.

Jeff: Yeah, exactly. All right, one last question. One thing that has come up, I feel like, again and again, across industry, across function, everywhere, is people talking about this idea: this 98-step thing replaced this entire junior team. People are talking about, do we hire fewer junior people? Do we not hire junior people at all? Do we not need that function anymore? Because, for better or for worse, a lot of what that role often did in teams was the more manual, [00:28:00] repetitive stuff where you could make the case, all right, this is worth doing and worth having a person do, but it's not fun. So you'd throw a junior person at it, and it was part of a training path to move up. But if we remove all that, great, we save some money now, but in 10 years... I think everyone hates to think about the fact that we were all, you and I and everyone, once that bright-eyed kid coming in for the first time going, "what do I do?" We had to learn somewhere, and that was part of learning. And if you stop having those jobs, we're gonna find ourselves short of some people in about 10 years.
Nan: I think a lot of people talk about, just conceptually, this idea that, hey, AI is replacing junior-level work and it's gonna reduce the incentive to hire young people. That's kind of what they're saying, and I don't know, I'm fairly skeptical of that argument. You might find some localized instances where that happens, but we're about to have a generation of new employees, fresh out of college, who have spent their entire high school and college years with ChatGPT as just part of their life. It's a thing they took for granted when they were trying to do [00:29:00] work. I think they understand how to leverage these tools way better than the folks with more experience in the industry. It's the thing they reach for. When Tobi over at Shopify says, hey, we need to reflexively reach for AI as our tool, he's kind of forcing people to do it and putting incentives in place in their payrolls and all that kind of stuff. But for these junior folks, this is just what they do anyway; you don't need to tell 'em. I think companies are gonna find that they have a lot to learn from folks who might not have as much experience in the workforce, but who are native to using AI tools and to utilizing models in creative ways, in every aspect of their work and personal life. I think we are definitely reacting to the very early stages, but you kind of have to let the story play out a little bit. And especially if you're projecting 10 years out, in technology, projecting 10 years out is a little bit impossible.

Jeff: In tech, 10 years is basically forever. And given the accelerated pace we've seen from developments since, you know, GPT-3 came out in 2023...

Nan: Yeah, something like that.

Jeff: ...10 years sounds like almost a millennium at that point. So who knows? I generally think it'll be okay. I got my 12-year-old [00:30:00] using Lovable, and he sat down and whipped out an off-brand version of Flappy Bird in about two and a half minutes. And I was like, oh, you know this better than I do already. So there we go. Well, Nan, I always love having you come on. Thank you so much for coming. Thanks for walking through both how you guys at Linear are looking at a lot of this stuff, but also just generally what this all means: the idea of harnessing AI and building the framework around it, how we look at engagement, what it means, and how we operate and build great tools. Because I think we are still really at the beginning, and forget 10 years, 10 months down the road it's gonna be completely different. So this was great, man. If people wanna ask further questions, or even just say, hey, that was great, good job, is LinkedIn the best place to reach out, or is there a better spot?

Nan: X is probably the best place. My handle is thenanyu, T-H-E-N-A-N-Y-U. I'm pretty active, my DMs are open. You can find me there.

Jeff: Thanks for coming on. I really appreciate it. This was a blast. And yeah, hopefully keep in touch.

Nan: Yeah. Thanks, Jeff.

Jeff: Thanks. Bye.