This editable transcript was computer generated and might contain errors. People can also change the text after it was created.

David Egts: Gunnar, what's new?

Gunnar Hellekson: I want to give an unsolicited advertisement, and this is an unpaid endorsement, for the AppleCare program. Let me tell you what I did. I have this iPad that I use every day for a variety of things, and it always works like this: about three days before the AppleCare warranty is about to expire, suddenly it starts getting what I will describe as phantom touches in one corner of the screen.

David Egts: Mmm.

Gunnar Hellekson: It would behave as though I had touched it when in fact I wasn't even touching the system, right? It would scroll to the top on its own. So this thing is broken, and it's almost out of warranty, so I take it to the shop. And of course they can't reproduce the problem. Of course, right? But as my Apple friend is diagnosing the problem, he goes to reprovision the system. He's going to wipe it, clean install everything, and see if he can reproduce it.

David Egts: Okay.

Gunnar Hellekson: So he's in the middle of that operation and then realizes that he can't actually create a data connection to this iPad. He tries another cable, and he tries another machine, and then he realizes he can't even charge this iPad.

David Egts: Wow.

Gunnar Hellekson: And he's like, "I have to send this in, because I can't do anything with it here. I have to send this in, and maybe they'll give you a replacement." And I said, "Yes, please." This is a much better outcome than I was expecting, and here I am today.

David Egts: Yeah.

Gunnar Hellekson: I got a brand new shiny iPad. No charge.

David Egts: Wow. Yeah.

Gunnar Hellekson: Yeah, totally lucked out, with one day to go on the warranty. A triumph as far as I'm concerned, so that's all good.

David Egts: Yeah, you're beating the system.

Gunnar Hellekson: I beat the system. That's right, and thank you to my Apple friend.

David Egts: So are you typically an extended warranty guy? What's your position on that?

Gunnar Hellekson: Usually no, I'm not an extended warranty guy, but in the case of the Apple stuff, which tends to be heavily used, always on my person, and often dropped, it makes it worth it.

David Egts: Yes. Yeah, yeah.

Gunnar Hellekson: Because basically only one thing has to go wrong for it to pay for itself, and over the life of most of these devices they're going to break at some point for one reason or another. It's absolutely worth it.

David Egts: That's great. That's great. Yeah.

Gunnar Hellekson: Yeah. How about you, what's going on?

David Egts: I finished a book called After World. I don't know if you've heard of it.

Gunnar Hellekson: No, what's that all about?

David Egts: I recommend it if you're not depressed. In all seriousness, it's a super sad fiction book. Imagine it's a hundred years from now, there's overpopulation and everything, and the decision to save the planet was to get rid of all the people. It's got a Cormac McCarthy's The Road sort of vibe to it.

Gunnar Hellekson: Yeah, sure. Yeah.
David Egts: That plus 1984, where you had the dictionaries of Newspeak, that sort of theme worked in, and with a newer touch of AI in terms of all these sensors capturing everything. Basically, quote unquote, uploading people to this computer to relieve the environmental consequences. And when people are put into this computer simulation, it's not their consciousness per se, it's like a recording of all of their behaviors, like an agent model of them being generated.

Gunnar Hellekson: Okay.

David Egts: And this is the story of the last person, and there's an AI that is writing her story, and the AI ends up falling in love with the last person. So it's like, my gosh. It was great, but I was glad to be done with it and move on to something else. It was heavy.

Gunnar Hellekson: Yeah, that's pretty dark.

David Egts: But if you're in a good mood, try it out, that'll fix your good mood.

00:05:00

David Egts: No, it was pretty wild. Yeah. But we're going to be talking about stuff that's not nearly as depressing on the show, right?

Gunnar Hellekson: Well, as we talked about earlier, this is also pretty dark.

David Egts: Yes. Yeah. So we're going to talk about an ex-OpenAI employee who wrote a bunch of essays and put them out on the internet, and if you want to go through a 165-page PDF of all the details of where we could potentially go, it's almost like a worst case scenario. People can check it out, and we're going to unpack it and go through it.

Gunnar Hellekson: Yeah, yeah, that's right. That's right.

David Egts: But meanwhile, on the cutting room floor, if you don't have enough things to be sad about, we also have Harold Tiff's portable nuclear bomb shield.

Gunnar Hellekson: Yeah, yeah, which is adorably naive. Is that fair?

David Egts: Yeah. Basically it's something you would put in a suitcase, and I guess it's pieces of lead that you would pull out of the suitcase and turn into this shield that you would lie underneath, or stand up against the wall and have behind you, to get through a nuclear blast and all that. But it was patented in 1960.

Gunnar Hellekson: Yeah, so no making any knockoffs, right? Because I guess...

David Egts: Yeah, the patent's expired, so it's fair game for people to do their own knockoff versions of it and everything. And nowadays, with the tinfoil hat thing, you could probably do a tinfoil covering if you wanted, to keep the government out completely, right?

Gunnar Hellekson: Right. Yeah, keep you from what they call chemtrails.

David Egts: Yeah, yeah, right. Yeah. So people can check that out. And for people to get all these wonderfully sad things, where should we be sending them?

Gunnar Hellekson: They should dry their tears and go to dgshow.org. That's D as in Dave, G as in Gunnar, show.org. Yep.

David Egts: Yep. Yeah. So the website is Situational Awareness. If you go there you can get all the essays, or pull down a single PDF of all of them, 165 pages, written by a guy named Leopold Aschenbrenner, who graduated from Columbia at 19. So a smart, smart person.

Gunnar Hellekson: Yeah, he's a smart guy, right? Yeah.
David Egts: Yep. He was on the superalignment team at OpenAI, and he started seeing warning signs, and then he wrote these essays and started sharing them with people. And then, depending on who you talk to, it led to his dismissal from OpenAI. Now these things are out there and people can read them, and that's what we're going to unpack.

Gunnar Hellekson: Yes. Yeah, that's right. And before we start, Dave, we should also give our friends here some context on what Leopold is up to now, because I think that speaks directly to the topic, right?

David Egts: Yeah, yeah. I saw he's starting a venture capital fund or something like that for superintelligence, but do you have more on that?

Gunnar Hellekson: Yeah, so he's basically in the business, or he is starting a business, to go solve the problem that he describes in great detail in these essays. So, not unmotivated.

David Egts: Yes, yeah. Right. And maybe it's the other way around: he sees a market opportunity, right? Just in the same way that a barber sees a haircut in everybody.

Gunnar Hellekson: Yeah, that's right. Like we were saying earlier, the role of these essays is to invent the idea of bad breath, then sell some mouthwash.

00:10:00

David Egts: Yeah, AI bad breath. Yep. Exactly. Yeah.

Gunnar Hellekson: That's right. So where should we start?

David Egts: Where do we start? Man, it's 165 pages, with lots of graphs at the exponential scale, and I always love seeing those, right, the logarithmic scale stuff. It's pretty wild. He talks about OOMs, orders of magnitude, whether it's energy consumption or the compute needed for the AI models and stuff like that, and he graphs it all out. It's pretty interesting to see things like the progression from GPT-2 to 3 to 4, and then you plot the line out of where it's going. Part of his thought is, hey, we're talking about LLMs today, and if you look at the relative intelligence and the energy consumption and the compute needed, it plots along a pretty interesting line. And if you plot out where things could pan out, one of his premises is that we're going to go from the LLMs we're talking about today, to agents, to artificial general intelligence...

Gunnar Hellekson: AGI.

David Egts: ...and then ultimately superintelligence, which is intelligence that's smarter than we are. And his premise is that we're going to hit superintelligence around 2030, which is not far off. Yeah.

Gunnar Hellekson: Yeah, that's right. And part of the reasoning here, you've got to hold on to this tightly, right? In the same way that in the hardware business we talk about Moore's Law, right, things are going to double every 18 months, he says we're going to move orders of magnitude in a certain amount of time. He says look at the progress we've made in the past, you can kind of extrapolate that, and he's got some official-looking graphs to show this: we're not just doubling, we're making X-fold gains over certain periods of time. And together with that gain in performance and gain in efficiency also comes gains in power consumption, which we'll talk about in a second. And he also introduces this idea of unhobbling, which is really important to his thesis...

David Egts: Yes, yes.
Gunnar Hellekson: ...which is: we're making a certain number of gains now, but eventually we are going to play the game in a different way by removing some limitations that we've put in place, and unhobble these artificial intelligences. He talks about doing that through agents, right, which will allow them to interact with other programs, maybe even the physical world, in a different way, and through these agents we are going to further improve their performance, right?

David Egts: Yeah. The way I understand unhobbling, it reminds me of what you said about Moore's Law: if I do things exactly the same way, the number of transistors on a die will double every 18 months. We found that that wasn't quite true, but the reality is that people would figure out new processes and new things with materials science, and I would consider that unhobbling from a chip density standpoint. And 3D circuits, right, 3D transistors.

Gunnar Hellekson: Right.

David Egts: Instead of having a single substrate, they could actually go vertically. So that could even give you some step functions, where you make a breakthrough and it's a shortcut. And he talks too about how it's not just the humans doing the training. What happens when you start automating the AI training, and things get even faster, and you're building this flywheel?

Gunnar Hellekson: Yeah, yeah, that's right. And in the course of making this argument, he makes a really interesting point about one of the limitations of the way we train AI today. There's the enormous amount of power consumption, but it's also all basically using the same corpus, which is the internet, if you're trying to capture everything we know about the world, the good, the bad...

David Egts: Yes. Yep.

00:15:00

Gunnar Hellekson: You're going to take what amounts to a snapshot of the internet; that's kind of what you have available. And you had an interesting statistic, I wish I had it at my fingertips, about the amount of redundant data there is on the internet: you can actually boil the entire internet down to a fairly manageable size, all things considered, and every AI we have today is basically being trained on the same base data.

David Egts: Yep.

Gunnar Hellekson: Which I thought was really interesting. So in other words, the innovation is not going to come from pouring more data in, which is what we've been doing to date. We're going to have to either find new data, or find more clever ways of using the data we've already got.

David Egts: Yeah, that reminds me of Microsoft's Phi: not large language models, actually smaller language models. Their premise, and this is to get language models to run on your phone and all that, where you don't have massive amounts of memory and compute and battery power to power the LLM, is that they actually use an LLM to generate a bunch of children's books, and then use those generated children's books to train the smaller model.

Gunnar Hellekson: Right, right.

David Egts: And so it's like, do I need more tweets from Twitter, and will that make it so much better if I add more tweets from Twitter, or am I just adding more garbage? Right?

Gunnar Hellekson: Yeah.
David Egts: To me, is that an unhobbling, would you call it that, trying to figure out ways to be smart about whether more data is necessary, or whether more of the right data is what's necessary?

Gunnar Hellekson: Yeah, that's right. And the track record here is not spectacular, right? There have been AIs that have been trained on, let's call them, highly toxic data sets, like Reddit for example. You train an AI on Reddit and you are going to get a racist AI, right? That's what's going to happen. It's already happened. And I think the idea that we could somehow, from this, let's call it contaminated, dirty data, derive some cleaner data which would be useful for further training, that's something he doesn't really interrogate here. But I'm really skeptical of it as an approach, right? Kind of garbage in, garbage out, I think.

David Egts: Yeah. Do you think that the future is, instead of having the AI companies just hoovering up everything off the internet, it's, hey, I'm going to license stuff from the New York Times, or the Wall Street Journal, or the Journal of the American Medical Association, right? That solves a couple of things. It makes the models smaller and tighter and higher quality, and there's also hopefully some compensation and licensing of the content that goes back to the content creators.

Gunnar Hellekson: Right. Yeah, yeah. That's right, which may all be helpful on the road to more accurate AIs, or aligned ones anyway, or small language models. All that is true. But okay, we're already derailing his thesis here. He's saying we're going to have this explosion of algorithmic efficiencies, we're getting better kinds of data, we're going to unhobble ourselves, so maybe these things can collect data on their own, not just relying on that kind of chatbot interface that we're comfortable with now.

David Egts: Yes. Yeah.

Gunnar Hellekson: And he was saying that, in the way that AIs in just the last few years have jumped from kind of preschooler to plausibly high schooler quality, we're going to experience a similar jump in quality between now and, like you said, 2027 to 2030.

David Egts: And like he did in the one graph, that quote unquote preschooler is analogous to GPT-2, the elementary schooler to GPT-3, the smart high schooler to GPT-4. Is GPT-5 or whatever going to be the PhD student, or GPT-6? Right?

00:20:00

Gunnar Hellekson: If we jumped from preschool to high school in that amount of time, we can at least expect high school to PhD in a handful of years. That's his premise.

David Egts: And then eventually the bantering you'll be doing with the AI, it will have more education than we will, and we won't be able to tell what's true or not.

Gunnar Hellekson: Yeah. Yeah, that's right. Although there's a sleight of hand going on here too, right, where he says it's behaving as a preschooler, it's behaving as a high schooler, and then behaving as a PhD student, for example. But he kind of slips right past the place...

David Egts: It's autocomplete on steroids.
Gunnar Hellekson: ...where it's actually reasoning as a high schooler or reasoning as a PhD student. And I think that's because of the way these things work: it's just a neural network of tokens, right? It's not exercising judgment. This is pretty important. It's basically just doing a very sophisticated statistical analysis of words in order to mimic the behavior of a smart high school student, or mimic the behavior of a PhD, right?

David Egts: Yeah. Yes. And he also talks about intent, and lying, right? And also optimization, the whole paperclips story: if you give it something to maximize upon, it's going to figure out how to maximize it, unless you put rules in place that it doesn't skirt around.

Gunnar Hellekson: That's right. But from this point in the essay forward, we just take it as read that these AIs are now reasoning at that level, because I guess the argument is, who can tell the difference, right?

David Egts: Yes.

Gunnar Hellekson: Yeah, that's right. And so at this point in the essay, he's painting this picture of this open-ended growth, especially once AI begins training AI. Now we've passed over some important threshold, and we're in the realm of not just a general intelligence, an AGI, but also this explosive growth of intelligence, the superintelligence, as he refers to it, right?

Gunnar Hellekson: And this is where things start getting real wacky and actually interesting. So even if you are skeptical of everything right up until this point, as maybe you can tell I am, a little bit, this is where the thought exercise actually becomes useful, because he brings a bunch of new context into it. Let's say, for the sake of argument, that progress goes past human-level intelligence and we're actually able to create a superintelligence. What are the consequences of having access to a technology like this?

David Egts: Yes, yeah.

Gunnar Hellekson: Right. And one of the first things he talks about is this notion of the resource race required to keep feeding this cycle. You need GPUs, you need data centers, and maybe more importantly, and this is one thing he spends a lot of time on, you have to muster power in order to run the servers that will train this superintelligence.

David Egts: Yeah. Yeah, he has some graphs where he talks about how the total AI demand for power consumption will be equivalent to the total electricity generation that's expected for the United States in 2030.

Gunnar Hellekson: Because no amount of algorithmic efficiency gains and no amount of clever GPU design are going to slip us past the fact that we will always need more power to run these things, because any efficiency we win is just going to get plowed right back in. And so power becomes the limiting factor in this kind of AI economy that he's talking about.

David Egts: Yeah, and to me, I think about this like trying to put a fitted sheet on a mattress, right? You pull the one corner down, right? It's like, I'm going to have more data now, okay, I got that corner down, and then you pull the other corner, it's like, okay, I need more GPUs, and then you pull the other corner down...
...and I need more electricity, and then something pops off. So then you've got to figure out, okay, what needs to be true in order for this thing to play out.

00:25:00

Gunnar Hellekson: Right. So obviously that's a problem, right?

David Egts: It's like, I've got to make some assumptions, whether it's solar generation or nuclear power, or some breakthrough in materials science, or quantum computing or whatever. You've got to figure out how you're going to address the energy problem, right?

Gunnar Hellekson: That's right. And then at this point in the argument, he makes this kind of head fake towards, well, of course we could use a superintelligence to help us with these materials science problems, right? And this is where we go into the paperclip analogy, where you tell the AI that its goal is to create more paperclips, and pretty soon the entire world is covered in paperclips, and everything's being turned into paperclips, and paperclips are filling the paperclip factories, et cetera, right?

David Egts: Mm-hmm.

Gunnar Hellekson: Okay, so at this point he introduces a new challenge, which kind of sets up the rest of the essay, which is: do not take for granted that the US is the only country trying to do this. The US is trying to do this, but most importantly, China is trying to do this.

David Egts: Yes, Communist China. Yeah.

Gunnar Hellekson: And Communist China is not encumbered by pesky things like environmental law, or a free market, and their ability to do an industrial mobilization towards feeding this superintelligence is, in his mind, inevitable, because the benefits as he's described them are unbounded. As long as you keep feeding this machine, the machine will keep making algorithmic efficiencies, it will improve materials science research, it'll improve AI research, and it's this self-licking ice cream cone now, right? Infinite upside, literally.

David Egts: Yes, and this is the part where it starts to feel like, hey, we've sort of seen this movie before, right? As people that grew up in the eighties, the nuclear arms race and who's going to win it, and back in the forties the race to build the bomb and everything. And he ties this race that we're in right now for AI as an existential race similar to that of the nuclear bomb and nuclear energy.

Gunnar Hellekson: Yes, that's right. And just for those keeping score at home, I think we're about four or five hypotheticals deep. But okay, in for a penny, in for a pound, let's keep rolling. So if you take all of that to be true, now think about the benefits of having a superintelligence if you are a nation-state.

David Egts: Yeah, you've won or you've lost. Yeah.

Gunnar Hellekson: It is an existential threat, because the first one to reach this kind of self-licking superintelligence is going to have a permanent advantage in politics, in war, in the economy, in all the conduct of a nation-state.

David Egts: Mm, yes. And it's going to happen by 2030, so what's your status report?

Gunnar Hellekson: So it's game over at that point. And so now...
...the mission of every state that can muster itself into this fight is that you now have to get to a superintelligence before anybody else. And it's a zero-sum game, right?

David Egts: Yes. Yes, yeah, air gap.

Gunnar Hellekson: That's right. So if you are our friend Aschenbrenner, your hair is now on fire. And so the first recommendation that he makes is that we now need to start treating AGI as...

00:30:00

David Egts: Yep.

Gunnar Hellekson: ...as if it was a national security project, and that leaving this in the hands of a bunch of Silicon Valley VC-funded ding-dongs is dangerous. Instead it should be nationalized, and air gapped. He compares this with the Manhattan Project: we need to grab all these folks, send them out into a desert in New Mexico, and not let them go home until they've developed a superintelligence. And I have to say, Dave, this part of the argument didn't seem as crazy as the whole story until now, right? Because the AI research being done right now is predominantly being done in the private sector, which means it's extremely transparent to everyone. And if there is an advantage to be had from a nation-state possessing any part of this technology, shouldn't we be treating it like a weapon system? It's like a munition. Why wouldn't we treat it like a weapon system? So that part did get me thinking. What do you think? And he actually takes a swipe at open source in this part of the argument, too. Open sourcing these things is a material danger to the country, is basically his message.

David Egts: Yeah, no, and I can imagine the same vein popped out in your head, us being the open source theologians we are. And also, from working with the government, large programs that are disconnected from everything, those are hard to make successful. But the other thing that he talked about, just to go back a second, you mentioned that this shouldn't be in the hands of AI startups, because there are a lot of loose lips that could potentially sink ships. He says, hey, just go to a happy hour in San Francisco and listen, right? You're going to hear all kinds of stuff. And the cybersecurity of startups, we've all seen it, right? If you have a fixed amount of runway to get off the ground, are you going to invest it in super strong cybersecurity and not get off the runway, or are you willing to take more cybersecurity risks and be a little more fast and loose in order to get the product out the door faster?

Gunnar Hellekson: Yeah.
David Egts: Yep. Yeah. And I appreciate that he has this Silicon Valley sort of angle to the way this is written, but he looks to government programs as the way to do it, that the government should do it and everything. It's like, I can tell he's looked into it, but I don't know if he's lived that life.

Gunnar Hellekson: Yes, that's right. And I found it interesting that he kept referring to the Manhattan Project as kind of the gold standard for this, right? Which in one sense it was, but on the other hand, not a great example, because that project itself was a leaky boat, as we discovered.

David Egts: Right. Yeah. Was it Klaus Fuchs? And there were a bunch of them, right? And going back to the open source part, it's like, hey, you're basically giving the recipe away to not just the primary competitors of the United States, but terrorist organizations and smaller, crazier countries that don't like the United States can take advantage of the technology very easily too, right?

Gunnar Hellekson: Yeah. Yeah, but it is interesting that all of this points towards centralization and consolidation of the research, the materials, the means by which we would create an AGI. He clearly feels like it's much safer if it were nationalized, as opposed to a private venture.

David Egts: And he said that open source would be okay once the things are harmless. I think of radar in World War Two: now everybody has a microwave oven. So similar technology trickled down, and the open source stuff can move into consumer goods and the private sector after a while, but only after we harness the power of, and win, that superintelligence race ourselves first.

00:35:00

Gunnar Hellekson: Yes, that's right. Here I thought the race was towards superintelligence, and he's kind of imagining a world where we've reached superintelligence, but then once we've gotten there, there could be a peace dividend, right, and now everybody gets the open source version of it. Which is, I suppose, okay. Although this presumes... then wouldn't we all be worried about a second superintelligence percolating up through the open source? I don't know, I had more questions about that part of the story. But now, his next concern. So let's say we get through the resource fight, we've deregulated everything, and we've grown US electricity production by tens of percent in a short amount of time. And let's assume for a second that now we've created the USA Artificial Intelligence Commission, and they've taken over a good chunk of Arizona, and that's where we're doing this. We have solar panels out there, we're doing all this work, and now we're getting to the point where we have superintelligence. And the next problem, and this is extremely important for our author, is the problem of superalignment.

David Egts: Yes. It's basically, how do we ensure that AI systems much smarter than humans will follow human intent?

Gunnar Hellekson: Right.

David Egts: Mm. Being grounded. We ground it.
Gunnar Hellekson: Right. So, dude, can you explain superalignment?

David Egts: Yeah.

Gunnar Hellekson: Yes, that's it. And here, he draws it like this: if the robot is behaving as the preschooler we mentioned earlier, it's fairly easy to constrain it, to keep it safe, right? Because we kind of understand how it's going to behave; we can give it some simple rules that will keep us, and it, safe.

David Egts: Right, right.

Gunnar Hellekson: Yeah, that's right, and when it's grounded, we take its things away.

David Egts: Yeah. And this goes back to that After World book, where the solution of getting rid of all the humans to save the Earth was the idea of an AI. It's, "I've got your problem and your solution for climate change right here."

Gunnar Hellekson: With the tools available to us now, that's manageable. But the problem of superalignment is: what if this thing is so smart that it can outsmart us? We have every reason to think it will figure out how to lie, we have every reason to think it will deceive, and we need a way of ensuring that this thing is basically behaving according to our expectations instead of running off and doing its own thing. For example, see every sci-fi movie in the last 50 years, right? Every AI ultimately decides that all the humans have to go, because those paperclips aren't going to make themselves. And so this is his area of research. And while this is not totally crazy, especially if we decide, as it seems like we will, that we're going to use AI even as a decision support system, it seems like we should make sure we have good ways of ensuring that the AI is offering up solutions without unintended consequences, with good boundaries around the judgment we're willing to extend to it. This actually makes sense. And this is something that goes all the way back to HAL in 2001; the entire book was about this problem. HAL decided that... I don't want to spoil it for anybody, but it doesn't go well for the humans in that story. And so it is interesting that we don't have a technical solution to this problem.

David Egts: And will it be too late by then?

Gunnar Hellekson: And even if you want to argue with the author about the timeline that we're on, maybe this doesn't happen in the next five years.

00:40:00

Gunnar Hellekson: Maybe it happens in the next 50 years. In any case, number one, we don't know how to solve this technically, and it's not even clear how we would do it. And I think even more alarming for me as I'm reading: we don't even know when it will happen. When will the need for this trigger? That's also not clear, right?

David Egts: So if we were to stick to our analogies of the preschooler to elementary schooler and all that, and you being a father, you're going to reach a point where Soren's going to run faster than you, he's going to do stuff that outsmarts you, he's going to pass you up in math someday. Do you see any similar sort of thing there, or is there some comforting analogy we could take out of this, that it's okay to be associated with an intelligence that has surpassed you?
Gunnar Hellekson: And will it be too late by the time we figure it out? That's right. That's right. Yeah. So I thought about that. The first thing, and this is especially while Soren is younger, it changes as he gets older, is that I am not going to trust Soren's judgment on which mortgage I should take out. So there's a natural fence, kind of a safety mechanism, in this sense: I'm not going to give him total autonomy on some of these decisions, because I actually don't think he knows enough. He hasn't had enough experience. He doesn't have a good track record on things like this, right?

David Egts: Yeah.

Gunnar Hellekson: And it seems like this works the same way with an AI, right? They have to establish a track record of behaving correctly within certain bounds, and then you give them a little more freedom, and then a little more freedom, and a little more freedom. That's what we do with children, right?

David Egts: Right, right.

Gunnar Hellekson: And then they violate that thing, and you correct them and bring them back on the right side of the fence, and then keep going. That all seems pretty straightforward, and more or less what we're doing today, Dave. I like the story you were telling about keeping humans in the loop in weapons. Our first rule is: humans are always in the loop on the weapon systems. Okay, that sounds fine. And then we're confronted with a problem where we literally can't put a human in the loop, because humans can't react fast enough, right? But that was because we built trust with those systems over time. So, in the case you were talking about... I'll let you tell the story.

David Egts: Yeah, so it's the CIWS, the "sea-whiz" close-in weapon system. It's been on ships for decades, right? You can look up YouTube videos. They've been developed for decades, and there's no artificial intelligence behind it. It just has sensors, and it will detect something that is coming at the ship at a certain rate of speed, and if it's a certain size, let's say bigger than a cubic meter, this Gatling gun will reduce that object to something smaller than a cubic meter, shooting at it until it's smaller. But the thing is, it's that last line of defense for a ship. You flip that switch when it's like, I've got stuff coming in at me from all angles, I can't have all the people on the ship handle that. And what's funny is that I actually got to see one firsthand at the USS Missouri when I was in Hawaii, from last episode, right? And they had some model kits of the Missouri inside the captain's quarters on the tour we were on. They were able to reduce the number of guns and the number of people serving the ship because of the automated weapon systems, the CIWS, and so the Missouri actually went from having to hot bunk the crew, where two people are working while one person is sleeping, to everybody having their own bunk, because they were able to reduce the number of people on the ship.
David Egts: Yeah, yeah, it's just sensors.

Gunnar Hellekson: Yeah, right, great example, and no AI involved at all.

David Egts: But like you said, I'm sure there's been plenty of field testing, and there's this balance: what is the better net outcome, having it be autonomous and save the ship, assuming some level of risk with that, or not being autonomous and having a high probability of being sunk?

00:45:00

Gunnar Hellekson: Right, that's right. And so with this problem of superalignment, I'm trying to imagine when it triggers, right, this notion that something is going to be so superintelligent it's going to start making decisions and we'll have no way of controlling it or even judging what it's doing. And it seems to me, it's interesting thinking about it in terms of: what if it wasn't superintelligent? What if it was just terrifically stupid? What kind of controls would we have in place? And would those controls be any different than if it was superintelligent?

David Egts: Yes, and are those mistakes? I guess with the intelligence versus stupidity, I think it has a lot to do with intent, right? It could do something bad because, "I didn't intend to do that, oops," versus, "No, I really did want to get rid of all the humans on the Earth." So there has to be some sort of intent going on as well, in terms of what it's trying to optimize for. And is there really intent, or is it an optimization problem that it's converging on, and it just coldly sees that this is what it thinks the solution is?

Gunnar Hellekson: Yes, yeah, that's right. And so a really important kind of propelling force in this essay, and you mentioned it, and at this point he's mentioned it several times, is this notion that this is not some abstract hypothetical that we can afford to not address, because somewhere, someone is addressing it, and whether we like it or not we are in an arms race. Because, again, the first nation-state that reaches superintelligence is going to have this kind of permanent advantage. If that's the starting premise, then we must mobilize US electrical production, we must lock down the security of this AGI research, this AI research, we must solve the problem of superalignment, and if we don't, China will. And again, this part gets really tricky, because as long as you keep referring back to that premise, obviously we don't want China to win, so now all of these things have to happen. We must do something, and this is something, therefore we must do it.

David Egts: Yes. Yes.

Gunnar Hellekson: Not a lot of subtlety in the argument, you know what I'm saying? He's got an entire chapter heading here that says "The Free World Must Prevail."

David Egts: No, and you can imagine it's like having some lobbyists that walk the essays around Capitol Hill, right? Here's how to get some funding for some unsolicited proposals. But is cooping up everybody, going back to the open source question and all that, the traditional Manhattan Project way of researching, the right way to go?
Is it all air gapped? Do you sequester everybody, and they're not allowed to leave until they come out with an AGI? Will the free market outpace the people that are sequestered? So what's your take on that, especially with open source and all that stuff?

Gunnar Hellekson: Yeah. Again, I think it's easy to lose yourself, especially when you're reading this. It's a very good essay in the sense that you really go for a ride, you know what I mean? He very quickly passes by some of these hypotheticals that we mentioned earlier, and it's very easy to lose sight of the fact that you're now fully in hypothetical territory. But the history here, this is kind of well-trod territory, right? There are always existential threats to the country that come about as a result of technology, and we have a solution for this problem, and it's pretty easy: as soon as somebody invents something that the military finds is going to be kind of a permanent advantage for them, then they take that person's research, stuff it in a van, and...

00:50:00

David Egts: Mm-hmm.

Gunnar Hellekson: ...ship it off to a national lab, and that person isn't allowed to talk about it anymore. I saw Real Genius, I know how this works. And that has worked for everything until now, and nothing in the essay told me why it wouldn't work this time.

David Egts: Yeah, and I think that's something that maybe was an oversight, from his lack of experience with the public sector and all that, right?

Gunnar Hellekson: Yeah, could be. And it is also true that it is pretty icky for something this potentially disruptive, let's at least go that far in the hypothetical and say yes, this is going to be kind of a permanent advantage for whoever holds it, it is icky to think of private companies, especially these private companies, I think it's worth mentioning, being in control of this technology. That is cause for concern, and his underlying plea for some level of regulation, or government interest in the development of this technology, I think that skepticism is well placed. I think he's probably right.

David Egts: Yeah. Imagine a company does the whole AGI thing, right, and then they seal off all the exits. Not only are they a monopoly, but maybe the AGI plays the market so well that it ends up owning everything. It's owning all of the legislators and all that. So instead of a nation-state winning, could a corporation win, right?

Gunnar Hellekson: Yeah, right, right. Just as likely, and he does seem worried about that, but I think for the wrong reasons. He's worried about it in the sense that it's going to somehow enable our Chinese competitor, but I think he's not spending a lot of time thinking about a very different threat, which is: what if a private company held this and no one else?

David Egts: Yes. Yeah, an Andrew Carnegie, J.P. Morgan sort of thing.

Gunnar Hellekson: Yeah, yeah. Scary. Yeah.

David Egts: And the thing that got me thinking, too: ultimately, how do you see government changing if all of a sudden there is a superintelligence?
With the inefficiencies of whether it's a democracy or an autocracy, does a new form of government emerge that's like, yeah, we'll just let the superintelligence run everything, I get my UBI check and everything's great, I just watch TV all day?

Gunnar Hellekson: I mean, if you want a Butlerian Jihad, this is how you get a Butlerian Jihad, right? And I think one of the kind of fundamental mistakes he makes here is assuming that along with the capability, we will automatically invest it with authority. And those are in fact two different things. Because even today we have extremely smart scientists whose opinions we respect and who we rely on to make breakthroughs in research, et cetera, et cetera.

David Egts: Mm-hmm.

Gunnar Hellekson: But that is not the same as giving them permission or authority or the resources to act on their research, right? And I'm not sure why it would be any different in the case of an AI. I don't think we would ever give an AI a blank check. Maybe Wall Street would.

David Egts: Yeah, yeah. I look at it as, in the rudimentary case, I don't have to do arithmetic, I'll just use a pocket calculator, right? And then you just play that out: I'm not writing my essay for school, I'm just going to use ChatGPT for it. And from a democracy standpoint, it's like, do the people I vote for really represent me? Are there ways that an AI could better represent what my goals and my needs are? I don't know.

00:55:00

Gunnar Hellekson: Yeah, okay. Fair point.

David Egts: Yeah, but that leads to another, related essay, from Bruce Schneier, about how AI could potentially change democracy. And I saw he has a post out now that he's actually turning this into a book that'll come out next year, I think 2025. He goes through all the different axes.

Gunnar Hellekson: Nice.

David Egts: He says that replacing humans with AIs isn't necessarily interesting, but when an AI takes over a human task, the task changes; in particular, the potential changes over four dimensions: speed, scale, scope, and sophistication. And he breaks that down. If we apply the speed, scale, scope, and sophistication, I think of that as like the fitted sheet you're trying to put on the bed, right? If you speed up one thing, what happens to the other axes? And he talks about, what if you have AI-assisted politicians with personalized chatbots that can engage with voters right away? Or AI-assisted legislators that, good or bad, can actually find and create loopholes? Or an AI-assisted bureaucracy, where you can have an AI that is looking for ways to improve the delivery of government services. And the other thing I thought was interesting was an AI-assisted legal system, where you could use generative AI to lower the cost of legal advice. Good or bad? Does that mean that only the richest people get the best AI-generated legal advice, or does everybody get it? Or with your Miranda rights: if you cannot afford a lawyer, an AI-generated one will be provided for you, like a low-cost one.
Right, and then AI-assisted citizens that advocate for us, right, the direct democracy sort of thing.

Gunnar Hellekson: Yeah, yeah, that is true. Although, I mean, ostensibly we've survived changes like this in the past, right? The advent of the internet changed how politicians interact with their constituents, right? I don't know, is this different in kind, or is this just more of the same?

David Egts: Yeah, but I think politics can change, and the way governance works can change too, if it really becomes AI augmented. To go back to what Schneier said, it changes in terms of speed, scale, scope, and sophistication. So when the representatives got their websites with the feedback box, it went from people having to call and write letters, to the machine-generated spam they would get, and all that. And with AI, instead of it being just a copy and paste of an angry letter to a representative, it could sound very, very plausible, and every letter that goes to that congressperson is unique.

Gunnar Hellekson: Right, right. Yeah, I guess that's right. Cool. I guess the best thing to do is just crawl into our nuclear safety snail shell. And I saw there's a first aid kit kind of strapped down by your shin. I thought that was thoughtful.

David Egts: Yeah. Yep. It has a nice little mask thing on the front for your face, too. I don't know if it has a chainmail-looking thing. I kind of like it.

Gunnar Hellekson: In the end times, I guess the last thing we're thinking about is snacks.

David Egts: Yeah, but there's no room. I would have thought you'd put some snacks in there, too.

01:00:00

Gunnar Hellekson: I guess, hopefully, we've talked about this enough. Believe it or not, it is still worth reading the essay, even after this treatment. And if you want to read the essay, you should go to dgshow.org. That's D as in Dave, G as in Gunnar: dgshow.org.

David Egts: All right. Where do we go from here, Gunnar?

Gunnar Hellekson: Yes, that's right. All right, Dave, I'll see you on the other side of the singularity.

David Egts: Yeah. Yeah, let us know what you think of it, whether you agree or disagree. It's 165 pages, time well spent in terms of provoking thought.

Gunnar Hellekson: All right, bye Dave. Bye, everyone.

David Egts: Yeah. We'll see you later, Gunnar. Bye, everybody.