Scott Orton: So they're going to want deterministic control. So that's really speaking to the AI containment. As far as why we would use hardware enforcement for doing that, I'll answer that one with an analogy: if you wanted to confine a tiger, you wouldn't build the cage out of meat.

Carolyn Ford: I'm Carolyn Ford. Today's episode is our 2026 predictions conversation, not crystal ball forecasting, but signals already flashing red across cybersecurity, defense, and truth itself. I'm joined by Brian Carter, Scott Orton, Ralph Spada, and Michael Blake from Owl Cyber Defense to walk through what they see coming next, from the slow collapse of content trust to AI needing to be treated less like helpful software and more like a privileged insider who absolutely can't be left alone. Along the way, we cover a hacker in a Power Ranger suit who wipes out an entire system in minutes, and why Battlestar Galactica may have been onto something when it decided the safest network was the one you physically rip out of the wall. It's funny, until you realize these aren't hypotheticals. These are signals, and in 2026, security leaders won't lose because they lacked tools. They will lose because they trusted the wrong ones. These are Owl Cyber Defense's 2026 predictions: what's now, what's next, and what's urgent. Let's get into it. Ralph, in a world of bots, deepfakes, and spoofed apps, you're predicting a sharp turn toward deep identity assurance, powered by behavioral biometrics, content provenance, and cryptographic validation. Say more about this prediction.

________________

Ralph: Yeah, it is kind of a fun, exciting time that we live in now with artificial intelligence. You don't have to work at Skywalker Ranch anymore to make really sophisticated computer special effects.
You know, we've had the tools with computers for decades to do really convincing photoshopped pictures, as we used to call them, or the CGI in the movies. But now everyone just calls it AI, because there are some really great tools and they're out there. But what that means is it's not just in the movie theater where you're seeing this stuff. It's on your Instagram feed. And one place I've started to see this already, actually since the last time we talked about this, is that my iHeartRadio ads are saying "guaranteed human." I think we're going to see a lot of that through a lot of different venues: when a CNN clip is shared on Instagram, you're going to see some kind of marker that that content creator really did generate it. Just in my Uber ride the other day, I was talking to the driver, and he was telling me about all these different scams he was getting, all because when he got a phone call, the only identity he could go on was who they said they were. And yet we have really robust communication technologies built into WhatsApp and iMessage. We have these tools, and I think we're going to see them used more prevalently to assure that when someone calls us, it is the person they claim to be, or that when we see content on the Internet, or we want to use an app, it's coming from a place that we know and trust, and that it really is coming from that place. It's using these really powerful cryptographic technologies or biometric technologies to know that this content we're seeing, or this communication we're getting, is from who it's actually from, because we're not going to be able to trust anything otherwise. And so the people who do want to feed us that content are going to want to use tools to assure us, "Hey, this isn't an AI. This is actually coming from us."
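Ralph's point about cryptographic assurance can be made concrete with a small sketch. Real provenance schemes (C2PA-style content credentials, for example) attach asymmetric signatures tied to a publisher's identity; the standard-library-only sketch below substitutes a symmetric HMAC for simplicity, and the key and clip bytes are illustrative assumptions, not a real publishing workflow:

```python
import hmac
import hashlib

# Hypothetical publisher secret. Real provenance systems use asymmetric
# keys, so viewers verify with a public key and never hold a signing secret.
PUBLISHER_KEY = b"example-publisher-key"

def sign_content(content: bytes) -> str:
    """Produce the tag a publisher would attach to its content."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check the content really came from the key holder, unmodified."""
    expected = hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

clip = b"news clip shared on a social feed"
tag = sign_content(clip)
authentic = verify_content(clip, tag)              # untouched clip verifies
forged = verify_content(b"tampered " + clip, tag)  # altered clip fails
```

The point of the sketch is the asymmetry of trust: anyone can check the tag, but only the key holder can produce one, which is exactly the "guaranteed human / actually coming from us" assurance Ralph describes.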
________________

Carolyn Ford: And we're already seeing it; you're hearing it in your iHeartRadio ads. You know what really pisses me off? The animal videos on social media. I'm just like, this makes me so mad, because of course I want to believe it, and it pulls on the heartstrings. There are still a lot of tells with those, but I think those tells are getting fewer and fewer. And it should be required that it's marked somehow. So are you predicting that the markings will just show up if these tools are used, like it will have to have a watermark if these sorts of tools are used?

________________

Ralph: Yeah, it's interesting. We have been seeing different AI companies say, "Oh, don't worry about our AI. We're going to watermark it." But that's not what our concerns are. If I create a video with Gemini and share it with my friends, the concern isn't that my friends don't believe I created that video. I think the concern is legitimate content creators, like traditional broadcasting companies or traditional publishers. If they still want to get their voice out there, with the endorsement of marketing and all of that, they're going to need to, as part of their brand management, build those tools to verify who they are. And you're right, today there are tells. But really, today there are only tells for a generation of us that's growing up alongside this technology, and we can kind of recognize it. I can guarantee you that my father-in-law does not see the same tells. And so there are going to be target markets for a lot of that content creation where, if creators want to keep the trust of their customers and their consumers, they're going to need to find another way to do it. They're going to need a way to say, "Hey, guaranteed human," or "You're actually buying it from me."
And so it's going to be important to provide the technologies to keep that trust there.

________________

Carolyn Ford: Yeah. Scott, we've talked a little bit about this. I wonder if you want to jump in here, and Brian and Michael, too, but like I said, Scott and I have had a couple of conversations around this.

________________

Scott Orton: Well, I certainly agree that's the trend. We're going to get to a place of needing some form of digital ID. It'll be interesting; I have no idea what form that would take. I wouldn't be shocked if it somehow came out of all the blockchain research going on. The underlying technologies there are pretty applicable. But yeah, I agree. Even in our family, we have a shared code word: hey, if you get a call and don't think it's me, ask for the code word.

________________

Carolyn Ford: Mm hmm. Well, and even what we're doing right now. Like, how do you know it's me? How do I know it's you? Right?

________________

Scott Orton: Right. And I think, to Ralph's point, it's really the malicious actor. You're not going to be able to count on people to watermark things when they're acting maliciously. It's interesting: we think about threats all the time. We work with folks in the commercial community, in manufacturing environments, power environments, and we discuss ransomware as a threat, and people rightly say, well, you could also turn the plant off. But that's a point about motives, right? There's no motive for an actor right now to turn off a power plant. There is motive for them to hold it ransom. In a war, or in a conflict, there will be motive to turn them off. And that's really what I worry about going forward. But anyway, we've gone off topic.

Carolyn Ford: Michael, Brian, anything to add here?
________________

Michael Blake: I have a 14-year-old who's on social media way too much, and I'll talk to him about current events, and he's like, "I don't think that's happening. I haven't seen it on TikTok." So TikTok is his source of news. And my concern is, yeah, there are tells, but we have a generation consuming video clips of 10 seconds or less, and I don't know that you can cognitively pick up on many of the tells in that time. It's going to be such a short sampling that it really creates an opening to mislead either younger or older people. My mother-in-law is the same. She will send me reels that are obviously AI and say, "I want one of these pets," and I'll be like, "That's obviously AI. That doesn't exist." And so my concern is for those people who spend a lot of time on those platforms that are designed to keep you there with those short, short clips. For real-world events with real-time news coming in, from sports and things like that, it's going to be really easy to feed misinformation at a high volume and really quickly influence a large section of the population.

________________

Carolyn Ford: Yeah, and you have the factor of believing what you want to believe, right? Your mother-in-law really wants one of those pets, and so she doesn't want to think about, oh, this might not actually be a real animal. Brian?

Brian Carter: Not to go to the darker aspects of things, right? But if you look at it from, like, a psyop perspective, from a nation-state actor or an individual that's trying to incite chaos in a country like America, it would be very easy to do those types of things with AI. So, to the points that everybody made, trust in the information is extremely important.
I think we're going to talk a lot about that here today, but I agree with everything that everyone said so far.

________________

Carolyn Ford: So before we leave this prediction, Ralph, what are some promising developments that you know of that could actually make digital identities and material trustworthy?

________________

Ralph: Well, the first one is the prevalence of crypto. Like Scott was saying, blockchain was designed exactly for this kind of distributed authentication and public record of what is truth in transactions. So now: what's truth in information? And the other is, you know, iHeartRadio saying "guaranteed human." That shows me there is an interest among content creators in showing that they are generating legitimate content and that they can be trusted.

________________

Carolyn Ford: Okay, well, we're going to stay with you, Ralph. Your other prediction that made it into the report is another big one: that in 2026, the federal government will make a deliberate shift away from legacy contractors toward non-incumbent innovators that are agile, secure, and fast-moving. Unpack that one for me.

________________

Ralph: Yeah, well, this one is kind of a continuation, because we already saw this starting to happen last year, but I think we'll continue to see even more of it this year as we accelerate on cybersecurity capabilities and getting them into the field, and drone technology and all that. And really, it's two long-term trends that I think are pretty obvious. Fifty years ago, in the Apollo program era, technology was being driven by the government, by defense, by aerospace. Today, technology is being driven by who can build the best smartphone and get it to market. So technology, all the way from microelectronics to software development, really is being driven by the commercial world.
And the other side of that, another trend we've been seeing, is: where are the battlefield lines being drawn? One could argue, and Scott alluded to this earlier, that there is a battlefield in the middle of Europe that doesn't really affect my day to day, but that battlefield is also online. At some point, there's going to be motive to carry out those cybersecurity attacks, and really, we're already seeing a lot of them. And the ones defending against them are non-traditional defense companies: cloud providers like Microsoft, Amazon, Google. They're almost on the front lines of defense, too. So what I think we're going to see more of in the coming year is that providing defense solutions, national security solutions, is going to lean on the fact that technology is being driven by the commercial world, and the fact that the people with real-world battlefield experience now include cybersecurity providers like Google, Microsoft, and Amazon. What that enables is small teams that can very specifically understand a national security problem, build on top of commercially available technology, whether it's hardware or cloud tech, and get something innovative into the government's hands really quickly. So you don't necessarily need giant, multi-thousand-person teams coming up with the next battleship, although those might be useful, too, especially if they're gold. You can have a small number of engineers build something really powerful that defends the plant when there's a motive to turn it off.

________________

Carolyn Ford: So you said we've already seen this start to happen. What are you saying is going to happen in 2026? It's going to happen faster? The procurement process within the government is going to adapt? Brian, you're nodding your head. Are you already seeing this? Okay.
________________

Ralph: I think we're definitely going to see the procurement process changing, especially some of the language around wanting to see an 85% solution and working towards 100%. Something really interesting and powerful around that is that, with the commercial technology a small team can build on, they can start with a very secure commercial technology baseline and work up from that 85% functional capability. What that means is a small team can provide the government a very secure, capable hardware platform and evolve the functional capability that needs to be protected. And the government is going to be very interested. They've already said they're interested in doing that, doing it in more places, and doing it quickly, because it gets capability into the field quickly.

________________

Carolyn Ford: Brian, you are really focused on weapons systems. How are you seeing this play out in that arena?

Brian Carter: That's a good question. For the past 15 years or so, Silicon Valley, which rose up in the early 2000s, has been a huge force in technology development, just like Ralph was talking about before. So the government created things like DIU for direct involvement, and all the research organizations in the government really started to look at the technology being created by companies with very large budgets doing very impactful things in the world. Instead of running away from that, they started to embrace it as much as they can and bring some of those things closer to the Defense Department. So what we have now is companies that are really structured around bridging that more Silicon Valley kind of mindset directly into the Defense Department.
Places like Anduril and Palantir, and some of those folks, came out of Silicon Valley. They really are interested in defense and started companies directed at it, because they like the technology and that kind of product and creativity. So I think the government was kind of exhausted by the '80s and '90s, by big primes trapping them into vendor lock-in and other things, and now we're seeing companies bringing impactful technologies, the government seeing the value over the last 10, 15 years, and now we're seeing policy, we're seeing executive orders, especially within this administration, to really run towards that. So I think we're going to see a lot of those things over the next couple of years. We're going to see people taking a little bit more risk, not afraid to fail, like Ralph said, putting some 80% solutions out there and seeing what's most effective. Obviously, in recent engagements in Ukraine and other places, people have come up with very simple solutions to complex problems, and they've been very effective. So I think people understand that that's the way we need to go to catch up, because it's a generally accepted thought within the Department of War now that we are behind, and that we're going to have to do some things that take some risks to catch up.

________________

Michael Blake: I think cultural change is hard, and getting the old guard to change their behavior is going to be difficult. They're used to being very risk averse. Look at an entire legal department that exists just to evaluate contracts and assess: do we want to pursue this? Is it going to introduce risk to the company? A lot of the smaller companies are either naive or willing to take that risk, and so they're going to innovate faster. And our adversaries don't have those problems.
They are kind of under the direction of the government, and they don't have all the different layers that we have. They just wake up in the morning thinking, how do we defeat America? Then they go to bed, and they wake up the next morning: how do we defeat America? That's what they're thinking about. They're not thinking about, how do I rewrite acquisition rules so that I can make something happen in two years versus five? Those are the problems we're trying to solve. So we have a long way to go, and I think the inertia of the larger primes is so great that there are going to be leapfrogs by these smaller companies as far as technology goes.

________________

Carolyn Ford: Right. I'm reminded of a 30 Rock episode featuring Matthew Broderick, where Alec Baldwin's character goes to work in the government, and they need to get pens, and it takes us through the procurement process. If you haven't seen that episode, go find it. That story arc is really funny. All right, Scott, you get the last word on this one.

________________

Scott Orton: Oh, well, one of my predictions for last year was that the defense primes would start the process of breaking up, and we saw some evidence of that last year, and it will continue. I think the area of space in particular will be one where you'll see lots of spin-outs. My very first job in the Pentagon was in the Director of Space and Nuclear Deterrence, and at that time, I could say over 90% of what was in space was managed by that office. Today, that's less than 1%. The Department of Defense is not the primary user, and so those things will start to spin out. The other is the cost structure. If the government is paying you for your time, if they're paying you for engineering time to build a jet, then what happens is you add administration and bureaucracy to that cost structure, because you can build it into the cost of that person's time.
If you go to a model where you start paying for outcomes, paying for results, then all of a sudden that administration becomes a disadvantage. And the easiest and quickest way to get rid of it is to start spinning things out and breaking them up.

Carolyn Ford: Mm hmm. Okay, well, let's stick with you, Scott, and move on to your prediction. You argued that we need to stop treating AI like a software tool and start treating it like a privileged insider. You predict that 2026 marks the move from guardrails to containment, especially through hardware-enforced limits. Talk about this one a little bit.

________________

Scott Orton: Yeah, so let me start by unpacking and demystifying AI a little bit. The main point to focus on with AI is that it's a model. And the big distinction there is that we have become very used to algorithms. I mean, "algo" has even become a term that's used pretty broadly. In an algorithm, I have a defined input and I get a defined output, and I can test to that and ensure it's always true. In the case of a model, I have an input and I have an unpredictable output, and that's by design. I want that unpredictable output in that case. That's the AI type system we're talking about. So containing those: because I have an unpredictable output, I need some deterministic rules around that output. When we start to think about things that have to do with medical, with financial, with legal, somebody has liability somewhere in that process. If a doctor makes use of an AI result and then gives you advice based on that AI result, that doctor is responsible and liable for the advice they provide. And so they're going to want deterministic control. So that's really speaking to the AI containment.
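Scott's algorithm-versus-model distinction, and the deterministic rules he wants wrapped around an unpredictable output, can be sketched in a few lines. The model stand-in and the dosage policy below are illustrative assumptions for this conversation, not any particular product's containment layer:

```python
import random

def model(prompt: str) -> str:
    # Stand-in for an AI model: the output is deliberately unpredictable.
    return random.choice([
        "take 200 mg daily",
        "dosage unclear; consult a physician",
        "take 5000 mg daily",  # an unsafe hallucination
    ])

def deterministic_guard(output: str) -> str:
    # Algorithmic containment: defined input, defined output, testable.
    # Hypothetical rule: any concrete dosage claim must be held for review.
    if "mg" in output and "consult" not in output:
        return "BLOCKED: dosage advice requires clinician review"
    return output

# However unpredictable the model is, the guard's behavior is fully
# predictable and can be tested exhaustively against its rules.
result = deterministic_guard(model("ibuprofen dose?"))
```

The design point mirrors the discussion: you do not try to make the model predictable, you put a small, testable algorithm between its output and anything with liability attached.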
As far as why we would use hardware enforcement for doing that, I'll answer that one with an analogy: if you wanted to confine a tiger, you wouldn't build the cage out of meat.

________________

Carolyn Ford: There you go. Yeah. It's a really good analogy. That works. All right, Ralph, you have thoughts on this one?

________________

Ralph Spada: Well, I'm going to pick up on that last one. Certainly, if you want to have security, it needs to be anchored in hardware. I don't need to point out the innumerable software vulnerabilities that inevitably happen. And I think we've already seen, in a lot of the AI model research, some models producing unpredictable results that look like they're trying to escape some sort of containment, or take actions they weren't designed or intended to take. So yes, I definitely agree with hardware-anchored security and setting limits on where that model can reach.

________________

Carolyn Ford: Yeah. Michael?

________________

Michael Blake: Yeah, I think there are a number of technologies that are extremely dated and give us a false sense of security when we use them. VPNs have been around several decades, and somehow your corporate connection gives you a VPN and you feel safe: okay, I'm okay to connect to the network with this VPN. The reality is VPNs are about as useful as firewalls in preventing the threats that are out there now. There was just a video posted where a hacker in her Power Ranger costume erased a white supremacist dating site from existence using AI in about five minutes. And it was pretty impressive. One person...

________________

Carolyn Ford: What are you saying?

________________

Michael Blake: It wasn't a team. She orchestrated a set of AI tools to socially engineer websites and get people's account information.
And once it had collected enough information, it detonated: it deleted their registries, the websites, the database of all the users, all their email accounts, their domain registration. Wiped it all out.

________________

Carolyn Ford: How big of a database are we talking? Like, thousands?

________________

Michael Blake: Yeah.

________________

Carolyn Ford: I'm still hung up on the Power Ranger suit.

Michael Blake: Well, she didn't want her identity revealed, and she was being recorded at this conference, so she's in this Power Ranger costume while she's executing all this stuff. It's pretty impressive what she was able to do with these AI tools in just a few minutes. And all of those things she went after, the websites, the email, were protected at different levels by firewalls or VPNs, and they were compromised very quickly. So I think there's a lot of benefit, not only defensively from AI, but offensively from AI. And from the combat world, you know: you punch a wrestler and you wrestle a striker. You're not going to be able to defeat these very complex AI attack suites without some type of very functional and agile AI defensive suite.

________________

Brian Carter: Yeah, I'm still getting past the Power Ranger. So, obviously, there's a lot of talk about software vulnerabilities, network vulnerabilities, with access to software or front ends of systems, accessible things that you can get to through the network, right? Having hardware stops or hardware-enabled defense behind those things means an attacker has to have either physical access or some additional attack vector on top of all these complex things.
If AI is making an entire set of those attack vectors that much simpler, then it just amplifies the need for something that's resident on site, or physically inaccessible, or has some other embedded secret that's not reachable unless you bring a lot more of a nation-state kind of capability to go after things, right? And I think it will accelerate, just like anything else we see technology-wise. We think about threats today with our current glasses on, using previous types of attacks, and I don't think we're really understanding the scale of advancement that's going to come from the automation that comes with AI. Having these hard stops will really help slow that down. You're never going to stop it, right? Just like everything else, it's going to continue to accelerate and become more complex, but at least we'll put in some roadblocks, speed bumps, or whatever you want to call them, from a hardware aspect.

________________

Carolyn Ford: Right. Because if you want to contain a tiger, you can't build the cage out of meat. Well, there's got to be a Power Ranger suit in there somewhere, too. I'll work on that one. I think more of us could be wearing superhero suits.

________________

Carolyn Ford: All right. Well, I'm going to stay with you, Brian, and talk about your prediction. You predict that defense communications will need to evolve beyond just encryption toward real-time, hardware-enforced trust that's rugged enough for battlefield conditions and coalition ops. Will you unpack this prediction for us?

Brian Carter: Sure. This is a subject that I'm pretty passionate about, and I think most people who have been working in the defense base for a long period of time are too. We've been in wartime environments for probably the past 25 years, right?
Everybody's been deep in Enduring Freedom and then Iraqi Freedom, fighting basically a COIN fight where we're trying to find individuals and go after them, and now we're moving toward more big world war kinds of scenarios. One thing is pretty consistent between the two: our weapon systems have not been heavily networked in the past. We have very proprietary networking for these things. One of the desires of the Department of War is to move toward taking in commercial technologies, looking at things like IP traffic, and passing things back and forth in digital space. As these things become more extensively networked, trust in the source of the information being conveyed to you really starts to move up that list of critical needs, especially, like we've been talking about, with the addition of AI. So now you have digital data flying around in real time, in a very short decision space. It's moving from human to human, machine to machine, machine to human, all the different aspects of that. And the integrity of that data is critical, so that the person in the field is making the correct decision, which is usually a decision that involves someone's life or wellbeing, with real information. So how do you determine, within an already stressed, time-intensive environment, if something is real? We can only maintain what we usually call our decision advantage by having this built-in trust, with minimal delay and timely information. It can't be latent because of the technology, so we can't put a bunch of things in the middle of the data moving around that make it latent.
So that inherent trust from a hardware aspect provides a very secure footing, because it's in hardware, like we talked about with the Power Ranger hacker example that Michael gave. And then, obviously, it's less complex. You don't have as many moving parts. You have a piece of hardware whose code is less malleable and less complex than the software side of things. And if you put things in hardware, with the advancement of everything happening in technology, it gives you the minimal SWaP you need. It gives you ruggedness; you can ruggedize things in the microelectronics space, and we've been doing that within all the different services for many, many years. So these embedded, very small form factor solutions exist, and they'll add massive value to the warfighter at low cost, while also adding this whole layer of hardware security on top.

________________

Carolyn Ford: What's the biggest obstacle right now to making battlefield communications trusted in real time? I mean, you talked about this already a little bit: the hardware-enforced solutions. I'm seeing a thread through all these predictions, really.

________________

Brian Carter: There are a lot of threads in that one. I mean, one thing is what the different services still do, right? They have different needs, different requirements, governing bodies on top of already large governing bodies within the different services. They don't necessarily agree on what the requirements are, and then there's a whole knot of aspects around them.

________________

Carolyn Ford: Wait, so you're saying people are the biggest obstacle?

________________

Brian Carter: Well, people and bureaucracy are definitely big contributors, and one of the many threads, right, that happens here.
I think some of that, though, will start to ease a little bit with the subject we talked about earlier: 80% solutions, getting things out there quicker, novel or innovative ideas coming to the warfighter. I think some of those things will ease, because one thing that has been consistent through my entire career is that if you get capability out and solve a need for the warfighter, it's going to stay there, right? It's going to stay there and be used, and we're going to find a way as a collective group to get it to the point where it meets all the rules.

________________

Carolyn Ford: Who wants to go first to respond to this one?

________________

Ralph Spada: Well, as Brian was saying, it's really important that we help them with the networks out there, close that kill chain, right, from sensors to weapons and all of that. But I also can't help thinking of the challenges they had in, forgive me, Brian, Battlestar Galactica, right? There are reasons they had to rip out the networking, so that it couldn't be used against them when they were fighting against AI. And that's really our challenge here: somehow striking a balance so that we can enable the networks but not let them be used against us. And to go back to that one common thread: hardware-anchored security, hardware enforcement, that hardware trust is what's going to make that possible.

________________

Carolyn Ford: You're my favorite, Ralph. I mean, Battlestar Galactica. Now I've got to go watch that reboot. Such a good series. Such a good reboot. All right, Scott.

________________

Ralph Spada: It's such a great reboot, yeah.
Scott Orton: Yeah, I think we maybe underappreciate the complexity of modern networking and networks. We went hundreds of years, maybe thousands of years, where, you know, the Olympics came about to find the best person who could run 24 miles and deliver a message. And by the way, a lot of the other events are for soldiers too; believe me, as that person was making that run, arrows might have been coming at them, swords might have been coming at them, right? They had to be able to defend themselves. They had to get away. And then we got more advanced networks. We have phone networks. Those were things where we talked to other humans, lots of human to human, and then our digital world came about. And the interesting aspect of the digital world is that it multiplies at a scale and a speed that simply doesn't resemble the scale at which humans multiply, right? So think about the amount of data and the amount of our own lives that are now digital. Really, just take a 30-second check of how much of your life is digital today. And that is all on these networks, and it's all on the networks in semi-trusted relationships. I trust certain things. I trust Amazon with certain information, but I don't trust Amazon with all information, and you could say that about every party, and with every party it's something different that you're willing to trust. And so the need, in these very complex networks, to determine what you're sharing, with whom, and how, I think, is a massive challenge, and it's just, like Brian said, there's going to be continued investment in it because it has such rich value to users. ________________ Carolyn Ford: Yeah. All right, Michael, you get the last word on this one.
________________ Michael Blake: Yeah, I think it's the concept of micro-segmentation that's been taken into network security. First, we wanted to connect everything, and then we realized maybe that wasn't such a good idea. So, figuring out what actually doesn't need to be connected, like our diodes. If you're pushing information into your backup vault, you don't really need stuff to come back out of there until it's an emergency, so that's the perfect example of where data just needs to go one way. If I have a missile, it just needs to be told to launch and where to launch. It doesn't necessarily need to feed any information back to me. The radar system does; sure, I need to see what the radar system sees. But as far as what the missile actually needs, it's a very small amount of information. And so I think the key to securing those platforms is putting that one-way hardware enforcement in there, which is AI-resistant, crypto-resistant, you know, quantum-resistant. Once you're on the other side of that diode, there's not really any way to get anything back out. And so I think really rethinking how we isolate things, and using more of those types of physical hardware isolations to break apart the things that don't actually need to do anything but listen and be told what to do, takes away a lot of risk in connecting all these things together. ________________ Carolyn Ford: Well, that was a beautiful segue into our final prediction, which is yours, about zero trust. You're seeing maybe a little bit of a roadblock with implementing zero trust. Talk to us about that. Michael Blake: Yeah, so the linchpin of zero trust is a term that's often used: modernization. I need to buy new stuff that can support all of these security features in order to support zero trust. That can get really expensive, really fast, and where do you want to spend your money?
Obviously, you want to spend your money to protect the things that are valuable. So I think another piece of this is going to be identifying the parts of the enterprise, of whatever you're trying to protect, that hold value, and focusing on creating zero trust environments around those valuable things. And it is complex. Look at what they call zero trust in a box that's been advertised by the different defense agencies. It's 35 commercial products in one zero-trust-in-a-box. That's 35 skill sets. Who has 35 skill sets? How do you sustain 35 skill sets with the typical teams that I see in enterprises? It's going to be really challenging to properly implement that many software packages, configure them properly, get them all working, and keep them working. And then as problems arise, the first thing you do is, well, turn the security off and let's fix that thing. And after a while, enough of those security knobs get turned off, and then the security is just an illusion. So I think that human capital cost, the technical skills you're going to have to have to maintain all of that, really has to be taken into consideration. It's not just the modernization or the implementation; it's all the people that are going to need to care for and feed that zero trust environment. So really size it right for the organization: what can they do, and then what should they do? Because those are two very different things. You can do a lot, but maybe there's a lot of it you shouldn't do. Figure out what parts of the network and the devices fall into the can-do versus the should-do, and then make sure you budget toward what you should do, not necessarily what you can do. ________________ Carolyn Ford: Yeah.
So this roadblock is modernization, implementation, but then the talent gap of getting it all there. So you talk about using hardware-enforced cross domain solutions to help with this. Talk a little bit more about how that's going to solve the problem. ________________ Michael Blake: Well, zero trust is really focused on identity management and authorizing people to use resources based on behaviors. If they're accessing a computer in a room that has a badge reader, did they badge in before they logged in at the computer? Are they there during regular business hours, or outside of business hours? But zero trust really doesn't address what happens once the person or the service gets access to the resource. So if I'm really focused on authorization, authentication, and behaviors, and I come in and I get access to something, that's where cyber defense takes over. Zero trust ends once people actually get access to something. I can open an Excel spreadsheet, put some Visual Basic macros in there, push it over into a folder on Scott's drive. He opens it up and clicks a button in there. He thinks it says calculate, but I've changed the way that works. That could do all kinds of things inside the computer with malicious intent. Zero trust was not compromised at any point along the way. The malicious content was injected into a resource that I was authorized to access, and I was authorized to transfer it to Scott. So there's this second stage that has to happen, where cyber comes in to protect from data corruption. That's where your backups come in: having those secure backups out there, using data diodes and pushing those secure backups into those vaults, making sure you have those for recovery, continuity, all those kinds of business practices being thought through.
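To make Michael's one-way backup path concrete: a data diode physically removes the return channel, so anything crossing it has to be re-framed over a protocol that never expects a reply, typically one-way UDP. The sketch below is a hypothetical illustration of that framing, not any particular product's protocol; the chunk size and the 4-byte sequence header are invented for the example.

```python
import socket

# A diode passes traffic one direction only, so protocols that need a
# return channel (TCP handshakes, ACKs) cannot cross it. Transfers are
# re-framed as fire-and-forget datagrams: the sender can never learn
# whether they arrived, which is exactly the property that makes the
# boundary attack-resistant.

CHUNK = 1024  # payload bytes per datagram (assumed value)

def diode_send(payload: bytes, host: str, port: int) -> int:
    """Push payload one-way over UDP; returns the number of datagrams sent.
    Each frame carries a 4-byte big-endian sequence number so the receive
    side can detect gaps on its own. There is no ACK path by design."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    for seq, off in enumerate(range(0, len(payload), CHUNK)):
        frame = seq.to_bytes(4, "big") + payload[off:off + CHUNK]
        sock.sendto(frame, (host, port))
        sent += 1
    sock.close()
    return sent
```

On a real diode the receive side detects gaps from the sequence numbers and requests nothing, because it can't; resilience comes from redundant sends, not retransmission.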
And then, really, having the integrity checkers, the sanitization, not only for things that you have internally, but for your supply chain: all of the vendors that you interface with when you're their customer, that you're receiving documents from or exchanging information with, material you may be ingesting. We might have a really good zero trust implementation, but the second we connect to another enterprise, we don't necessarily know how well their zero trust implementation is actually implemented. So you're kind of assuming the risk of all those entities that you're going to be connecting to, unless you have something sitting on that boundary that's verifying that the data coming in is known good and sanitized. And if there are any trade secrets or espionage concerns, you want to make sure that whatever information you have inside doesn't get leaked out. ________________ Carolyn Ford: The first example you gave was an insider threat, like a malicious actor, right, with the Excel file. And then we've got the challenge of connecting with our coalition partners, even our vendors, sending information. So the cross domain solutions help mitigate insider threat and, or, now I'm confused. ________________ Michael Blake: Yeah, so the threat from the outside world coming in, in most cases, is malware. It's a multi-billion dollar enterprise, and they're making more money and getting better and better. So the biggest threat is these very large, well-funded, technically savvy malware enterprises. Through social engineering, whatever it is, they're going to be sending potential malware packages in content that looks like something you should open. An invoice: they'll figure out one of our vendors, could be one of our suppliers, and create a fake email with an attachment that looks like a billing invoice for something we've recently bought. So the accountant opens it.
And that launches the malware attack inside our finance department. Very bad day for finance, right? So how can you protect against that? If you had an application and a process that took those attachments in the emails, scanned them, sanitized the content, and then brought them in, the chances of that happening go down significantly. The same thing happens with Zoom conferences and Teams conferences. There have been a number of exploits where people have hijacked the video channel and, using buffer overflows, been able to execute code on other people's computers. So there are a number of risk profiles out there. The larger enterprises are definitely worried about these things and have better security in place; it's the smaller companies that just don't have the ability to invest in a 35-product suite to secure their enterprise and put cyber on top of that, right? So yeah, the real big threat, like I was saying, from outside in, is malware. And then from the company side, it's things going out. Once malware is inside, it generally acts just like a malicious actor trying to steal company proprietary information, or somebody trying to get people's bank or credit card numbers, things like that, so you have to make sure that data doesn't get back out of the network. And that's the other part: how do I protect my data, right? How do I make sure that my data stays where it's supposed to be? There is no silver bullet there. There's a huge requirement to enforce that: you have to label all of your data and say what stays inside and what can go outside. And so much of the data that's out there is not labeled at all. So it's going to be another significant amount of effort to label all of that data. There are companies out there that are working with AI to try and help automatically scan networks.
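The attachment scan-and-sanitize step Michael describes can be sketched in miniature. Modern Office files are zip archives, and workbook VBA macros live in one well-known member, xl/vbaProject.bin, so a toy "keep the spreadsheet, drop the macros" filter can rebuild the archive without that part. This is a hypothetical illustration, not a production content disarm and reconstruction engine; a real sanitizer would also rewrite the archive's content-type metadata and inspect far more than one member.

```python
import zipfile

# Known VBA part names to strip; a minimal, assumed list for illustration.
MACRO_PARTS = {"xl/vbaProject.bin"}

def strip_macros(src_path: str, dst_path: str) -> bool:
    """Copy an OOXML workbook archive, dropping known VBA parts.
    Returns True if any macro part was removed."""
    removed = False
    with zipfile.ZipFile(src_path) as zin, \
         zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            if item.filename in MACRO_PARTS:
                removed = True  # suspect part: do not copy across the boundary
                continue
            zout.writestr(item, zin.read(item.filename))
    return removed
```

The design point is the one Michael is making: the filter never asks who sent the file, only what is inside it, which is exactly the question zero trust's identity checks don't cover.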
Once proprietary information has been identified, it'll go out and walk all the different file shares and things like that, look at the information, and say, oh, we found some more of your proprietary information over in this person's folder; let's move that out of there. So I think the tools are getting better, but it is going to take a cultural change and resources, both human and improved artificial intelligence, to really scale up and apply it well. ________________ Carolyn Ford: Scott, thoughts on this one? ________________ Scott Orton: I think what Michael just pointed out is exactly why zero trust is not totally sufficient to protect against the threats we have. We really, at some point, are going to need an intermediary that works on our behalf, that's going to look at the data going back and forth and apply some rules. So I might accept an Excel spreadsheet from Michael, but I might not be willing to accept any macros in that spreadsheet, and so I allow the spreadsheet to pass, but not the macros. Similarly, I might communicate outside my organization, and I want those emails to pass, but I might have a rule set that says no proprietary information, even if I'm the one sending it, ostensibly, shall pass out. And that's really what cross domain and data integrity are all about, and I think why, going forward, they're absolutely critical. Yeah, the identity piece is important, and it's going to be a long-term challenge. One of the things I was going to ask Michael about was labeling. As we've gotten into in these discussions before, labeling and tagging data has been a topic for at least four decades, if not longer. I fear it a little bit, because it says that there is somebody who determines what is a good label and what is not a good label. And that then implies that there's some oracle who decides what is good information and what is bad information.
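Michael's share-walking scan, finding sensitive data wherever it has drifted, can be sketched as a toy classifier. Real DLP tooling uses far richer detection; the label names, file selection, and patterns below are invented for illustration.

```python
import re
from pathlib import Path

# Toy sensitivity patterns; the label names and regexes are assumptions
# made up for this sketch, not any product's rule set.
PATTERNS = {
    "proprietary-marking": re.compile(r"\b(PROPRIETARY|COMPANY CONFIDENTIAL)\b"),
    "card-number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # 16 digits, optional separators
}

def scan_share(root: str) -> dict:
    """Walk a share and return {path: [labels]} for every text file whose
    content matches a sensitivity pattern. Unmatched files are skipped."""
    findings = {}
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        labels = [name for name, rx in PATTERNS.items() if rx.search(text)]
        if labels:
            findings[str(path)] = labels
    return findings
```

A scan like this also illustrates Scott's worry: someone has to decide what counts as a good pattern and a good label, and every egress rule downstream is only as sound as that decision.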
But, you know, still, we're going to have to get to the point of a cross-domain intermediary that can enforce some rules on your behalf, so that you can feel safe in what you're willing to accept, and you can feel safe being on a network, knowing that what leaves your property is only what you wish. ________________ Carolyn Ford: Brian? ________________ Brian Carter: Yeah, I mean, obviously, one of the big things we've talked about here is that everything is infinitely more complex than it was 50 years ago. And obviously, throwing more complex security solutions at it adds a lot of problems and solves a lot of things, right? Looking at insider threat, looking at an intermediary to inspect the data. But the one thing that sometimes gets overlooked, and is also a really good cornerstone of security, is fail-safes. I mean, if you're in an airplane, you have a big handle, right, that you can pull to punch you out of the aircraft. If you're in an elevator, God forbid the cable breaks or something malfunctions, you've got these really simple devices that sit on the rails and stop you before you impact the ground at a very high rate of speed, right? And those things are really simple machines. So when everything goes bad, when things get too complex and all of our best plans come to nothing, you're going to need some simple things, some sort of fail-safe, right? And I think cross domain solutions enable a lot of those things as well. It's just like any layered security practice: there's always the ejection handle or backup that you need, and that's very much enabled by, and kind of a cornerstone of, that total security posture. ________________ Carolyn Ford: Right, right. Cage, not made out of meat. Right. ________________ Brian Carter: Yeah. It's never as exciting, and nobody really wants to talk about it.
But it's extremely important, right? And it really checks a lot of the boxes. Uh oh, something happened? Okay, let's just scrap what's going on now and do a reboot, right? Or a redo, or a reset, or whatever. And that's where those fail-safes really give you some peace of mind. Right. ________________ Carolyn Ford: All right, Ralph. Last word on this one. ________________ Ralph Spada: Yeah, well, a lot of zero trust is expensive and complicated, and unfortunately, a lot of it is only mitigating the damage of what happens when an attacker is inside the network, right, in terms of limiting their access or limiting their lateral movement and things like that. So really, the critical stopgap or fail-safe is these hardware data diodes or cross-domain solutions that can actually block attacks at certain boundaries and really protect what's important on the network. So it's good that there are some simple solutions that can at least start to go down that path of securing what's important. And again, that's the thread: it's anchored in hardware. ________________ Carolyn Ford: Right. All right, we're coming to our tech talk questions, so I'm going to pick two, and I'll go down the line, and you guys all get to answer. These are just fun, gut-reaction questions. What's a Cold War era tech you wish had Owl-grade security baked in? I'm going to start with you, Michael. ________________ Michael Blake: I think there are a lot of older tactical radios out there, still in use, that have been known to be compromised for a very long time. And I think with some bolt-on security, we could provide some tech to help improve the security across those tactical radio links. ________________ Carolyn Ford: Right. Brian? ________________ Brian Carter: Oof. Uh, well, this is a tough one. Anybody who watches this will understand that we did not prepare for these questions. Let me think.
So, back to the thread I've been talking about recently: simple things. People really focus on the complex things and forget about the simple things. Things that report information, temperature sensors, wind sensors, vibration sensors, things like that. The integrity of those simple things, which have been around literally for 80 years in some cases, is really interesting to me, right? And I think getting those things even very simple amounts of security is 100% more than they have now, right? And I think that's going to be much more important as we go forward and we start to basically crowdsource all of our answers and shove them through machines. ________________ Carolyn Ford: Okay. Scott. ________________ Scott Orton: God, did I just say the Internet? That was not Cold War. Well, I guess, no, it's not Cold War era tech. Okay, I'll give it to you. Another would be cars and planes. Planes, we're getting more and more capability at the seat. Those are independent computers from the flight control computers. However, these things are becoming more and more interconnected as people try to save weight. Not a worry on your passenger aircraft, but I don't know what the computing looks like in a personal aircraft that you're going to use to fly between buildings in New York City or something like that. I'd be worried. ________________ Carolyn Ford: Yeah. All right, Ralph. ________________ Ralph: Uh, I'm also going to cheat a little bit. I think they should have replaced the red phones with our XD vision systems. If they'd been able to communicate more directly, maybe things would have been a little better. ________________ Carolyn Ford: Oh, my gosh, Ralph, you're winning. You love Battlestar, you got the red phone in there. Okay, this one will be easier. What cybersecurity buzzword should just go away? ________________ Scott Orton: AI safety.
________________ Carolyn Ford: Okay, Brian? ________________ Brian Carter: Insider threat. ________________ Carolyn Ford: Oh no, that can't go away. You're out. Ralph, you're next. ________________ Ralph: I want the generic term AI to go away, because it doesn't mean what it used to mean anymore. ________________ Carolyn Ford: Well, and nobody knows what you actually mean when you say AI. So I was really glad that Scott level-set it for us when we started talking about it. Okay. Michael. ________________ Michael Blake: Secure VPN. ________________ Carolyn Ford: Oh, okay. All right, well, thanks, you guys. This has been a really fun way to spend an hour. And thanks, listeners, for joining us. Please smash that like button and share this episode. Tech Transforms is produced by Show and Tell. So until next time, stay curious and keep imagining the future.