Tech Transforms
Host: Carolyn Ford
Guest: Connor Morley, Head of Security Research at Glasswall

Carolyn Ford: Welcome to Tech Transforms. I'm Carolyn Ford, and today I get to talk to Connor Morley, Head of Security Research at Glasswall. I love talking to experts like Connor, because what he does sounds like something straight out of a cyberpunk thriller—QR code malware, steganography, PDFs acting like something they’re not. He’s on the front lines of next-gen cyber defense, helping organizations rethink what a “safe file” actually is. What you do really does sound like something out of a spy story—malware embedded in images, PDFs, and links. This stuff’s been around for a while, but has it changed much? Or is it like social engineering—so effective that it never really goes away?

Connor Morley: File-based security and file-based threats have been around for as long as computer viruses. If you look at threat reports from 2024, human-interaction-based breaches still account for between 40 and 60 percent of total incidents, and roughly half of those are file-based attacks. That means a huge number of breaches start when a human is convinced—through social engineering, coercion, or insider threat—to interact with a malicious file. That initial click gives attackers the foothold they need to get inside a network. We’re also seeing a resurgence of old techniques like steganography, combined with newer ones powered by AI. AI can now create extremely convincing phishing messages, complete with local slang and perfect grammar. Five years ago, AI wasn’t this capable—but now it’s producing spear-phishing-level precision at scale. It’s always going to be effective because humans can always be manipulated.

Carolyn Ford: AI must have blown the doors off this in a few ways. People are now using AI agents to open their email—zero-click attacks are already possible. But AI can also embed malware into things, right? At scale?

Connor Morley: Yes. There was a talk at 44Con last year about AI email handlers being vulnerable to prompt injection and sending out automated malicious replies. AI can also be used to find flaws in rendering systems or memory handling, but perhaps the bigger risk is how it’s used to make phishing more convincing. Gone are the days of broken English and obvious red flags. These attacks now look perfect, often personalized, and that makes them much harder for users to spot.

Carolyn Ford: I still think I can spot them—but eventually they’ll be able to fake mistakes too. Tell me about something you’ve worked on recently that really surprised you in how it came through and how you mitigated it.

Connor Morley: Recently, we saw a resurgence of steganography being used as a second-stage delivery mechanism inside JPEG images. An initial Word document infection would reach out and pull down an image. That image had hidden code embedded in the pixel data—completely invisible to the user. When the image was processed, that hidden code was extracted and executed, deploying the actual malware payload. This method is designed to bypass traditional detection systems, because it hides malicious content inside something seemingly benign. Detecting steganography is incredibly difficult. Simple techniques, like least-significant-bit encoding, are easy to spot, but advanced methods—like the spread spectrum steganography used by the U.S. military—can mimic natural visual noise, making detection statistically impossible. At Glasswall, we counter this with a Zero Trust file-filtering approach and Content Disarm and Reconstruction (CDR). Instead of trying to detect threats, we assume the file is malicious and rebuild it from scratch, removing anything unnecessary. We make imperceptible pixel-level changes that completely destroy hidden data streams while preserving the visual integrity of the image. The human eye can’t tell the difference, but digitally, the malicious payload is gone.
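[A rough illustration of the pixel-level idea Connor describes: this minimal sketch, which assumes the Pillow and NumPy libraries and hypothetical file names, re-randomizes the least-significant bit of every color channel. The change is invisible to the eye but destroys a simple LSB payload. It is a toy example rather than Glasswall’s CDR engine, and it would not defeat the spread spectrum schemes Connor mentions.]

```python
# Minimal sketch: destroy a least-significant-bit (LSB) steganography payload
# by re-randomizing the lowest bit of every color channel. Pixel values change
# by at most 1, which the eye cannot see, but any data stream hidden in that
# bit plane is wrecked. Toy example only; a real CDR engine rebuilds the whole
# file from scratch rather than just perturbing pixels.
import numpy as np
from PIL import Image

def scrub_lsb(in_path: str, out_path: str) -> None:
    """Rewrite an image so a simple LSB payload cannot survive."""
    pixels = np.array(Image.open(in_path).convert("RGB"), dtype=np.uint8)

    # Clear the lowest bit of every channel, then fill it with random bits.
    noise = np.random.randint(0, 2, size=pixels.shape, dtype=np.uint8)
    scrubbed = (pixels & 0xFE) | noise

    # Saved as PNG so the output is lossless and easy to inspect.
    Image.fromarray(scrubbed, mode="RGB").save(out_path)

if __name__ == "__main__":
    scrub_lsb("incoming.jpg", "sanitized.png")  # hypothetical file names
```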
Carolyn Ford: So you use CDR to rip it apart and rebuild it—and anything not needed for the human eye to see is discarded.

Connor Morley: Exactly. CDR breaks a file into its components, validates them against vendor specifications, and reassembles only the safe parts. Users can also customize their risk tolerance—deciding whether to allow things like hyperlinks or metadata. For images, we subtly alter the pixel data so that steganographic payloads are destroyed but the image looks unchanged. If it looks like a duck, it still looks like a duck—but digitally, the hidden threat is gone.

Carolyn Ford: So if I only open files from trusted sources, am I safe?

Connor Morley: Not necessarily. Zero Trust means you don’t assume safety based on the source. Even files from your CEO or a trusted vendor get treated as potentially malicious and filtered the same way. Every single file is sanitized before it reaches the user.

Carolyn Ford: Where does CDR actually live? Is it part of a firewall?

Connor Morley: It usually sits at the domain boundary—between the public internet and your internal systems—and can be layered between internal domains depending on your security posture. You can adjust what to allow based on risk tolerance. As with all security, you balance security against functionality—the higher the protection, the less functionality you may allow. But in practice, CDR operates in milliseconds, so users don’t notice any delay.

Carolyn Ford: Let’s talk about polyglot files. The name sounds like something out of Harry Potter—they’re shapeshifters, right?

Connor Morley: That’s a perfect description. The term polyglot means “many languages.” In cybersecurity, a polyglot file behaves differently depending on what program opens it. It could appear as a Word document to Microsoft Office, as a PDF to Adobe Reader, or as a ZIP archive if you try to unzip it. It’s all legitimate code inside one file, so it’s not technically malformed—but it can easily be abused. Attackers use this to slip malicious content past filters by disguising it as a harmless format.

Carolyn Ford: But CDR would rip that apart too, right?

Connor Morley: Yes. We use mature file-type detection systems to identify the “base” file type, then remove all areas where other file types could be injected. We’re not detecting the polyglot—we’re destroying its ability to exist as one.

Carolyn Ford: Are polyglots always bad?

Connor Morley: Not always. There are legitimate uses—for example, research archives that store test data inside a single file—but those are niche. In secure environments, you typically don’t want polyglots at all. If your domain policy disallows them, CDR removes those embedded file types automatically.
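[To make the polyglot idea concrete, here is a hedged Python sketch that flags a file as a polyglot candidate when the magic bytes of more than one format appear inside it. The signature table and file name are illustrative assumptions; real file-type identification, like the “base” type detection Connor describes, is far more rigorous than a substring scan.]

```python
# Rough, illustrative sketch: flag a file as a polyglot candidate when the
# magic bytes of more than one format appear inside it. A substring scan like
# this is crude and prone to false positives; production file-type
# identification is far more involved.
SIGNATURES = {
    "PDF": b"%PDF-",             # PDF header
    "ZIP/OOXML": b"PK\x03\x04",  # ZIP local-file header (also .docx/.xlsx)
    "JPEG": b"\xff\xd8\xff",     # JPEG start-of-image marker
    "GIF": b"GIF8",              # GIF87a / GIF89a
    "PHP": b"<?php",             # embedded script, common in upload abuse
}

def polyglot_candidates(path: str) -> list[str]:
    """Return every format whose signature appears anywhere in the file."""
    with open(path, "rb") as handle:
        data = handle.read()
    return [name for name, magic in SIGNATURES.items() if magic in data]

if __name__ == "__main__":
    formats = polyglot_candidates("upload.jpg")  # hypothetical file name
    if len(formats) > 1:
        print("possible polyglot:", formats)
```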
Carolyn Ford: What are some of the most creative things you’ve seen attackers do with these techniques?

Connor Morley: One fascinating case involved attackers using polyglots in web portals. Many websites only allow certain file types—say, JPEGs for images. Attackers upload a file that’s both a JPEG and a PHP script. When the web server processes it, it might execute the script instead of displaying the image, opening the door to remote code execution or cross-site scripting. I even saw a case where someone embedded a master boot record inside a WebP image. It looked like a picture—but it could boot an entire operating system if run as an executable.

Carolyn Ford: That’s wild. So LinkedIn images are probably safe because they sanitize uploads, but smaller or custom sites could be vulnerable.

Connor Morley: Exactly. Large platforms usually sanitize metadata and file headers, but smaller, self-managed sites—a WordPress install, for example—often don’t. That’s where these threats thrive.

Carolyn Ford: Okay, let’s talk QR codes. I love them as a marketer, but they have a bad reputation in security. Can I ever be sure a QR code is safe?

Connor Morley: Short answer: not really. Until you inspect where a QR code points, you can’t be sure—and most users skip that step. Attackers exploit this trust through “quishing,” or QR phishing—replacing legitimate codes with malicious ones that redirect users to fake websites or downloads. It’s not that QR codes themselves contain malware; they’re just a doorway to malicious domains. What makes this tricky is that people usually scan them with personal devices, creating a new, less-monitored attack surface outside corporate protections.

Carolyn Ford: So my phone could be compromised by scanning a malicious code?

Connor Morley: Potentially, yes. For mobile devices, attacks usually fall into two categories: tricking users into downloading a malicious app or redirecting them to credential-harvesting sites. Android devices are more exposed because they allow third-party app installs, but iPhones aren’t immune—especially if users are tricked into granting permissions. The biggest risk is that people inherently trust QR codes. Attackers count on that.

Carolyn Ford: Do you ever use them?

Connor Morley: No—I avoid them completely. I’d rather manually type the URL. It takes a few extra seconds but eliminates the risk.

Carolyn Ford: That’s too much work, Connor! I love the convenience.

Connor Morley: And that’s exactly what attackers rely on—convenience over caution.

Carolyn Ford: So can’t you use CDR to clean QR codes?

Connor Morley: To a degree. We can scan images or documents for QR patterns and delete them—turning the pixels black so the code is destroyed. However, unlike steganography, QR codes are robust: their built-in error correction lets them scan successfully even when parts are damaged. To truly neutralize one, we have to remove it entirely.
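[A minimal sketch of the “turning the pixels black” step Connor mentions, assuming the opencv-python package and hypothetical file names: OpenCV’s built-in QRCodeDetector locates a code and that region is painted black. Because QR error correction tolerates partial damage, the whole code has to be removed. This illustrates the idea and is not Glasswall’s implementation.]

```python
# Minimal sketch: locate a QR code in an image and black it out entirely.
# QR codes tolerate partial damage, so the whole region has to go.
# Assumes the opencv-python package; illustrative, not a production filter.
import cv2
import numpy as np

def remove_qr_code(in_path: str, out_path: str) -> bool:
    """Black out any QR code OpenCV can locate; return True if one was found."""
    image = cv2.imread(in_path)
    detector = cv2.QRCodeDetector()
    _text, points, _ = detector.detectAndDecode(image)

    found = points is not None and len(points) > 0
    if found:
        # "points" holds the four corner coordinates of the detected code.
        corners = points.astype(np.int32).reshape(-1, 1, 2)
        cv2.fillPoly(image, [corners], color=(0, 0, 0))

    cv2.imwrite(out_path, image)
    return found

if __name__ == "__main__":
    remove_qr_code("flyer.png", "flyer_clean.png")  # hypothetical file names
```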
Carolyn Ford: That’s disappointing! I was hoping you’d tell me there was a “CDR-approved” QR code fix.

Connor Morley: Unfortunately not. The only reliable safeguard is to inspect URLs before clicking and avoid free or unknown QR code generators. Attackers can easily swap or hijack them.

Carolyn Ford: You’ve officially ruined QR codes for me.

Connor Morley: Glad I could help!

Carolyn Ford: So let’s end on a more positive note. What are some measurable outcomes you’ve seen from organizations that use Zero Trust file filtering or CDR?

Connor Morley: The biggest impact is risk reduction. CDR removes entire categories of threats—macros in Office documents, JavaScript in PDFs, embedded executables—before they ever reach users. It also reduces “SOC noise.” Analysts aren’t flooded with meaningless alerts from blocked phishing attempts or harmless detections. They can focus on true incidents that matter. Ultimately, CDR makes it so hard for attackers to use standard playbooks that they either move on or expose themselves trying something more advanced—something your defensive teams can detect.

Carolyn Ford: That’s huge. I wouldn’t have thought of noise reduction as a security advantage.

Connor Morley: It really is. It improves efficiency, focus, and overall resilience.

Carolyn Ford: All right, time to land this plane. Rapid-fire question: You’re dropped into a Mission: Impossible scenario and can only bring one cybersecurity tool from 2025. What do you bring—and why?

Connor Morley: If it’s software, I’d take an agentic AI—to handle data collection and repetitive filtering. For hardware, I’d bring a Flipper Zero—a portable device with Bluetooth, RF, and USB interfaces that can emulate keyboards, jam signals, or analyze networks. Small but powerful.

Carolyn Ford: You can take both. I’d just take you—you clearly know how to use them! Honestly, after this conversation, I feel like I should keep CDR in my purse at all times.

Connor Morley: It solves a lot of problems!

Carolyn Ford: In a world where deepfakes can impersonate anyone, who’s the last person you’d want a hacker pretending to be?

Connor Morley: Myself. Deepfakes are already showing up in court cases where fake evidence is being presented, and juries can’t tell what’s real. Beyond criminal use, it’s devastating to reputations—personal or corporate. Once trust is broken, it’s nearly impossible to rebuild.

Carolyn Ford: That takes identity theft to a whole new level.

Connor Morley: Absolutely. The level of realism is frightening. Individuals don’t have PR or legal teams monitoring for that. By the time you realize it’s happened, it could be months—or years—too late.

Carolyn Ford: For leaders looking to stay ahead of threats, what’s one step they can take toward proactive defense?

Connor Morley: Innovation. Always aim for proactive, not reactive, security. Zero Trust is a strong paradigm because it eliminates threats—even those we haven’t seen before—and limits damage through compartmentalization. Attackers evolve constantly, so defenders must evolve faster. Never stand still.

Carolyn Ford: That’s a perfect note to end on. Where can listeners connect with you to learn more about your work?

Connor Morley: I’m on LinkedIn—feel free to reach out. I share research papers and publications there regularly.

Carolyn Ford: We’ll link Connor’s profile in the show notes. Thanks again for joining us, and thank you to our listeners for tuning in. Please share this episode, and don’t forget to like and subscribe. Tech Transforms is produced by Show & Tell. Until next time—stay curious and keep imagining the future.