[00:00:00] Well, hey there folks. Welcome to today's episode of CX Without the BS. We've got a major problem brewing in the world of customer service, and most folks are either ignoring it, pretending not to see it, or too scared to admit it's even happening. It's this whole push toward AI assistants in the contact center. You've seen the demos, you've heard the buzz. Vendors are out here saying this tech is gonna revolutionize support, it's gonna save time, it's gonna cut costs, it's gonna reduce headcount, it's gonna do everything. You name it, right? But on the front lines, the story looks very different.

A new study just dropped, covered by TechSpot, where researchers teamed up with a big utility company in China to study how AI was actually being used by their customer service reps. And what they found was not the shiny, magical AI that you see in the sales decks. One rep straight up said, and I [00:01:00] quote, "the AI assistant isn't that smart in reality." And that quote wasn't the exception to the rule; it was the overarching theme. Because while the AI was supposed to help, it ended up doing nothing more than creating more problems, more errors, and more cleanup, and that added up to more stress.

So today we're gonna dig into this. I have five big takeaways from that article, all rooted in this study. We're gonna break them down, covering what the AI got wrong, why it made things harder for agents, how it failed to read customer emotion, and the hidden burnout this stuff is causing. And oh, by the way, we're also gonna cover why AI and humans, not AI instead of humans, is really the only setup that actually works. So let's dig in.

First: AI made work harder, not easier. This is where the problems started, in the transcriptions, which you'd think would be one of the easiest things it could do, because one of the main [00:02:00] jobs an AI assistant is supposed to handle is listening to the call, turning that call into text, and then spitting out a summary so the reps don't have to. And that sounds awesome, right? Except it didn't work. The AI struggled with accents. It couldn't keep up with people who talk fast (guilty). Background noise threw it off. And anytime a customer rattled off a string of numbers, like a phone number or an account number, the AI absolutely butchered it. One rep said the system gave phone numbers, quote, "in bits and pieces," so they had to go in and manually retype everything anyway. That's not help, that's rework. The whole point of this kind of tool is to save time and reduce effort, but instead of removing tasks, it just moved them around and created follow-up work. I think we call that rearranging the deck chairs on the Titanic.

And look, it's not like the AI was just getting numbers wrong. It [00:03:00] was also confusing homophones, words like "knew" and "new," or the number "two" and the word "too," as in "too many." Things any human would understand instantly based on context, of course, but the AI got incredibly wrong, and got wrong constantly.
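To make that "bits and pieces" problem concrete, here's a minimal sketch in Python of the kind of digit-stitching cleanup a team could bolt onto a transcript before it ever reaches a rep. To be clear, this is my own illustration, not anything from the study or from a specific vendor tool: the helper name, the filler-word list, and the 11-digit default (typical of a Chinese mobile number, given where the study took place) are all assumptions for the example.

```python
import re

# Hypothetical illustration; not from the study or any vendor product.
# Spoken-digit words an ASR system might emit instead of numerals.
DIGIT_WORDS = {
    "zero": "0", "oh": "0", "one": "1", "two": "2", "three": "3",
    "four": "4", "five": "5", "six": "6", "seven": "7",
    "eight": "8", "nine": "9",
}
FILLERS = {"uh", "um"}  # disfluencies that often land mid-number

def stitch_digits(transcript: str, expected_len: int = 11) -> list[dict]:
    """Collapse digit tokens the ASR delivered 'in bits and pieces'
    into contiguous number strings, flagging incomplete ones so a
    human agent confirms them instead of retyping from scratch."""
    tokens = re.findall(r"[a-z]+|\d+", transcript.lower())
    runs, current = [], []
    for tok in tokens:
        if tok.isdigit():
            current.append(tok)          # numeral fragment, e.g. "55"
        elif tok in DIGIT_WORDS:
            current.append(DIGIT_WORDS[tok])
        elif tok in FILLERS and current:
            continue                     # ignore "uh" inside a number run
        elif current:                    # an ordinary word ends the run
            runs.append("".join(current))
            current = []
    if current:
        runs.append("".join(current))
    return [{"number": r, "needs_review": len(r) != expected_len}
            for r in runs]

# A fragmented readout like the reps described:
print(stitch_digits("my number is one three eight uh 55 two 21 nine 8"))
# [{'number': '1385522198', 'needs_review': True}]  <- 10 digits, agent confirms
```

Notice the needs_review flag: even a simple guardrail like this assumes a human gets the final say whenever the number doesn't look complete, which previews exactly where this episode is headed.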
And if you're thinking, okay, but at least it could handle tone and sentiment, right? Well, no. That's where it gets worse, because part two: AI couldn't even read the room. So let's talk about AI's so-called emotion recognition, the sentiment analysis, because that's a feature that gets talked about a lot. Supposedly, it can tell when a customer is upset. It listens to tone of voice, volume, and pacing, and then it's supposed to give a little readout, like a tag, saying things like "customer frustrated" or "customer neutral." But in practice, that didn't work either. [00:04:00] Agents in the study said the system constantly flagged loud or passionate speakers as angry, even when they weren't. So if someone is naturally a loud talker (guilty), or they're in a noisy space, or heck, they're just energized (also guilty), the AI freaks out. It flags them as hostile. And suddenly the reps are sitting there thinking, are we sure this person is even upset? What happened was, eventually, the agents just started ignoring the emotion tags altogether. Why? Because they weren't accurate, and when you can't trust what your tool is telling you, it's not a tool, it's a distraction.

Here's the core problem: real people express emotion in different ways. I know, surprising. AI doesn't understand nuance. It doesn't understand culture. It doesn't understand tone the way humans do. It reads signals and guesses, and right now it's guessing [00:05:00] wrong a lot more than it's guessing right.
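And just to show how shallow that kind of guess can be, here's a toy sketch of a threshold-style emotion tagger. Again, this is my own illustration, not the actual model from the study; the feature names and cutoff values are invented for the example. The point is that loudness and pacing are proxies, not emotions.

```python
from dataclasses import dataclass

# Toy illustration; not the system the study's agents were using.
@dataclass
class CallFeatures:
    rms_db: float         # average loudness of the caller's audio
    words_per_min: float  # speaking pace

def naive_emotion_tag(f: CallFeatures) -> str:
    """Acoustics-only tagger: no words, no context, no culture."""
    if f.rms_db > -15 and f.words_per_min > 170:
        return "customer frustrated"
    return "customer neutral"

# Two very different callers, identical acoustic footprint:
angry_caller = CallFeatures(rms_db=-12, words_per_min=190)  # genuinely upset
excited_fan  = CallFeatures(rms_db=-12, words_per_min=190)  # just a loud, energized talker

print(naive_emotion_tag(angry_caller))  # customer frustrated
print(naive_emotion_tag(excited_fan))   # customer frustrated  <- false alarm
```

Same inputs, same tag, different humans. A real production model is fancier than two thresholds, but if its features still boil down to "how loud, how fast," it fails the same way, and that's exactly why the agents stopped trusting the tags.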
Here's another one: the summaries from the AI weren't helpful either. So maybe, just maybe, you're thinking, okay, it struggles with transcripts and tone, but at least it can generate a pretty decent call summary. That saves time, right? Nope. Again, this was another huge complaint from the reps in this study. The AI-generated summaries were, quote, repetitive, missing key customer details, full of filler, and often didn't even capture the main point of the call. So what did agents do? They had to go in, read through the entire thing, edit the summary, and clean it up before moving on to the next task. Again, that's not time savings, that's overhead. We keep hearing about how these tools are supposed to remove friction, but what's actually happening is that we're adding mental clutter. We're asking reps to second-guess [00:06:00] the very system that's supposed to help them. And over time, that adds up. It wears people down, and it leads us to something that nobody wants to talk about but we absolutely need to: it's creating invisible burnout.

Here's the part that matters most, not just to the agents, but to the whole operation. When you give your team tools that constantly need correction, that feel more like a chore than a solution, what you're really doing is adding cognitive load. That's a fancy way of saying you're making people think harder about things that shouldn't take up much brain power. And when that happens over and over and over, it leads to burnout. Not the I-need-a-vacation kind. The quiet, creeping kind. The kind that shows up as disengagement, quiet quitting, low energy. The kind where agents stop caring about the call, stop checking the notes, stop trying to [00:07:00] connect, because mentally they're just drained. They've been pulled in five different directions: juggling screens, cleaning up AI messes, fixing summaries, ignoring emotion tags, all while trying to stay calm, composed, and helpful. That is what we call a recipe for disaster. And by disaster, I mean attrition. It's a fast track to turnover, and it's 100% preventable, if we stop throwing half-baked tech at our teams and calling it innovation, of course.

And here's part five. I know we've talked about what's been going wrong with AI, but let's quickly talk about what actually works, because let's be real, I'm not just here to bash AI. I love AI. I use AI daily. But I'll call out what's broken so we can build something better. And what works consistently is a hybrid model: AI plus humans, together. Let the [00:08:00] AI do what it's good at, like pulling up customer records, suggesting helpful resources, generating a first draft of a summary, routing the call to the right person faster, monitoring keywords for compliance. But let the human be in charge. Let them review, let them override, let them lead the interaction. Let them use this magical thing called judgment. When you build your systems this way, where the tech supports the human, not replaces them, that's when you start seeing real results. You get happier agents, you get fewer mistakes, you get faster handle times, and most importantly, you get better customer experiences, because the customer knows when they're talking to a real person. They can feel when someone's actually listening, and they remember how it made them feel. Sorry, AI, but you can't replicate that. Not now. Not yet. [00:09:00] Maybe not ever.

So here's the bottom line. AI can be useful, but it's not magic. It's not plug-and-play, and it's definitely not a replacement for real people. If you're rolling this stuff out just because a vendor told you it's the future, you need to stop and ask: Is this tool actually helping my team? Is it making their day easier or harder? Can they trust it? And if it disappeared tomorrow, would anyone miss it? Because if the answer is no, it's not a solution, it's a liability. The future of customer experience, the future of CX, isn't no humans. It's better-supported humans. That's the shift we've gotta make. Not toward more tech, but toward smarter tech that actually helps the people doing the work. And if that's not something your system is doing right now, [00:10:00] it might be time to rethink it.

Alright, that was today's episode of CX Without the BS. If you got some value from today's episode, do me a favor and give it a share. With that being said, it's Brian Nichols signing off on CX Without the BS. We'll see you next week.