One of the promises of generative AI customer service was that it would improve the sometimes painstaking process of getting a traditional interactive voice response system to understand your problem.
Modern chatbots were supposed to make automated communication more natural, effective and organic — but the reality is very different, according to Brian Cantor, managing director of digital at Customer Management Practice.
“Unfortunately, data shows that many of today's AI experiences have had the reverse effect,” Cantor said during a CCW Digital webinar Monday. “They've resulted in experiences that are actually more difficult and demanding, and actually making conversations less clear and journeys far less streamlined.”
Fewer than 2 in 5 consumers have confidence in AI self-service as a support tool, according to CCW Digital data. The No. 1 frustration customers have with AI support is difficulty explaining their issue.
This is a solvable problem, according to Cantor. The key is for businesses to make AI-powered conversations more accessible by improving the systems’ ability to parse customer queries and respond helpfully.
“It's not doing it to show off how advanced the AI model is,” Cantor said. “It's doing it because when we have that level of understanding, we can work more collaboratively, we can be more empathetic.”
Don’t get caught up in the details
It’s no secret that AI doesn’t actually experience empathy with customers, according to Oliver Shoulson, lead dialogue designer at PolyAI.
As a result, building a good AI experience isn’t about making it sound emotional, according to Shoulson. It’s not necessary to make an AI sound sad or respond with apologies after a customer tells it about their problem.
The actual key is to design conversations that make the AI come across as a trustworthy source of support and information, according to Shoulson. He likened conversation design to copywriting, where it’s not so much the individual words but the way they flow that matters.
“I think so much of what empathy actually is about is interactions over the course of the conversation, not any particular utterance,” Shoulson said. “Sort of like how call center workers have different personalities and different ways that they phrase things.”
The focus for teams should be ensuring the AI offers useful, not overly informative, feedback, according to Shoulson.
“I think a lot about the AI being cooperative,” Shoulson said. “In a pragmatic linguistics sense, we talk about the responses being relevant, accurate, and not over or under informative.”
Don’t forget personalization
AI doesn’t need to sound emotional, but that doesn’t mean automated conversations shouldn’t take customers’ feelings into account.
“They don't need you to ask them about their days and their pets and their families and all that stuff,” Shoulson said. “But that can easily become an excuse to not have any personalization, just be so transactional and so issue-oriented.”
A chatbot that comes across as understanding the customer will incorporate context from their earlier interactions, according to Shoulson. From a design perspective, this means teams should think about the customer service journey as a whole as they design their AI’s flow.
“It's about how everything comes together to ensure that what we're serving up for that customer is what they need to see, and when the customer asks a question, we're taking all that context into account when we deliver that answer,” he said.
Potential questions to think about include what makes sense as the first question to ask and how much detail the follow-up should include, according to Shoulson.
For instance, is it enough to just ask for a customer’s account number, or should the AI also explain why the number is needed?
“I think agentic AI and implementing LLMs in AI is not just about giving them this giant repository of information and saying ‘Have at it,’” Shoulson said. “You have to exert a lot of control in structuring these interactions in ways that make sense to people and feel intuitive to people.”
Involve the end user
The gold standard for designing AI-powered customer service with the customer in mind is to engage in quality assurance testing with the actual end users. CX teams only have so much insight into how customers use their tools.
“Ultimately, you are going to get yourself locked into your imagination of what the end user is going to say, or what the kind of edge cases you're going to encounter are, and those will never be right — or all of them,” Shoulson said.
Teams can start with a quality assurance period before they launch their system or, if they are confident in their agility, quickly respond to issues as they arise during the early days of deployment.
Either way, the key is to not get bogged down in functionality at the expense of the experience, according to Shoulson. If customers avoid using an AI because it’s too frustrating, it becomes very hard to get the data necessary to improve the implementation.
The best way to encourage new users to try an AI tool is to assure them that they can jump to a live agent at any time, according to Cantor. This can break down some stigma against using AI and make it more accessible for people who aren’t used to the technology.
“There's some cheekiness to that,” Cantor said. “Yeah, I'll use your chatbot, but I'm just going to press zero, right, or go for the agent right away. But I think it also speaks to this idea that when we show that you're not locked in, we're pretty confident it's going to work.”