Customer experience leaders need to take a more active role in their organization’s AI initiatives — before it’s too late.
Successful AI deployments are “absolutely critical” to delivering a great customer experience, according to Isabelle Zdatny, head of thought leadership with the Qualtrics XM Institute. CX leaders are best positioned to understand how AI can enhance the customer experience because they own customer relationships, control touch points, and are responsible for satisfaction, loyalty and other key business outcomes.
But CX decision-makers often have “been along for the ride rather than key drivers of it,” Zdatny said.
Effective governance is crucial for the successful deployment of generative AI. However, many businesses are rolling out AI initiatives without it, damaging their customer experience, loyalty and brand reputation in the process, experts say.
Pushing forward with AI projects before they are ready is risky. Just this week, the Qualtrics XM Institute found that AI customer support, such as AI-powered chatbots, “did not provide any benefit at all” for 1 in 5 consumers.
"If a company rolls out an AI feature that doesn't work or, even worse, does something bad that you don't want it to do, you immediately lose trust,” said Susannah Shattuck, head of product at Credo AI, an AI governance company. “Customers are very unlikely to go back and do anything with that company's AI, even if they make it better.”
Effective AI governance is a must: customer relationships depend on it.
“CX is on the front lines of consumer trust,” Shattuck said. “And a huge part of that trust is going to come from transparency about where and how AI is being used.”
What AI governance really means
AI governance refers to the set of processes, policies and tools that ensure AI systems are developed, deployed, used and managed in ways that maximize benefits and minimize harm.
“It’s critical for organizations to have proper AI governance in place to mitigate the well-documented risks and ethical concerns associated with AI, like bias, violations of privacy and other unintended consequences,” said Eric Guarro, SVP of digital transformation at Ibex, a CX outsourcer. “Its absence can be troublesome and costly, so it’s important to build robust frameworks to quickly identify and rectify instances when AI goes off the rails.”
Effective governance enables a diverse range of stakeholders, including data, engineering, legal and business teams, to align their organization’s AI systems with business, legal and ethical requirements throughout the machine learning lifecycle.
"AI governance isn't just about preventing bad things from happening,” Shattuck said. “It's also about making sure that you're defining the most valuable problems to solve with AI and then making sure that the AI systems you're building can actually solve those problems."
That’s important because AI project failure is rampant. About 2 in 5 companies abandon most of their AI initiatives, according to S&P Global Market Intelligence. Nearly half see zero return on investment, and the average organization scrapped 46% of its AI proofs of concept before they reached production.
Pilots often don’t “scale into production because of a lack of centralized governance,” Zdatny said. “We call it pilot purgatory.”
To get the most out of AI initiatives, CX leaders must ensure that stakeholders are engaged in deployment, keep humans in the loop, test continuously to prevent bias, and comply with privacy rules, Guarro said.
Getting a seat at the table
CX leaders need to understand AI’s capabilities and limitations to ensure such initiatives meet the expectations of their staff and customers.
“That level of technical understanding is no longer a nice-to-have for non-technical roles,” Shattuck said.
CX decision-makers should research key terms and applications and ask questions to ensure they have the necessary information to make informed decisions.
Knowing the basics helps CX leaders define acceptable AI behaviors, service levels and quality thresholds, as well as ensure brand consistency across channels.
It can also help them better articulate the value that CX brings to AI governance and, once CX gets a seat at the table, achieve better results. For instance, a CX leader might explain that while agentic AI can boost efficiency, it may also lead to lower customer loyalty and lifetime value if customers' need for emotional connection goes unmet, Zdatny said.
"The trick is framing it in language and outcomes" executives across departments care about, Zdatny said.
Companies should integrate AI governance into their broader CX governance, strategy and operations.
“When AI represents your brand, all stakeholders should be engaged because the development requires input and approval across departments,” Guarro said.
Experts encourage setting up a Center of Excellence, a centralized unit of CX professionals focused on analytics and insights, experience design and change management, and charging it with AI governance.
“Distributed responsibility is the fastest way for nothing to get done,” Shattuck said. “Organizations need to resource AI governance with dedicated resources if they really want this to be successful, and they need to make sure that those resources have the power to make decisions.”
However, many organizations lack a centralized CX function, making representation essential wherever key decisions about AI initiatives are made, such as steering committees or working groups. Executive sponsors and ambassadors can also help champion CX during the planning and implementation of AI initiatives, according to Zdatny.
Businesses must also understand that AI governance is an ongoing process, given the rapidly changing nature of the technology and its applications, as well as the ever-evolving risks and challenges, Shattuck said.
"One of the biggest pitfalls that we see is when a company tries to do AI governance as a one-and-done,” Shattuck said. "Making sure that your customers understand when they're interacting with AI, why they're interacting with AI and how that AI works — that is the most important piece for building external trust in AI systems.”