Customer experience leaders need to understand the data privacy and cybersecurity risks associated with AI-powered technologies, including chatbots, to accomplish their business goals and avoid irreparable harm, experts say.
"If you're deploying a technology that's even remotely autonomous, and you're putting it in front of customers who can say anything they want to it, you're potentially creating a risk," said Richard Lichtenstein, chief data officer for private equity and a member of the private equity and advanced analytics practices at Bain & Company.
But that doesn’t mean businesses should avoid AI entirely.
"It just means you have to be very careful about it," Lichtenstein said.
AI-powered technologies, including chatbots, are more vulnerable to cyberthreats than traditional software. But many organizations are adopting AI-powered technology without adequate oversight and safeguards, potentially damaging customer experience, trust, loyalty and brand reputation as well as exposing themselves to financial and legal risk.
"If something doesn't go the way the customer wants, the customer is not going to be mad at Zendesk or Qualtrics,” EY Americas AI Strategy Leader John Dubois said. “They're going to be mad at you."
Why AI is riskier
Unlike traditional software, generative AI systems won't produce identical results every time. While the outcome goals remain consistent, the specific wording, analysis and communication style will vary.
"The benefit of AI is that it can take in a wide variety of imprecise inputs, make sense of them and take action,” Lichtenstein said. “You can talk to an AI chatbot like it's a person, and it will understand you and be able to do something.”
But this non-deterministic nature creates new security challenges that don't exist with rule-based systems — the long-established approach to cybersecurity.
Generative AI systems are vulnerable to prompt injection attacks in which hackers camouflage malicious inputs as legitimate prompts to exploit them for nefarious purposes, such as leaking sensitive customer data.
This happens because AI-powered chatbots create a “lethal triangle” whereby untrusted external information, such as a customer query, has access to internal company information and the ability to communicate or take action externally, Lichtenstein said.
This makes them more challenging to defend against because they can penetrate multiple layers of security. With non-AI systems, companies can “get away with lax controls” because they have a defined syntax that makes it easier to identify specific types of attacks, said Assaf Keren, chief security officer at Qualtrics. But with AI prompt injection, which doesn't have a defined syntax, organizations must do everything right.
“You have less slack the moment you bring in an AI tool,” Keren said. “You can’t skip steps.”
The risks “multiply” when a chatbot moves from responding to acting, Dubois said. Multiagent systems that connect models from different vendors to solve a problem are especially vulnerable to data leaks and other issues.
"In the multiagent scenario, the vendor may be making [application programming interface] calls to models they don't own and that they don't understand the trustworthiness of,” Dubois said. “So, who is ultimately accountable if things go wrong?"
Making matters worse, bad actors are constantly inventing new ways to attack AI systems.
“It's really the Wild West,” Lichtenstein said.
Taking action to protect customer data
The vast majority of data privacy and cybersecurity measures, such as data governance and access controls, required for generative AI are the same as those for traditional software.
However, AI systems require extensive testing because they can respond in unpredictable ways. Moreover, every company uses a different combination of technologies and vendors, creating unique cybersecurity challenges for each organization.
While IT teams are most responsible for cybersecurity, there are steps CX leaders can take to help protect their customers and organization when considering AI products and services.
CX leaders should ask if a vendor follows industry standards, such as the NIST AI Risk Management Framework or ISO 42001, or participates in FedRAMP, the U.S. federal government’s compliance program for cloud-based products.
"NIST is basically the equivalent of saying there's a standard that we need to follow. And then the ISO 42001 is like, ‘Now your books get audited every quarter,’" Dubois said.
They should also ask essential privacy and security questions:
- Where did you source the training data?
- Are you using our customers' data for model training or reinforcement learning?
- If using customer data to model train or for reinforcement learning, is it only from our instance or globally shared?
- Is customer data anonymized and aggregated?
- What safeguards prevent prompt injection attacks that could expose customer identities?
- What's the frequency and transparency of your security audits?
It’s also important to understand the operational safeguards the vendor has established and the service level agreement:
- What guardrails and feedback loops exist?
- What are the escalation paths when AI fails?
- What's the service level agreement for resolving bias, mistreatment or inaccuracy issues?
- Can the AI interaction remain on-brand and empathetic?
- Who's accountable when multiagent systems fail?
Ultimately, however, each company is responsible for the data and technology it uses. So, organizations must take a “trust but verify” approach to AI data privacy and cybersecurity, Lichtenstein said.
The biggest hurdle, however, might be getting CX professionals to start thinking about cyber risk.
“It’s not top of mind, but it probably should be,” Lichtenstein said.