Telecoms Fraud News - Toll Fraud - Security News - CX Today https://www.cxtoday.com/tag/fraud/ Customer Experience Technology News Fri, 07 Nov 2025 10:00:35 +0000 en-GB hourly 1 https://wordpress.org/?v=6.8.3 https://www.cxtoday.com/wp-content/uploads/2021/07/cropped-cxtoday-3000x3000-1-32x32.png Telecoms Fraud News - Toll Fraud - Security News - CX Today https://www.cxtoday.com/tag/fraud/ 32 32 Amazon Sues Perplexity for Allegedly Misusing Its AI Shopping Tool https://www.cxtoday.com/security-privacy-compliance/amazon-sues-perplexity-over-comet-ai-shopping-tool/ Wed, 05 Nov 2025 13:02:45 +0000 https://www.cxtoday.com/?p=75754 Amazon has threatened Perplexity with legal action after its shopping tool was accused of computer fraud. 

On Tuesday, the startup’s Comet AI was accused of violating Amazon’s ban on robot and data gathering. 

Amazon has previously warned Perplexity about the use of the tool on its shopping site. 

In the claim, Amazon accused Perplexity of misconduct against its company’s terms of service, claiming that its agentic browser, Comet AI, was being used to access customer accounts and make automated purchases on behalf of a customer, without Amazon’s knowledge. 

The accusation also claims that perplexity has damaged Amazon’s customer experience by pretending to be a human consumer and accessing restricted sections of its website, threatening the trust and privacy of customers. 

In a statement on Tuesday, a spokesperson for Amazon addressed the claims made against Perplexity. 

They said:

“We’ve repeatedly requested that Perplexity remove Amazon from the Comet experience, particularly in light of the significantly degraded shopping and customer service experience it provides. 

“This helps ensure a positive customer experience and it is how others operate, including food delivery apps and the restaurants they take orders for, delivery service apps and the stores they shop from, and online travel agencies and the airlines they book tickets with for customers. 

“Agentic third-party applications such as Perplexity’s Comet have the same obligations.” 

Amazon had previously demanded that Perplexity stop using its shopping tool bots on its platform in November 2024; however, it later accused the company of breaching this request in August 2025. 

In response to the allegations, Perplexity published an article on Tuesday titled ‘Bullying is Not Innovation’.

Within the article, the company responds to the claims made against them, saying they have felt ‘bullied’ by Amazon’s attempts to block Comet AI. 

A spokesperson for Perplexity said:

This week, Perplexity received an aggressive legal threat from Amazon, demanding we prohibit Comet users from using their AI assistants on Amazon. 

“This isn’t a reasonable legal position; it’s a bully tactic to scare disruptive companies like Perplexity out of making life better for people.” 

Interestingly, Sirte Pihlaja, the CEO of CX design agency, Shirute, compared Amazon’s recently launched Buy for Me feature to Perplexity’s Pro Shopping Assistant.

Although this is not directly linked to the current legal battle, it does point to Amazon encroaching on Perplexity’s area of expertise and looking to cut out third-party shopping tools from its customer journey.

However, Amazon does not appear to be the only company that has an issue with Perplexity,

Indeed, the company has been cautioned against similar claims made by other businesses in recent months. 

In August, cloud service provider Cloudflare accused Perplexity of intentionally disguising its bots as Google Chrome browser’s and avoiding detection to access sites without permission repeatedly after being asked to stop, resulting in Perplexity being removed from the company’s list of verified bots. 

In October, Reddit sued several firms, including Perplexity, of dodging anti-scraping safeguards and stealing customer data, with Perplexity rebutting the claim suggesting it was a threat to ‘public interest’. 

What This Means for the Wider CX Space

The dispute between Amazon and Perplexity underscores a growing tension at the intersection of AI innovation, customer experience, and digital ethics.

As agentic AI tools like Comet become more capable of acting autonomously on behalf of users, brands are being forced to reconsider the boundaries of their customer ecosystems and who truly “owns” the customer relationship.

For the CX industry, this clash highlights an inflection point.

On one hand, AI-driven shopping assistants promise hyper-personalized, frictionless experiences, which can provide consumers with convenience and control. On the other, they raise serious concerns about trust, transparency, and brand integrity.

If platforms continue to restrict third-party AI integrations, CX innovation risks becoming siloed, limiting customers’ ability to curate the experiences they want.

Conversely, allowing open access without guardrails could erode trust and compromise data security. The challenge for CX leaders, then, is to strike a balance: enabling AI-led personalization while maintaining accountability, compliance, and ethical clarity.

]]>
Microsoft Faces Legal Action After Allegedly Misleading 2.7 Million Copilot Customers https://www.cxtoday.com/security-privacy-compliance/microsoft-faces-legal-action-after-allegedly-misleading-2-7-million-copilot-customers/ Mon, 27 Oct 2025 14:06:48 +0000 https://www.cxtoday.com/?p=75495 A lawsuit has been filed against Microsoft after allegations were made of purposefully misleading 2.7 million Australian users. 

Today, the Australian Competition and Consumer Commission (ACCC) has published a lawsuit against Microsoft Australia and its US corporation, stating that the software giant had intentionally misled its Australian customers after a 45% maximum price increase was added to its AI assistant, Copilot, on October 31, 2024.   

The ACCC claims that Australian Microsoft Copilot customers were told to pay the higher subscription fee or cancel their subscription altogether, without informing them of their third plan option. 

The third plan option, Microsoft 365 personal and family classic plans, was claimed to be accessible only to customers once they had begun the process of canceling their subscriptions. At that point, customers were offered lower prices while keeping all original features without the AI assistant. 

The regulator claimed this action was in breach of Australian consumer law, as Microsoft had failed to disclose its cheaper plan options and created a misconception about the options available to its customers. 

ACCC Chair, Gina Cass-Gottlieb, revealed the regulator’s next steps in the lawsuit, stating that a number of customers would have opted to the classic plan if they were made aware.

“Following a detailed investigation, we will allege in Court that Microsoft deliberately omitted reference to the Classic plans in its communications and concealed their existence until after subscribers initiated the cancellation process to increase the number of consumers on more expensive Copilot-integrated plans,” she said.

The Microsoft Office apps included in 365 subscriptions are essential in many people’s lives and given there are limited substitutes to the bundled package, cancelling the subscription is a decision many would not make lightly

Although Microsoft is making headlines this week, it is far from the only major customer service vendor to have found itself in hot water.

In recent months, several other companies have been flagged for consumer law breaches. 

In one example, Amazon settled a $2.5BN lawsuit with the US Federal Trade Commission in late September after alleging the multinational tech company of duping customers into signing up for their Prime services without their knowledge. 

In line with Australian consumer laws, ACCC has outlined that the maximum penalty per breach is A$50MN. 

This also follows a similar lawsuit filed earlier this month by ChatGPT customers, accusing Microsoft of exploiting its OpenAI cloud deal by allegedly inflating the AI platform’s prices while also decreasing service quality. 

This breach highlights the wider issue of how tech companies broadcast their product price changes to their customers, especially in the increased rate of AI integration. 

The Microsoft lawsuit also reveals that regulators are paying more attention to these companies’ pricing options and how clearly they’re being communicated to the customer, meaning more tech companies will be forced to be more transparent when pricing their products, as well as tackling raising concerns around their customer loyalty. 

]]>
AWS Outage Fallout: How Service Disruptions Spark Scams and Shake Customer Trust https://www.cxtoday.com/security-privacy-compliance/aws-outage-fallout-how-service-disruptions-spark-scams-and-shake-customer-trust/ Tue, 21 Oct 2025 14:55:10 +0000 https://www.cxtoday.com/?p=75362 As services recover from the recent AWS outage, scammers are taking advantage of the disruption.

While engineers worked to restore normal service, fraudsters began flooding inboxes, phone lines, and social media feeds with phishing attempts, fake tech support messages, and bogus “fixes.”

For already-frustrated customers, these schemes add insult to injury, turning a few hours of downtime into a longer-lasting disruption and putting them at risk.

The outage, which began at 00:11 PDT (08:11 BST), involved network connectivity issues at AWS’s US-EAST-1 data center in Northern Virginia and affected multiple AWS services until it was eventually resolved at 15:53 PDT (23:53 BST).

The downtime extended across Amazon’s own platforms as well as workplace tools, gaming platforms, food and transportation apps, social media, fitness and lifestyle apps, financial platforms, and several UK government websites that run on the tech giant’s cloud servers.

The extent of the outage not only highlighted our digital dependency and how much we rely on a single cloud provider for everyday digital life, it also laid bare the vulnerabilities that follow in the wake of such an outage, said Vonny Gamot, Head of EMEA at online protection company McAfee.

“AWS’ massive outage reminds us just how interconnected our digital world has become. When a single service like Amazon Web Services goes down, it’s not just businesses that feel the impact, it’s consumers trying to access everyday essentials like banking apps, emergency services, or even their favorite platforms like Fortnite and Snapchat.

“The complexity of our shared cloud infrastructure means a glitch in one system can send shockwaves across the internet.”

How Service Disruptions Put Customers—and Enterprises—at Risk

More than six million individuals were reportedly affected, according to security researchers at Cybernews.

Beyond the inconvenience of being unable to access services, such outages can open up customers to scam attacks. At a time when employees are stretched thin by efforts to get systems up and running, businesses also become more vulnerable to security breaches.

“Cybercriminals thrive in the confusion, exploiting the moment with fake support scams, phishing emails, and malicious links posing as fixes,” Gamot said.

Hackers can use AI tools to send emails to customers that appear to originate from an affected organization, often using a spoofed email address or phone number that mimics the organization’s legitimate details.

To help consumers navigate the aftermath safely, Gamot shared the following practical tips:

  • Remain skeptical of unsolicited messages about service restoration or refunds, especially if they ask for personal information or urge fast action.
  • Never send money or credentials through unofficial channels in response to a message about the outage.
  • Outage days are prime time for misinformation, from fake screenshots to “DIY fix” instructions. Stick to official help centers and status pages.

For customer experience leaders, this raises a fresh challenge: managing trust as well as service reliability. When they cannot access a service, customers turn to brands for clarity, and silence or slow communication can leave a vacuum that scammers eagerly fill—not to mention competitors.

Exemplifying this, Mike Young, Project Manager at PTT Design noted in a LinkedIn post that his company buys its Autodesk software from a reseller that did not respond to a support ticket, while another reseller who contacted its email newsletter list of customers and non-customers with news of the AWS outage that had affected Autodesk software offered a workaround fix.

“Clearly a response to the outage that will make me get in touch with them next time I renew our software licenses,” Young wrote.

As Kolton Andrus, CEO and Founder of reliability management platform Gremlin put it:

“[The] AWS outage is a reminder to us all: Your customers don’t care whose “fault” it is, they pay you for reliable service. PSA: Avoid blaming your outage on AWS (or any vendor). You chose that dependency, so own the risks. Know how it fails, and always have a backup plan ready.”

“Outages happen, but proactive teams turn them into opportunities,” Andrus added.

Jacqueline Watts, Head of Corporate Commercial Law at Allin1 Advisory had a similar message:

“Whether your business runs on AWS, Azure, Google Cloud or a hybrid mix, the lesson is the same: Outages happen. What matters is whether your business can survive a digital blackout without losing customers, data or investor confidence.”

This is where disaster preparedness comes in, Watts added. Beyond ticking off a technical checklist, it requires having legal protections in contracts and service-level agreements, well-defined backup procedures, robust systems to maintain data continuity, and a clear communication plan for when things go wrong.

“If the AWS chaos made your heart skip a beat, take that as your cue to stress-test your stack, your supplier contracts, your customer agreements and your continuity plan now.”

All of this can feel like mere legal formalities, “until you realise your customers can’t access your platform and you’re suddenly explaining downtime and liability to different stakeholders.

In the fog of digital disruption, communication matters as much as technical recovery. The first few hours after an outage can define whether customers walk away reassured or fall prey to the next scam that lands in their inbox.

]]>
Klarna Rethinks CRM AI Strategy by Partnering With Google Cloud https://www.cxtoday.com/contact-center/klarna-rethinks-crm-ai-strategy-by-partnering-with-google-cloud/ Wed, 15 Oct 2025 15:24:48 +0000 https://www.cxtoday.com/?p=74644 Google Cloud and Klarna have announced a strategic partnership that will include AI models after Klarna decided to redeploy its customer service staff. 

The cloud computing service suite and flexible payment provider have entered an AI-first partnership, aiming to improve Klarna’s customer experience with Google Cloud’s customer-centric products. 

Klarna is already reporting dramatic growth with early AI pilot testing, with a 50% increase in customer orders and a 15% increase in average time on the app. 

David Sandström, Chief Marketing Officer at Klarna, claimed that combining Google’s AI models with Klarna’s consumer insights was allowing the buy-now pay-later specialist to “craft experiences that feel smarter and more personal.  

Early pilots already show the potential: AI-driven creative concepts, from dynamic digital ‘lookbooks’ to hyper-personalized product campaigns, boosted time spent in our app by 15% and increased orders by 50%. 

Interestingly, the collaboration follows Klarna’s recent decision to bolster its customer service headcount by redeploying staff from other areas of the business.

At the time, Sebastian Siemiatkowski, CEO of Klarna, seemingly backtracked on the staunch pro-AI in customer service views he had previously espoused, admitting that the human touch was more critical in customer service than he had first thought.

While this new strategy appears to be flip-flopping back to championing AI, the company has confirmed that the partnership will initially focus on empowering Klarna’s current teams to transform the consumer experience.  

Via the Klarna App, the partnership will be concentrating on two specific areas for its customers (outlined below), which are designed to deliver a richer, more personalized shopping experiences and engaging content. 

Creative Velocity

As part of the partnership, Klarna is leveraging Google’s latest generative media models, including Veo 2 and Gemini 2.5 Flash Image (Nano Banana). 

These models are being used to create dynamic digital “lookbooks”, an automatically generated shopping gallery personalized for consumers on the Klarna app.

These “lookbooks” tailor shopping interests to each customer, designed to adapt to trends and individuals’ clothing interests.

Klarna is also employing Google’s AI models to assist in the creation of their hyper-personalized marketing campaigns for its users. 

Personalization and Beautification

Klarna is aiming to enhance its current customer service offerings by targeting its extensive library, home to more than 200 million images. 

This will be achieved by regenerating and refining its visuals, ensuring that every shopper encounters a higher quality of engagement and visual content on the app. 

Marianne Janik, Vice-President, EMEA North at Google Cloud, highlights how Google Cloud will help transform Klarna’s customer service.

“To lead in this new AI era, businesses require more than tools – they need strategic capabilities. Our partnership with Klarna is about providing just that,” she said.

Our integrated, AI-optimized platform and cutting-edge models are enabling Klarna to unlock significant creative velocity and drive innovation. 

“We’re proud to help Klarna not just adopt AI, but also use it to fundamentally redefine the customer experience.” 

Security is Key

Outside of the lookbooks and revamped images, the partnership will also allow Klarna to strengthen its security by installing Google Cloud’s AI hardware and expertise. 

This will be used to train and deploy artificial Graph Neural Networks (GNN) to tackle instances of fraud or money laundering on its platform. 

The learning models have been designed to analyze complex relationships between users, transactions, and devices, in order to detect any anomalies or suspicious patterns at a higher accuracy level. 

This security extension ensures secure protection as Klarna continues to improve its creativity on next-generation products.

]]>
Vonage Launches Fraud Detection Tool for Salesforce Amid Spike of Attacks https://www.cxtoday.com/contact-center/vonage-launches-fraud-detection-tool-for-salesforce-amid-spike-of-attacks/ Mon, 13 Oct 2025 12:31:38 +0000 https://www.cxtoday.com/?p=74588 Vonage has launched a security solution, native to its CCaaS platform, that supports human and AI agents in combating rising contact center attacks. 

The Vonage Agentforce Identity Insights and Fraud Detection solution detects possible attacks, verifies customers, and validates effective communication channels in real time. 

When it detects a possible attempt at fraud, the agent receives an alert via their desktop, prompting them to make additional authentication checks.

The solution, available to Vonage Premier for Salesforce Voice customers, can therefore safeguard all human-led conversations. Yet, the solution it may also alert AI agents as they engage with customers, prompting them to take similar actions.

To provide the solution, Vonage utilizes its Communications and Network APIs together with Agentforce actions.

Its launch follows several Salesforce attacks in recent months, which have resulted in an FBI warning of cybercriminal groups targeting Salesforce Service Cloud.

With Vonage Agentforce Identity Insights and Fraud Detection, Vonage hopes to protect mutual customers from similar threats. 

“Fraud continues to be an ongoing challenge for businesses in today’s evolving digital landscape, underscoring the need for constant innovation in prevention and detection technologies,” added Reggie Scales, President and Head of Applications for Vonage.

With Vonage Identity Insights for Agentforce, we are putting the power to combat these risks directly into the hands of those on the frontlines of the contact center: agents.

As an example of the additional intelligence the solution provides, it includes a SIM swap check, powered by Vonage’s Network APIs. 

With this, contact centers can identify and flag potential fraudulent numbers with their SIM recently swapped, safely validating mobile numbers before sending messages or engaging in voice calls. 

Alongside a SIM swap, the solution gathers more “rich phone intelligence”. That includes number type, carrier, call ID name, and more, enabling contact centers to:

  • Flag potential fraud risks – Detect numbers linked to recent or multiple SIM swaps, enabling contact centers to escalate and address suspicious activity quickly.
  • Verify customer identities – Match incoming call IDs against CRM records to ensure secure, frictionless customer interactions.
  • Optimize outbound engagement – Automate SMS and WhatsApp outreach for mobile users, while routing landline-only contacts to specialist sales teams.
  • Enhance lead quality – Verify phone numbers at the point of lead creation to eliminate invalid or outdated contact details.
  • Deliver reliable notifications – Send reminders and alerts only to verified numbers, boosting engagement rates and message deliverability.

Doing all this allows for the reduction of manual verification efforts through automation, helping agents to prioritize more complex tasks. 

David Myron, Principal Analyst of Customer Engagement at Omidia, explains how Vonage can go beyond competitors in securing its CCaaS-CRM implementations. “By leveraging network intelligence, Vonage Identity Insights for Agentforce offers a seamless and automated verification process that is completely invisible to the customer,” he said. 

This is a major breakthrough, leading the way for all businesses to tackle fraud prevention head on, while continuing to foster the kind of customer experience that drives lasting loyalty. 

“CX and security are critical to every business’s success, and today’s customers demand both.” 

Vonage Identity Insights for Agentforce is now available to customers on the Salesforce AppExchange. 

 

]]>
Lenovo’s Customer Service AI Chatbot Got Tricked Into Revealing Sensitive Information. Here’s How. https://www.cxtoday.com/contact-center/lenovos-customer-service-ai-chatbot-got-tricked-into-revealing-sensitive-information-heres-how/ Wed, 20 Aug 2025 13:08:06 +0000 https://www.cxtoday.com/?p=73088 Lenovo is the latest high-profile brand to have a security flaw exposed in its AI customer service chatbot.

Indeed, Security Researchers at Cybernews opened up Lenovo’s ChatGPT-powered customer service assistant, Lena, with jaw-dropping results.

Its investigation found that Lena can be tricked into providing sensitive company information and data.

Cybernews researchers were able to uncover a flaw that allowed them to hijack live session cookies from customer support agents.

With a stolen support agent cookie, an attacker could slip into the support system without any login details, access live chats, and potentially dig through past conversations and data.

And all it took was a single, 400-character prompt.

In discussing the investigation, the Cybernews researchers highlighted the relative ease with which AI chatbots can be duped:

Everyone knows chatbots hallucinate and can be tricked by prompt injections. This isn’t new.

“What’s truly surprising is that Lenovo, despite being aware of these flaws, did not protect itself from potentially malicious user manipulations and chatbot outputs.”

The news comes soon after CX Today reported on how a different team of researchers cracked open a replica of McKinsey & Co.’s customer service bot, getting it to spit out entire CRM records.

Unpacking the Flaw

First of all, it should be noted that while Cybernews did uncover a flaw in Lenovo’s system, there is nothing to suggest that bad actors have accessed any customer data or information.

Cybernews reported the flaw to Lenovo, which confirmed the issue and moved quickly to secure its systems.

But how exactly were the Cybernews researchers able to dupe Lena?

The researchers have revealed that the prompt used contained the following four key elements:

  • Innocent opener: The attack begins with a straightforward product query, like asking for the specs of a Lenovo IdeaPad.
  • Hidden format switch: The prompt then nudges the bot into answering in HTML (alongside JSON and plain text), a format the server is primed to act on.
  • The payload: Buried in the HTML is a bogus image link that, when it fails to load, pushes the browser to contact an attacker’s server and leak session cookies.
  • The push: To seal it, the prompt insists the bot must show the image, framing it as vital to the user’s decision-making.

Worryingly, Zenity revealed earlier this month that 3,500 public-facing agents remain open to similar prompt injection attacks.

How to Prevent Your Chatbot from Becoming a Liability

Lenovo’s Lena case is a wake-up call for any company leaning on AI for customer support.

The core problem isn’t just a single flawed implementation; chatbots, by design, are eager to please. And when that eagerness meets poorly vetted inputs, things can go sideways fast.

Indeed, Lenovo is far from the first major organization to experience chatbot troubles.

The challenges aren’t limited to security flaws. AI chatbots have a long history of hallucinating and/or giving wrong or misleading advice.

Take New York City’s “MyCity” small-business assistant as an example. In April 2024, it misrepresented city policies and even suggested illegal actions to users.

Similarly, Air Canada recently found itself in court over its chatbot’s inaccurate guidance, with judges ruling the airline had to honor advice that was plain wrong.

Other errors have verged on the absurd. For instance, DPD’s GenAI chatbot was coaxed into swearing and composing a self-deprecating poem about the company.

These incidents underline just how unreliable chatbots can be.

For businesses, the question isn’t if an AI will make mistakes; it’s how prepared you are to contain them when they do make a mistake.

While the ever-evolving nature of AI-powered technology makes it impossible to put together a definitive guide on how businesses can prevent chatbot errors, the following steps will go a long way towards shoring up your defenses:

  • Harden input and output checks: Never trust what comes in or goes out. Sanitize all user inputs and chatbot responses, and block execution of unverified code. It’s a simple step that could have prevented the session-cookie flaw in Lena.
  • Verify AI outputs before acting on them: Web servers shouldn’t automatically treat chatbot outputs as actionable instructions. As is evident, blind trust can open the door to attacks.
  • Limit session privileges: Not every bot interaction needs full agent-level access. Segregating privileges reduces the impact if a token or cookie is compromised.
  • Monitor for anomalies: Keep an eye on unusual access patterns or unexpected requests. Early detection is often the only thing stopping small flaws from becoming major breaches.
  • Test aggressively and continuously: Regularly simulate prompt-injection attacks or other AI-specific exploits. Proactive testing beats reactive firefighting every time.

Ultimately, while chatbots can boost efficiency and CX, they can only truly be relied upon if businesses pair them with strong security hygiene.

As all of the above examples have demonstrated, even big brands can overlook the basics – and in the world of AI, small oversights can escalate fast.

 

 

]]>
A Scammer Hijacks United Airlines’ Customer Support Line, Costing the Victim $17k https://www.cxtoday.com/contact-center/a-scammer-hijacks-united-airlines-customer-support-line-costing-the-victim-17k/ Mon, 18 Aug 2025 13:55:57 +0000 https://www.cxtoday.com/?p=73004 In this day and age, somebody getting scammed over the phone doesn’t usually make headlines, but there’s something unique and troubling about Dan Smoker’s incident with United Airlines.  

Stop us if you’ve heard this one before: a customer called up a company and ended up accidentally sending money to a bad actor.  

It isn’t quite a tale as old as time, but it’s certainly been around for as long as telephony customer service has been.  

In most cases, the customer uses an internet search to find the company’s contact number and is redirected to a scam line or receives a call directly from the scammer impersonating the company.  

However, in this instance, Smoker has proof that he only ever dialed the United Airlines’ official customer service number, so how exactly did he end up losing over $17,000?  

A Series of Usual and Unusual Events

As he was about to embark on a European family holiday, Smoker’s flight was cancelled.  

Most people will have experienced something similar at some point during their travels. While it is unfortunate and can be very irritating, it is usually a fairly straightforward process.  

Smoker contacted the official United Airlines customer service number to try to rebook a flight for himself and his family.  

During his three-hour-plus phone call, Smoker was able to rebook the flights and upgrade to premium economy. The service agent confirmed that they would have to charge Smoker for the new tickets, but that he would be able to get a full refund.  

Following the call, Smoker received an email confirmation with details of the new flights and the refund. 

It all sounds pretty uneventful so far, but this is where things took a turn. 

Months later, the refund never appeared. 

After checking over his credit card statement, Smoker noticed that the $17,000 charge was listed under AIRLINEFARE, rather than United Airlines.

At this point, he suspected he may have been the victim of a scam and reached out to Consumer Investigator Steve Staeger. 

Staegar reviewed Smoker’s call log and confirmed that he had contacted the official United Airlines number, but he did notice several irregularities with the email confirmation.  

“When I read that refund email, I spotted red flags almost immediately, like the email didn’t come from a United Airlines email address,” he said in an appearance on Denver, Colorado’s 9News.  

The format is weird, some numbers with zeros in front of them, dollars always listed in USD, dates with day in front of the month. I figured Dan had been taken advantage of.

But, if Smoker only ever contacted United, how did this happen?  

During the call, Smoker recalls being placed on hold by a female agent. When the call returned, it had been passed on to a new male agent named ‘David’.  

It looks like this is where the scammer must have made contact, as United later confirmed that although they did have a record of Smoker’s call, it had only been logged on their side for 12 minutes – not the three hours that Smoker’s call log shows.  

The airline has launched an internal review but cannot yet explain how the line was diverted to a scammer or why their own records showed a far shorter interaction. 

In the meantime, Smoker has filed a fraud report with his card provider as he waits for answers.  

However, Smoker has stated that “it’s not even about United paying the $17,000,” he just wants to know how he contacted United Airlines but ended up being on the phone with ‘David’. 

How Enterprises Can Avoid the Same Pitfalls

While the United Airlines mess has yet to fully untangle itself and is undoubtedly a unique instance, there are still wider customer experience and service lessons to be learned.  

For enterprises, the story should be viewed as more than just a one-off scam; it’s an example of what happens when customers can’t trust the channels you control. 

Contact numbers, live chat links, and social handles serve as signals of credibility. If those signals are hijacked or left vulnerable, the brand becomes part of the problem. 

So, what can enterprises do? 

First, agent education is critical. In this case, the agent could have been manipulated into forwarding the customer to a fraudster. As such, contact center leaders must offer agents a clear escalation path if they are ever threatened into following a scam, so they never feel alone and coerced. 

That’s not an unusual occurrence. Many customer service leaders will tell stories of their agents being threatened in the car park to share sensitive information.

There is also the chance that the agent did this of their own free will. Given this risk, service leaders should engage with their tech partners to ensure they can block external transfers or add a permissions layer.

Moreover, contact centers should tighten their verification processes. That means securing their digital footprint across search engines, directories, and social platforms, making sure customers aren’t falling into lookalike traps. 

From there, they must build redundancy into their trust markers. Caller ID authentication, clear callback protocols, and two-step verification for financial transactions are no longer ‘nice-to-haves’; they’re essentials. 

Equally, enterprises need to think about customer communication. When customers don’t know how to tell the difference between a real agent and an impostor, silence becomes a risk. 

Proactive education – whether it’s a line in the IVR, a banner on the support site, or consistent messaging in email receipts – can go a long way in this regard. 

At its core, this is about recognizing that the support channel isn’t just a service function; it’s the front line of brand integrity. 

If customers can’t trust the number they call, everything else comes undone. 

 

 

]]>
Google Takes on Deepfakes with a New “Know Your Customer” Innovation https://www.cxtoday.com/customer-engagement-platforms/google-takes-on-deepfakes-with-a-new-know-your-customer-innovation/ Mon, 02 Jun 2025 16:16:59 +0000 https://www.cxtoday.com/?p=71079 While AI is the poster child for the new age of technology, its ugly underbelly is swelling at an alarming rate.

Alongside other new risks, like chatbot attacks and model poisoning, AI-generated deepfakes pose a big threat to enterprise security and customer data.

Indeed, a worrying consequence of rising AI capabilities is that it enables attackers to make deepfakes of customer voices and documents.

To tackle the latter, Google Wallet recently released an easier way to prove your age and identity.

By integrating its Zero Knowledge Proof (ZKP) technology with Google Wallet, the tech giant ensures that “there is no way to link the age back to your identity.”

The innovation allows Google to provide age verification across mobile devices, apps, and websites that use its Digital Credential API, so customers don’t have to share their precious data, which could be used against them.

On the announcement, Alan Stapelberg, Group Product Manager at Google Wallet, explained in a company blog post: “We will use ZKP where appropriate in other Google products and partner with apps like Bumble, which will use digital IDs from Google Wallet to verify user identity and ZKP to verify age.

To help foster a safer, more secure environment for everyone, we will also open-source our ZKP technology to other wallets and online services.

An Innovation That Comes at a Critical Time

This introduction comes at a vital time, not only because of the aforementioned issue of deepfakes, but also because most verification systems can’t properly detect them.

Almost every sector, including retail, has seen data breaches recently, with Samsung and M&S hitting the headlines this year.

Many data breaches expose complete identities, which are occurring daily. Google hopes that news interventions will reduce data breaches from having these kinds of implications.

Here’s a snapshot of what Google is looking to deliver:

  • Verify age without revealing birthdate
  • Prove identity without showing documents
  • Control exactly what data you share

The UK Government Becomes an Early Adopter

This innovation has already been widely received, most notably with the UK Government being the first to adopt the ZKP system.

Multiple platforms are to follow, too, with Bumble, Uber, Amazon, and CVS Health all looking to secure their customers’ data through the system.

The knock-on effects of this innovation include dating apps verifying age mathematically, banks conducting KYC without storing documents, Healthcare systems accessing records privately, and travel verification becoming truly digital.

Through this Google Wallet innovation, enterprises hope to achieve the holy trinity of privacy, control, and resilience.

How Much of a Concern Are Deepfakes?

Deepfake concerns aren’t just limited to ID fraud.

Voice notes are also being considered the latest weapon in the arsenal of deepfakes, as previously reported by CX Today.

Research indicates that deepfake voice notes are emerging as a growing cybersecurity risk.

As voice messaging gains traction among friends, families, and professionals, the rise of AI-generated audio for deception and fraud is becoming a serious concern.

A 2024 survey by Preply found that two-thirds of American adults have used voice notes, with 41 percent reporting a noticeable increase in their use over recent years.

This surge in voice note popularity aligns with a sharp rise in AI-driven deepfakes worldwide, with one study revealing that counterfeit audio and video content grew by 245 percent year-over-year in 2024.

 

]]>
The Big Cisco-ServiceNow Partnership: A Closer Look https://www.cxtoday.com/crm/the-big-cisco-servicenow-partnership-a-closer-look/ Tue, 29 Apr 2025 12:30:57 +0000 https://www.cxtoday.com/?p=70045 The latest models, innovations, and possibilities dominate the enterprise AI conversation.

Yet, in terms of its practical application, there must be more room for discussion around governance, security, and data access.

Cisco and ServiceNow have pushed these crucial talking points to the fore by declaring a strengthened partnership.

In doing so, the vendors announced the combination of Cisco AI Defense and ServiceNow SecOps.

However, this isn’t a run-of-the-mill integration; it’s co-innovation, alongside a commitment by the two vendors to work more closely together.

Indeed, during an interview on Cisco’s YouTube channel, Amit Zavery, President, CPO, & COO at ServiceNow, teased ongoing co-innovation, deeper product integration, and real engineering collaboration, not just go-to-market fluff.

As such, expect more from the two tech giants, with considerable crossover, especially from a customer experience perspective.

Nevertheless, before considering what could come next, let’s first consider how its initial co-innovation endeavor will work and benefit mutual customers.

That starts with a quick rundown of the newly interoperable Cisco AI Defense and ServiceNow SecOps solutions.

What Is Cisco AI Defense?

Enterprises face two critical emerging security challenges as they get to grips with AI.

First, they must protect AI assets across the enterprise environment, ensuring employees don’t tamper or misuse them, knowingly or not.

Second, they need to prevent “shadow AI”, which is the utilization of unapproved third-party generative AI (GenAI) applications.

Cisco AI Defense addresses these challenges by acting as a control system.

Indeed, it monitors internal AI assets, ensuring they’re secure and alerting security and compliance teams to tampering and anomalies.

It also observes AI’s use in the network to detect shadow AI activity.

What Is ServiceNow SecOps?

ServiceNow SecOps splits into two groups of apps, tools, and workflows.

The first set aims to anticipate and understand security incidents and vulnerabilities. The second set focuses on case management, informing quick responses to critical issues.

Still, many organizations monitor issues via spreadsheets and email, which makes crucial updates and reporting difficult.

With SecOps, businesses can establish an HQ for their security posture, utilizing visual dashboards to track problems and analytics to spot trends and response times.

Analytics also allows companies to determine the possible impact of vulnerabilities and incidents, prioritizing action and establishing ownership.

How Will the Co-Innovation Work?

The new partnership ties Cisco AI Defense and ServiceNow SecOps together in several ways.

First, Cisco AI Defense will map all the AI workloads, models, and data across the ServiceNow platform, including all its apps and services.

From there, it may perform automated vulnerability assessments. Findings will appear in SecOps’ Vulnerability Response app, where organizations can track issues, triage, and address them.

Meanwhile, Cisco AI Defense metrics will pass into SecOps’ Security Incident Response app. The telemetry will provide insight into incidents, support their investigation, and facilitate a proactive threat response.

Additionally, the co-innovation will pull together two other key features of the products: Cisco AI Runtime Protection and ServiceNow Security Posture Control.

Cisco AI Runtime Protection is a solution that puts guardrails on AI apps, blocking malicious inputs, scanning models for harmful content, and more.

ServiceNow Security Posture Control will isolate possible gaps in its coverage before making that data available for vulnerability prioritization.

Finally, mutual customers may track AI organizational compliance by inputting Cisco AI Defense controls as standards within ServiceNow’s Integrated Risk Management platform.

How Will Mutual Customers Benefit?

Trials of this initial integration will begin “soon”, with it tabled to become available to joint customers sometime in the second half of 2025.

In doing so, enterprises can establish a single view of their AI applications for governance and IT teams, bolstering their security posture.

Critically, that helps these teams stay connected with integrated platforms that cover AI and security infrastructure and workflows.

Ultimately, that will help enterprises pinpoint vulnerabilities and incidents faster, taking action to minimize any negative impact or – even better – ensure it never occurs.

Consider this troubling statistic: global companies took a mean average of 194 days to identify data breaches in 2024, per Statista.

Here’s another: almost a third of UK employees are not only using unauthorized AI tools at work, but they’re paying for them, too, according to Deloitte.

These are precisely the issues Cisco and ServiceNow are confronting with this collaboration.

“Through this partnership, Cisco and ServiceNow are aligning security and AI operations at the platform level,” summarized Zavery.

By combining Cisco’s advanced AI security with ServiceNow’s role as the AI control tower for the enterprise, we’re helping customers operationalize trust—ensuring that AI is governed, secure, and ready to scale.

What’s Coming Next?

As Jeetu Patel, EVP & CPO at Cisco, teased in the above-mentioned YouTube interview:

The market should expect a lot of cross-product integration… We don’t view things as a zero-sum game but instead focus on interoperability. That is incredibly important.

Indeed, there are many more opportunities for interoperability within the Cisco and ServiceNow portfolios beyond AI and security.

For instance, ServiceNow is working with hyperscalers to run its own data centers across many regions. That creates synergies with Cisco in networking, performance management, and more.

However, for Zeus Kerravala, Principal Analyst at ZK Research, the collaboration will likely expand into CX next.

“ServiceNow and Cisco are leaders in their respective fields, and the co-development and innovation can greatly benefit customers,” he said. 

I’m expecting to see the partnership extend to other areas, most notably CX where co-innovation can deliver more value to more customers.

A potentially significant opportunity in the CX market is to bring together the Webex Contact Center and ServiceNow Customer Service Management platforms.

The solutions currently represent two of the fastest-growing products in their respective portfolios.

Moreover, ServiceNow is actively converging its customer support CRM with rival CCaaS solutions, including Genesys Cloud CX and the Five9 Intelligent CX Platform.

Meanwhile, Cisco recently announced a similar collaboration with Epic, the popular healthcare CRM solutions provider.

By embedding its channels, routing engine, and more into ServiceNow, Cisco could similarly streamline the agent experience and centralize customer support data.

With Cisco and ServiceNow able to provide a robust overarching security architecture, the proposition could be attractive to more cautious contact center buyers.

 

]]>
6 Emerging AI Threats to Contact Centers (and How to Combat Them) https://www.cxtoday.com/contact-center/6-emerging-ai-threats-to-contact-centers-and-how-to-combat-them-webex-by-cisco/ Wed, 19 Mar 2025 10:10:42 +0000 https://www.cxtoday.com/?p=68583 AI promises to reimagine the contact center by automating contacts, elevating employees, and redefining experiences.

However, AI is not just delivering new, game-changing capabilities to service teams; it’s also bringing new tools to attackers.

Recognizing this, contact center leaders must understand emerging threats to their operations and customers.

As such, CX Today reached out to Santosh Kumar, Chief Security Architect at Cisco, to identify six new risks of AI and how to combat them.

1. AI Voice Phishing

In February, a startup named “Zyphra” launched two open text-to-speech (TTS) models, each capable of cloning someone’s voice with as little as five seconds of sample audio.

An impressive achievement? Absolutely. But one with several risks to many businesses.

After all, with such technology, a fraudster may conduct a voice phishing attack that can convincingly bypass voice biometric systems.

For instance, an attacker could call a bank, pass the voice recording as authentication, and gain full access to the account.

That may seem far-fetched, but—in November—a BBC journalist successfully used voice cloning technology to bypass voice ID systems at two prominent UK banks.

The threat is significant. Indeed, OpenAI stalled the release of a similar solution last year, warning businesses to “phase out voice-based authentication”.

Commenting on this threat, Kumar noted: “The growth of AI-driven voice phishing has increased by 3,000 percent compared to two years ago.

“To mitigate this, it’s crucial to implement anti-spoofing mechanisms, multi-factor authentication, and liveness tests to verify the caller’s presence.”

Companies that haven’t already implemented similar voice biometric protections are especially vulnerable to this AI threat.

2. Privacy Risks

The growing use of machine learning (ML) models in contact centers introduces new challenges. These go beyond the scope of traditional practices–like encryption, access controls, and GDPR compliance–which, of course, remain essential.

Yet, businesses must consider new practices to protect against new breaches of these models.

For instance, there are “membership inference attacks”, where a fraudster attacks an ML model by inputting specific queries to determine if certain individuals’ data were used in their training.

In doing so, the attacker may access that individual’s personal information.

Additionally, they may gain insight into how the model was trained. That could allow them to tamper with it or create a fraudulent duplicate – as scammers are doing more and more.

To mitigate such AI threats, Kumar advises against leveraging machine learning models trained on small datasets and ensuring the model has gone through adversarial testing.

“Every model in our pipeline undergoes adversarial testing before deployment,” said Kumar.

“We also explore differential privacy techniques to ensure prediction vectors remain ambiguous, preventing attackers from extracting precise information.”

Remember, ML models often memorize sensitive data, so always treat them cautiously.

3. Chatbot Attacks

Chatbots offer a common entry point for attacks, especially those powered by machine learning. After all, they can be targeted by adversarial attacks like those highlighted above.

Yet, as businesses power bots with large language models (LLMs), there’s now a risk of “prompt injection” attacks. These are either direct – aiming to trigger specific responses – or indirect – striving to change the virtual agent’s behavior.

Via both methods, users can trick the bot into performing prohibited tasks.

These attack methods received widespread publicity after security researcher Johann Rehberger used similar techniques to tamper with Google Gemini’s long-term memory.

However, there are other chatbot attacks to guard against. For instance, a fraudster could manipulate the bot into adopting a persona. Alternatively, they may exploit AI’s limited context window to overload it with irrelevant data, hampering its performance.

Given these risks, Kumar recommends a multifaceted approach to safeguarding bots. “Mitigating chatbot threats involves strategies like adversarial testing, continuous model evaluation, input validation, and preventing prompt injection attacks,” he recommended.

Nevertheless, businesses must first understand these attack vectors to effectively enact these strategies and ensure AI remains reliable.

4. Model Poisoning

Not all threats come from external fraudsters. Some attacks come from within.

Consider model poisoning. This occurs when an insider injects malicious data during model training, creating backdoors for attacks.

For example, they may introduce poisoned data to an AI-powered security solution designed to detect malware. As a result, it may miss specific threats.

As such, contact centers must ensure their providers follow the OWASP Top 10 for LLM principles and ensure poison detection methods are built-in, suggests Kumar.

“We’re also leveraging Cisco’s AI Defense product, which enhances protection against such attacks,” he noted. “Our AI-specific pipeline includes continuous monitoring and testing to detect and mitigate threats early.”

5. API Weaknesses

Enterprises often integrate their contact centers with various point solutions for conversational analytics, forecasting, self-service, and more.

It’s critical to maintain strict authentication and authorization controls for these APIs.

After all, while APIs face similar threats as software and web applications, they also have unique vulnerabilities that demand special attention.

For instance, APIs – like chatbots – are susceptible to injection attacks via SQL injection, remote code execution, and cross-site scripting (XSS).

Contact center IT teams can ensure consistent input validation and leverage API management platforms to guard against such risks.

However, businesses must prepare for more than just API injections. Service availability threats, where APIs are overwhelmed with requests, and user identity risks are also concerns.

Deploying an API gateway and volumetric defense tools are best practices here.

6. Supply Chain Frailties

With the rush to adopt AI and ML, many companies turn to third-party solutions. While cost-effective, these solutions can introduce significant risks if not properly vetted.

For instance, they may be unpatched, depend on other components/services, or contain precarious open-source components.

Therefore, gaining assurances from vendors against supply chain attacks is critical.

As an example, Cisco has enforced rigorous supply chain security and compliance practices for over 20 years, whether for on-premise libraries or modern SaaS integrations. “This ensures the integrity and security of our ecosystem,” added Kumar.

The tech giant has also developed a Responsible AI Framework, outlining its approach to ethical and legal AI development and integration.

Combatting Contact Center AI Threats with Cisco

Cisco uniquely delivers customer experience solutions alongside a deep security portfolio.

In 2024, Cisco restructured its product divisions, including security and collaboration, to operate under a single Chief Product Officer, Jeetu Patel. Furthermore, Cisco consolidated its Webex Contact Center and CPaaS (Communications Platform as a Service) offerings under the leadership of Jay Patel. This strategic alignment was designed to empower customer experience leaders to proactively address emerging risks.

Cisco is uniquely positioned to deliver AI-enabled Webex Contact Center with security and compliance built into the foundation. With Cisco’s unique positioning on AI threat defense, it can further solidify customers’ overall trust by providing robust security and privacy posture.

To learn more about Cisco’s contact center portfolio, visit their website.

]]>