Security and Compliance - Unified Communications & Collaboration - Tech News - CX Today https://www.cxtoday.com/tag/security-and-compliance/

Zendesk and Microsoft Target The Small Business Market in Latest Partnership https://www.cxtoday.com/security-privacy-compliance/zendesk-and-microsoft-targets-the-small-business-market-in-latest-partnership/ Mon, 01 Dec 2025 19:00:36 +0000 https://www.cxtoday.com/?p=81107 Zendesk has expanded its partnership with Microsoft to enhance employee services for smaller businesses. 

With Microsoft 365 products integrated into the software company’s platform, Zendesk customers can access Agent 365 capabilities for intelligent productivity. 

In turn, Microsoft has implemented Zendesk Agent within Microsoft 365, allowing its customers to access tools that enhance service productivity and workflow efficiency. 

Craig Flower, Chief Information Officer at Zendesk, highlighted how the partnership expansion would improve Zendesk’s ability to deliver a superior customer experience. 

“Our collaboration with Microsoft on Agent 365 and Zendesk Agent for Microsoft 365 Copilot is a pivotal moment for Zendesk,” he explained. 

“This collaboration not only solidifies our position as a leader in enterprise AI automation but also ensures that Zendesk remains at the forefront of the evolving digital worker landscape.  

“By integrating with Agent 365 and Microsoft 365 Copilot, we are empowering our customers with both autonomous and streamlined support capabilities, optimizing operations, and ultimately delivering a more efficient and reliable employee experience within Microsoft 365.” 

Improving Service Experience 

This partnership aims to upgrade the small business experience by deploying both tools to meet each company’s tailored needs. 

By establishing Microsoft Agent 365 within Zendesk’s platform, the AI offers autonomous ticket management support for Zendesk’s customers, reducing the need for human intervention. 

These capabilities include ticket creation, handling, status monitoring, and communication management within Microsoft’s environment to ensure data governance requirements are met. 

This allows human service agents to shift away from constantly reviewing routine queries and focus on high-demand, complex tasks. 

In return, Zendesk Agent has been integrated into Microsoft 365 Copilot to equip its core apps with ticketing capabilities, such as ticket submissions, status monitoring, and follow-up tasks, without the need to switch tools. 

Similar to the first integration, this capability is managed within Microsoft’s environment, resulting in limited friction for tool management and deployment.  

As a result of the integration, agents receive direct AI-assisted support across several routine task areas, resulting in higher responsiveness, faster resolutions, and reduced waiting times. 

This AI integration allows smaller businesses to elevate their service delivery to the level of any well-established company, delivering higher productivity and service levels. 

By implementing these tools directly within a business, teams can manage their workflows effectively with minimal manual intervention. 

Furthermore, both tools offer customers security and compliance management for handling adoption risk within a governed ecosystem. 

Targeting The Small Business Market 

The integration follows a trend in recent months of larger vendors trying to capture the small-business segment by offering tailored products and services to fit those customers’ needs. 

Earlier in November, Zoom reaffirmed its commitment to providing service capabilities to companies of all sizes, with simple, straightforward tools to enhance their businesses. 

The communications giant notes that businesses with smaller teams have different demands than larger ones, forcing some to juggle various workloads across the board to keep up with demand. 

This means vendors will need to personalize their tools and approaches to cover more ground and bring these smaller businesses up to the industry standard. 

This has been a well-documented issue in the CX industry, as various companies have recently eliminated support for enterprise customers that don’t meet their size standards. 

Unfortunately, some smaller enterprise customers that cannot deliver desirable profit results may be asked to cancel their subscriptions if the vendor can no longer provide the services needed or intends to focus solely on its largest customers. 

However, companies such as Microsoft and Zendesk have offered support for this neglected market, supplying these customers with both tools to elevate their teams while prioritizing their unique requirements. 

Srini Raghavan, Corporate Vice President for Microsoft Copilot and Agent Ecosystem, explained how the collaboration will offer these enterprise customers support across a range of business needs and allow them to improve their issue resolution even at their current capacity. 

He said, “AI is transforming how organizations deliver employee service, and Microsoft’s collaboration with Zendesk is leading that change by enabling a new era of intelligent support. 

“We’re combining the power of Microsoft 365 Copilot’s intelligence with Zendesk’s modern service platform, enabling employees to resolve IT, HR, and Finance issues seamlessly within the tools they use every day.” 

Microsoft Steps Up Efforts to Support European Customers’ Data Sovereignty https://www.cxtoday.com/security-privacy-compliance/microsoft-supports-europe-customer-data-sovereignty/ Mon, 01 Dec 2025 19:00:33 +0000 https://www.cxtoday.com/?p=81138 Data sovereignty is top of mind for business leaders across Europe, shaping strategic decisions at Microsoft’s customers, according to panelists at the tech giant’s European Digital Commitment Day in Vienna, Austria last week.

Digital sovereignty, the ability for an organization to maintain clear control over how its data is stored, accessed, and governed, has moved from a technical concern to a board-level priority. As organizations expand their digital footprints and accelerate cloud adoption, rising regulatory scrutiny and growing customer expectations are forcing businesses to rethink how they manage data.

Sovereignty means different things to different people, the panelists noted, but the common thread is the need to take control over customer data, which has become essential to maintaining trust. The pressure to demonstrate that control is now shaping transformation plans, vendor choices and long-term customer experience strategies.

Control of Critical Data Is Becoming a Strategic Must

The energy crisis following the invasion of Ukraine exposed the geopolitical dimension of critical infrastructure, reinforcing the need for systems that can operate independently in extreme circumstances.

“Digital sovereignty is about stability and resilience,” said Julia Weberberger, Head of Corporate Strategy at Energie AG Oberösterreich, describing it as a source of power. “[W]e have to make sure that we operate our critical data on our own. We operate our own data center, with emergency power supply, and rely on a multi-provider strategy to create redundancies… It’s also very important that we build expertise in digital sovereignty in Europe, but also within our company.”

Europe is developing a new mindset built on innovation and security, Weberberger said, shaping companies, knowledge, opinions and even social narratives. In this environment, European data sovereignty is becoming a key strategic concern that requires balance.

As Martina Saller, Public Sector Sales Lead at Microsoft Austria said:

“It’s not a black and white discussion. It’s not about choosing the path of sovereignty or choosing the path of innovation. It’s about balancing and orchestrating… a risk-based approach.”

That layered approach should separate highly sensitive workloads from those suited for cloud-based innovation.

Public administrators highlighted that sovereignty is multidimensional: technical, legal, economic and emotional. What customers want above all is visibility and choice. As one leader emphasized, beyond control over data processing and storage, true sovereignty also means being able to choose the parts of a technology package they need rather than being required to buy licenses for bundles, which drives up costs.

Procurement rules, however, are still playing catch-up. With different requirements scattered across the EU, organizations often end up doing the same work multiple times. A more unified approach that allows for shared certifications and tech that plays nicely across borders would make it easier for businesses and public bodies to build modern, sovereign digital systems. And to make sure those sovereignty rules help innovation instead of getting in the way, organizations say they need clear guidance and strong partnerships with their tech providers.

What Customers Need from Cloud Partners

A recurring message throughout the discussion was that sovereignty cannot be achieved in isolation. Customers expect their cloud partners to help them meet changing regulatory, security and operational demands.

As Norbert Parzer, Certified Public Accountant, Tax Advisor and Partner at EOS put it, “first find the companion before you start the journey.”

To address concerns around extraterritorial data access, Jeff Bullwinkel, VP and Deputy General Counsel, Corporate External and Legal Affairs at Microsoft EMEA, detailed the steps the vendor has taken to provide assurance and legal protection.

The tech giant has built the EU Data Boundary for the Microsoft Cloud to “mitigate the risk, or reduce the surface area of risk by just reducing situations in which data is transferring from one continent to another.”

Just as crucial is Microsoft’s assurance that it will resist demands from governments to divulge customer data, Bullwinkel said:

“When Microsoft gets a request or a demand in order for data from any government around the world, we have a contractual obligation to litigate against that order whenever there’s a lawful basis for doing so. And we have quite a history of doing that…with a view toward guarding against that kind of risk and so we will continue in the future as well.”

Microsoft has also expanded its sovereign controls and confidential computing to ensure that customers hold the keys to their data.

The vendor recently announced expanded capabilities for its Sovereign Public Cloud and Sovereign Private Cloud. By the end of this year, customers in four countries—Australia, the United Kingdom, India and Japan—will have the option to have their Microsoft 365 Copilot interactions processed in-country. This will be expanded to 11 more countries in 2026: Canada, Germany, Italy, Malaysia, Poland, South Africa, Spain, Sweden, Switzerland, the United Arab Emirates, and the U.S.

These capabilities directly address customer expectations for operational autonomy and regulatory compliance.

Partnerships help empower organizations to keep control over their processes and architecture, so that digital transformations are secure and interoperable. Organizations across sectors are embracing AI, but they need to be sure that the models they use preserve transparency and control.

“There are many areas we see it’s important to have a good collaboration. And for that, trust is… obligatory. It’s the absolutely necessary thing. And it cannot just be a marketing promise,” Weberberger said.

The use of large language models (LLMs) raises critical questions when it comes to maintaining control over customer data, Weberberger noted, highlighting the need for transparency around who trains the data, who defines which information AI models are allowed to use, how ethical principles are implemented and who has the control and influence over the models.

“We need answers in the future when it comes to… how these LLM models are trained. Many providers tell us ‘we don’t use the customer data to train our LLM.’ But for us, still, the question remains, but how do the providers develop their LLMs when they don’t use the customer data to train them? Here we need clear agreements that we all know how it works, and openness to trust.”

For critical sectors like energy, innovation must align with stringent risk-management requirements without compromising safety or resilience.

Data Sovereignty as a Shared European Project

Panelists underscored the need for different regulators in Europe to get on the same page when it comes to digital rules, to create a clearer, more unified set of standards that works in practice and gives organizations the confidence to keep innovating.

“Policy makers and industry representatives should work together on defining clear, understandable and practical frameworks, which has not always happened in the past,” Parzer said.

“It’s about establishing certainty for market participants at the end… They should understand that innovation is not a luxury. It is just an enabler for our economic growth and insurance for our future. So it is all about defining rules that are going to balance innovation with compliance.”

And when those standards line up, it doesn’t just cut down on compliance headaches — it makes it easier for governments and regulated industries to embrace AI and cloud tools, giving them the guardrails they need to move ahead with confidence.

The conversation made one point clear: sovereignty is no longer a static concept. It is a shared responsibility shaped by policy, technology, and partnership. Customers expect cloud providers not only to deliver secure platforms, but also to collaborate, openly and continuously, on the frameworks, tools, and governance models that will define Europe’s digital future.

As the panel demonstrated, when customers, policymakers, and technology providers align around transparency, control and trust, Europe can innovate at the pace required to remain resilient and competitive.

“I think we cannot expect this topic is going to go away,” Bullwinkel said. “These things are front of mind, absolutely, for our customers, for our partners, for government leaders… Things we’ve been talking about… around data privacy, around data security, around resilience, around data residency, these are all things that will continue to inform the conversation.”

OpenAI Discloses Mixpanel Hack, Highlighting Risks in Third-Party Data Security https://www.cxtoday.com/security-privacy-compliance/openai-discloses-mixpanel-hack-highlighting-risks-in-third-party-data-security/ Mon, 01 Dec 2025 10:22:26 +0000 https://www.cxtoday.com/?p=76794 OpenAI has been affected by a security breach at Mixpanel, a data analytics vendor the GenAI developer used to support the frontend of its API product. The incident highlights the growing risk around third-party integrations and the potential for customer data held by the major AI providers to be exposed.

On November 9, 2025, Mixpanel notified OpenAI that an attacker had gained unauthorized access to part of its systems and exported a dataset containing some customer information and analytics data related to the API. Mixpanel shared the affected dataset with OpenAI on November 25, the company stated in a blog post.

The breach occurred within Mixpanel’s systems, and there was no unauthorized access to OpenAI’s infrastructure or systems. ChatGPT and other OpenAI products were not affected. “No chat, API requests, API usage data, passwords, credentials, API keys, payment details, or government IDs were compromised or exposed,” OpenAI stated. It also confirmed that session tokens, authentication tokens, and other sensitive details for OpenAI services were not involved.

But Mixpanel’s systems had access to user profile information from platform.openai.com. According to OpenAI, the information that may have been affected included:

  • Users’ names and email addresses
  • Operating system, browser and location (city, state, country) used to access the API account
  • Referring websites
  • Organization or User IDs associated with the account

OpenAI has removed Mixpanel from its production services and said it is working with the company as well as other partners to gauge the scope of the incident and determine whether any further response actions are needed. It is in the process of directly notifying the organizations, admins and users that were affected by email.

“While we have found no evidence of any effect on systems or data outside Mixpanel’s environment, we continue to monitor closely for any signs of misuse,” the post stated.

The incident is a reminder that exposure of non-critical metadata can introduce security risks, and sharing identifiable customer information with third parties should be avoided. As Ron Zayas, Founder and CEO of Ironwall by Incogni, told CX Today in a recent interview:

“The smart play is to learn how to sanitize your data. You don’t have to share 100 pieces of information on one of your customers with an outside company. It’s stupid. Why are you sharing all that customer information?”

Enterprises often underestimate the value of metadata to attackers, as it doesn’t contain critical information like customers’ login credentials or payment details. But malicious actors use the information to create credible phishing or impersonation campaigns, which are becoming an effective way to deploy ransomware attacks through social engineering. Having a person’s real name, actual email address, location, and confirmation that they use OpenAI’s API makes malicious messages look far more convincing.

OpenAI acknowledged this in the blog post, advising its API users:

“Since names, email addresses, and OpenAI API metadata (e.g., user IDs) were included, we encourage you to remain vigilant for credible-looking phishing attempts or spam.”

Users should “[t]reat unexpected emails or messages with caution, especially if they include links or attachments. Double-check that any message claiming to be from OpenAI is sent from an official OpenAI domain,” the post added. It also encouraged users to protect their account by enabling multi-factor authentication “as a best practice security control” and noted that OpenAI doesn’t request credentials such as passwords, API keys or verification codes through email, text or chat.
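The advice to verify that a message comes from an official domain can be sketched programmatically. The check below is a hedged illustration only: the allowlist of official OpenAI sending domains is an assumption (consult OpenAI's own security guidance for the real list), and an exact-match comparison is used so that lookalike domains such as `openai.com.evil.io` are rejected.

```python
import re

# Hypothetical allowlist -- the actual set of official OpenAI sending
# domains is an assumption here, not confirmed by the source.
OFFICIAL_DOMAINS = {"openai.com", "email.openai.com"}

def sender_domain(from_header: str) -> str:
    """Extract the domain from a From header like 'Name <user@host>'."""
    match = re.search(r"<?([\w.+-]+)@([\w.-]+)>?", from_header)
    return match.group(2).lower() if match else ""

def looks_official(from_header: str) -> bool:
    """True only when the sender domain exactly matches the allowlist.

    An exact match defeats lookalike tricks such as 'openai.com.evil.io',
    which a naive substring or endswith check would wrongly accept.
    """
    return sender_domain(from_header) in OFFICIAL_DOMAINS

print(looks_official("OpenAI <noreply@openai.com>"))        # True
print(looks_official("Support <help@openai.com.evil.io>"))  # False
```

A domain check like this is only one layer; it does not detect spoofed headers, which is why OpenAI also points users toward multi-factor authentication.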

Complex AI Stacks Open More Ways In for Attackers

As with recent cyberattacks exploiting third-party platforms, the incident serves as a reminder that API-based architectures will only become more vulnerable with the use of AI in enterprises. AI systems are too complex for most companies to develop in-house, so they build stacks of third-party tools using APIs, all of which collect operational metadata and open up more attack vectors.

While vendors and enterprises are tempted to collect as much customer information as possible to train AI models as well as deliver personalization, they need to be judicious in the types of information they collect and store, Zayas said, as the risk of data breaches in the AI era will become “much more significant.”

“Companies are opening up all of their data and feeding it to an AI engine. And how secure are the AI agents? They’re led by big companies, but big companies get breached all the time.”

Zayas warned that the major AI and cloud providers like OpenAI, Google and AWS will become increasingly vulnerable as hackers target them for their wealth of data:

“When your data is sitting there, you’re going to get attacked. If I can pull out information… from an AI provider, I am going to get so much rich data that I don’t have to worry about attacking a lot of companies… That’s where companies and criminals are putting all their time and effort—going to the big ones. If you’re giving them data, you are much more of a target.”

Enterprises need to get smarter about the data they share with AI tools to get the outcomes they need. Customers’ personally identifiable information can often be removed to anonymize the data without affecting how the tools work, Zayas noted.
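That kind of sanitization before a handoff can be sketched in a few lines. This is a minimal illustration, not a complete anonymization scheme: the field names and regex patterns are assumptions, and production systems should use a vetted redaction library and review every field they transmit.

```python
import re

# Illustrative patterns for PII embedded in free text -- assumptions,
# not an exhaustive or production-grade set.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w.-]+\.\w+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

# Fields dropped outright before the record leaves the company.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

def sanitize_record(record: dict) -> dict:
    """Drop direct-identifier fields and mask PII inside free-text values."""
    clean = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # drop the field entirely rather than mask it
        if isinstance(value, str):
            value = EMAIL_RE.sub("[EMAIL]", value)
            value = PHONE_RE.sub("[PHONE]", value)
        clean[key] = value
    return clean

ticket = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "issue": "Login fails; contact me at jane@example.com or +1 555-010-9999",
    "plan": "pro",
}
print(sanitize_record(ticket))
```

The sanitized ticket keeps the operational fields an AI tool actually needs (`issue`, `plan`) while the identifying details never leave the building.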

“You’re going to see the breaches being more and more related to the amount of information that’s coming out with AI, the amount of information that’s being enriched, and companies are going to suffer from this.”

Enterprises also have to train employees to avoid carelessly uploading spreadsheets and other files to chatbots like ChatGPT, because even if a company’s systems aren’t hacked, malicious actors may be able to extract customer information using certain prompts.

As the adoption of AI tools accelerates, enterprises should treat every handoff to an AI provider as a potential point of exposure of their customer data. Limiting the amount and sensitivity of information sent to these systems and designing workflows that avoid unnecessary data transfer can reduce the impact of a breach, protecting customers as well as the company’s reputation.

 

Hardware v Software: The Security Showdown Shaping the Future of Noise Cancellation https://www.cxtoday.com/tv/hardware-v-software-the-security-showdown-shaping-the-future-of-noise-cancellation-cyberacoustics/ Thu, 27 Nov 2025 15:45:27 +0000 https://www.cxtoday.com/?p=76762

Rhys Fisher sits down with Thor Mitskog, CEO of Cyber Acoustics, for a no-nonsense deep-dive into one of the most overlooked security debates in modern customer experience: hardware vs. software noise cancellation.

As enterprises race toward the cloud, Thor breaks down the hidden compliance traps, IT headaches, and cybersecurity risks that come with that shift – and why hardware might just be the unsung hero of secure communication.

If you’ve ever wondered how something as simple as a headset could protect sensitive data in finance, healthcare, or contact centers, this conversation is a must-watch.

When it comes to noise cancellation, the question isn’t just “how good does it sound?” – it’s “how safe is it?” 

Join Rhys Fisher and Thor Mitskog as they unpack the real-world security implications of cloud-based software tools and why leading enterprises are turning back to hardware-driven solutions for peace of mind and performance.

Key discussion points

Cloud security pitfalls: How moving audio processing to the cloud opens doors to compliance and data breach risks.
Hardware simplicity: Why plug-and-play devices slash IT setup time, cut costs, and sidestep configuration nightmares.
Industry sensitivity: How financial services, healthcare, and contact centers are leading the charge in hardware adoption for regulatory reasons.
Future trends: Why Thor predicts consolidation in cloud noise-cancellation software – and a new wave of intelligent hardware innovation.

Explore Cyber Acoustics’ latest hardware solutions for secure communication.

Subscribe to CX Today for more deep dives on tech, security, and CX innovation.

Share your thoughts below — is your organization still trusting the cloud for critical voice data?

8×8 Enhances Security and Privacy Portfolio For Secure Customer Data Handling https://www.cxtoday.com/security-privacy-compliance/8x8-enhances-security-and-privacy-portfolio-for-secure-customer-data-handling/ Thu, 27 Nov 2025 12:40:48 +0000 https://www.cxtoday.com/?p=76749 8×8 has announced the adoption of a new privacy standard to strengthen customer data protection. 

The cloud communications vendor revealed that it had taken significant measures to strengthen its service governance. 

This strategy will allow the company to expand its range of security and compliance frameworks, establishing itself as a trustworthy provider for customer enterprises. 

The standard in question, ISO/IEC 27018, is a well-established privacy standard used by enterprises worldwide to protect customer data in public cloud environments. 

And with security concerns now at an all-time high, vendors will need to consider how best to protect their customers’ data. 

Darren Remblence, Chief Information Security Officer at 8×8, highlighted how customer demand for security around data management is a bare minimum requirement. 

“Customers should never have to trade speed or innovation for security,” he explained.

“ISO/IEC 27018 gives organizations even stronger guarantees that their data is handled responsibly and transparently.  

“It means they can move faster, meet compliance requirements with confidence, and trust that privacy is built into every part of their communications experience.”

This privacy standard is a code of practice for protecting personal data processed by public cloud providers, shielding private or personally identifiable information (PII) from falling into the hands of third parties. 

It contains core requirements for enterprises that adopt the standard, such as processing data only with customer consent, supporting customer data handling, being transparent about data protection approaches, and implementing strong security measures, including access restrictions and encryption. 

This standard also includes controls for handling data access, use, transparency, and dealing with incident response. 

This assures customers that 8×8 is meeting the higher standards required for reliable data handling. 

What This Means For 8×8 Customers

This implementation into 8×8’s security management system enhances privacy and security for 8×8 Platform for CX, a unified CX communications platform that includes multiple capabilities for customer-facing teams and customer interaction management. 

Customer enterprises can enhance their vendor onboarding routines with faster security evaluation, reduced data exposure risk, and transparency with data handling. 

It also enables 8×8 customers to feel secure in where they place their data, with constant review and improvement from the vendor’s security and compliance team to ensure these standards are kept, including privacy practices, data handling, and cloud architecture. 

And with customers playing a central role, data processing can only happen under customer instruction, and customers are kept consistently informed about where their data is stored and who can access it. 

This highlights 8×8’s commitment to its customers’ privacy and security, assuring that data handling is less likely to be compromised or misused. 

Growing 8×8’s Security Portfolio

The privacy standard also gives 8×8 the chance to build up its security and compliance portfolio to meet growing customer expectations. 

The portfolio already includes similar frameworks, such as ISO/IEC 27001, ISO/IEC 27017, SOC 2, and HIPAA mapping, which involve building and assuring security controls and management within a system, alongside several other regulatory standards that underline 8×8’s commitment to security requirements. 

This decision also comes at a time when customer expectations have risen significantly, following a wave of cyberattacks over the last year that profoundly impacted the customer experience sector, hitting CX giants such as Salesforce, Zendesk, and Google. 

These attacks put data handling processes such as migration and storage at risk, forcing vendors like 8×8 to stay ahead of cyberattack activity. 

Meeting Regulations and Earning Trust in a Data-Rich CX World https://www.cxtoday.com/tv/meeting-regulations-and-earning-trust-in-a-data-rich-cx-world-contentguru-cs-0026/ Tue, 25 Nov 2025 14:06:22 +0000 https://www.cxtoday.com/?p=76670 The contact center has quietly become the most data-intensive function in modern business. What started as simple call logging has evolved into a complex ecosystem where billions of customer interactions generate unprecedented volumes of personal data – data that must be managed, protected, and governed in an increasingly complex regulatory landscape. 

In this exclusive interview, we sit down with Martin Taylor, deputy CEO and co-founder of Content Guru, to explore how CX leaders can successfully navigate the challenges of data ownership while embracing the transformational potential of AI and automation. 

Watch the Full Interview on YouTube 

The Data Explosion: From Calls to Connected Everything 

The scale of data generation in modern contact centers is staggering. “We’re creating billions of records a year just as Content Guru, as is everybody else,” Taylor explains in the interview. “So you’ve got increasingly information not just from calls anymore but from all the digital channels.” 

But it’s the emergence of what Taylor calls the “digital customer” that’s truly transforming the landscape. With predictions of 39 billion Internet of Things devices by 2030, each representing a human in the eyes of regulators, the volume and variety of personal data flowing through CX environments is exploding exponentially. 

“The UK Information Commissioner’s Office considers data generated by IoT devices – be it a movement or somebody’s temperature detected by a smart health device or a smart fridge reporting it is empty – as a piece of personal data,” Taylor notes. 

The Jurisdiction Maze: Where Geography Meets Governance 

One of the most complex challenges facing global organizations is navigating the intersection of geography and regulation. As Taylor explains, “Data is being produced at large volume all over the world, every day, every second. So how that is generated and the rules under which it is being generated vary by geography and by market segment.” 

The implications go far beyond GDPR. While the regulation establishes that “the data subject is ultimately the owner of the data,” the practical responsibilities for processors and vendors create a complex web of obligations that vary by jurisdiction and sector. 

“The EU want it to take place within the EU. The UK want it to take place within the UK, the US in the US,” Taylor explains. “And then if you go to the actual sectors themselves like medical or financial, they’ve got a load of rules of their own and those are done per country as well.” 

Breaking Down Silos: The New Collaborative Imperative 

The complexity of modern data governance is forcing unprecedented collaboration between traditionally separate functions. Taylor shares a real-time example from his own organization: “An example from here today is about something that’s come to my desk about live sentiment analysis and its legality within an EU AI Act context.” 

This seemingly straightforward CX enhancement required input from legal, product, and information security teams – a pattern that’s becoming the norm rather than the exception. “Those sorts of conversations are happening throughout all levels of the value chain from the provider of a service right through to the vendor,” Taylor observes. 

The Death of “Public Cloud” Assumptions 

Perhaps one of the most significant shifts in thinking has been the abandonment of the idea that being “in the cloud” absolves organizations of data responsibility. 

“I think we’ve seen the death of this idea of there being such a thing as a public cloud,” Taylor states emphatically. “Everyone can see that clouds belong to organizations now and that they reside in specific jurisdictions.” 

The AWS outage in Virginia that affected organizations worldwide serves as a stark reminder of this reality. Many affected companies didn’t even know they had connections to that specific location, highlighting the importance of understanding not just whose cloud you’re using, but where it physically resides and under what jurisdictions it operates. 

Balancing Innovation with Responsibility 

As organizations rush to embrace AI and automation, the challenge becomes maintaining innovation velocity while ensuring responsible data handling. Taylor uses an oil refinery analogy to explain this balance: “I think of raw data as like crude oil and then the refining process, fractional distillation. You’re looking now for that kind of high quality racing fuel that we use to feed the AI.” 

Not every implementation needs to become more complex, but those involving AI require higher-quality data and more sophisticated governance. “In some cases, it’s AI, it needs that richer fuel. You can’t feed it the heating oil, because it won’t work,” Taylor explains. 

Looking Ahead: Preparing for 2026 and Beyond 

As we look toward the year ahead, Taylor predicts continued growth in both opportunity and complexity. “We’ve all heard a lot about agentic AI during 2025. I think 2026 is when it starts to get applied,” he notes, while emphasizing that this won’t mean wholesale replacement of human agents. 

Instead, organizations should prepare for “more data, more automation, and that means more data handling challenges.” This will require: 

  • Enhanced Security Postures: Moving beyond perimeter defense to comprehensive data protection throughout its lifecycle 
  • Geographic Strategy: Making deliberate choices about data processing locations based on customer needs and regulatory requirements 
  • Vendor Due Diligence: Evaluating partners not just on technical capabilities but on jurisdictional alignment and compliance frameworks 

The Trust Dividend 

Ultimately, the organizations that succeed in this complex landscape will be those that can demonstrate they’re worthy stewards of customer data. As Taylor concludes, “There’s going to be a lot more scrutiny of how all of this wonderful new processing is going to happen.” 

The complexity brings opportunity for those willing to invest in getting it right. By building transparent, responsible data governance practices, organizations can turn compliance obligations into competitive advantages – earning customer trust while enabling innovation. 

The question isn’t just who owns your customer data – it’s whether you’re prepared to prove you deserve that ownership. 

Continue the Conversation 

For more insights on navigating these challenges, visit contentguru.com 

Salesforce Launches Tools to Support Visibility in Large Scale AI Deployment (Mon, 24 Nov 2025)

Salesforce has announced its new observability tools for Agentforce 360. 

This comes after its annual report revealed that AI implementation had increased by 282% since 2024. 

These tools enable enterprises to deploy AI agents without worrying about the reliability and safety of their performance within a system. 

Salesforce’s observability tools provide AI agents with the capabilities to analyze performance, optimize interactions, and monitor stability. 

Agent Analytics

This capability lets enterprises see how well an AI agent is operating by monitoring its actions, tracking whether performance is improving or declining, and identifying where pain points originate. 

This can be turned into performance data, trends, and insights to understand how efficiently these agents are performing and take actionable steps to improve their usage. 

This can also be done across all implemented agents, allowing enterprises to view their agents’ overall effectiveness on customer interaction and support their continuous improvement. 

Agent Optimization

As a key observability capability, Optimization offers customer enterprises full transparency into each agent interaction. 

Customers can uncover how agents make decisions and what led them to make those choices, highlighting performance gaps and session flows to diagnose any issues and deduce the steps needed to improve its performance. 

This can include prompt, rule, or data source adjustments to solve misinterpreted information, inconsistent results or agent hesitation. 

Salesforce provides access to end-to-end visibility for customers to view each agent’s response and action, even with larger, complicated action chains. 

For recurring issues, similar requests can be aggregated to uncover larger patterns or trends. 

Customers can also identify an agent’s configuration issues to pinpoint how an agent’s behaviour is affecting its operation and uncover which areas need to be retrained or personalized further for improved performance. 

Agent Health Monitoring 

This capability can monitor an AI agent’s reliability and safety level to ensure that it is running as expected. 

It provides almost real-time visibility and alerts when the agent is performing unpredictably, notifying the company before any significant damage takes hold. 

It measures an agent’s ability to handle requests, time taken to respond, and tracks incidents such as failures, breaks in activity, or invalid responses. 

By leveraging the capability, teams can speedily detect and resolve issues to minimize agent downtime and continue productivity. 

This tool is formed by two of Agentforce’s components, acting as the foundation for the observability tool by supplying the data and governance structure needed to monitor agents: 

  • Session Tracing Data Model: By logging every agent interaction, the data model can store all its data in Data 360 and provide the observability tool the means to generate reliable analytics, error identifiers, and support optimization for unified visibility.
  • MuleSoft Agent Fabric: This enables enterprises to control, register, and review agents to justify how they function and interact. 
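
In generic terms, the session-tracing idea, logging every agent step as a structured event that analytics can later query, might be sketched as follows. This is a hedged illustration only: the event fields and the `trace` helper are assumptions, not Salesforce’s actual data model.

```python
import time

# Illustrative session tracing: every agent step becomes a structured event
# so analytics and error identification can be built on top of the log.
LOG = []

def trace(session_id: str, step: str, detail: dict) -> None:
    """Append one structured event for a single agent step."""
    LOG.append({
        "ts": time.time(),
        "session": session_id,
        "step": step,    # e.g. "retrieval", "decision", "response"
        "detail": detail,
    })

trace("s-1", "retrieval", {"doc": "refund-policy-v3"})
trace("s-1", "response", {"doc": None, "text": "Refunds are honoured within 30 days."})

# Downstream analytics can now answer "which documents fed this reply?"
docs = [e["detail"]["doc"] for e in LOG if e["step"] == "retrieval"]
```

With every interaction logged this way, reliability analytics and error identification become simple queries over the event stream rather than after-the-fact guesswork.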

AI Implementation Report 

In a report published in November, Salesforce announced that AI implementations had increased by 282% since last year. 

This data suggests that companies are now in a far better position to scale pilot projects rather than remain stuck in experimentation. 

Despite this, data governance, security, and trust remain high priorities, requiring risk management across workflows. 

This means that more companies are going to require higher visibility and control across large-scale AI deployments, which is where Salesforce’s observability tools come in. 

By supporting enterprises with agent interactions, Salesforce’s observability tools can decrease operational risk by allowing teams to keep up to date with agent visibility and analytics to keep agent deployments stable. 

Reddit, a customer of Salesforce, highlighted how Salesforce has allowed the customer enterprise to scale agents securely through consistent visibility. 

John Thompson, VP of Sales Strategy and Operations at Reddit, stated: “By observing every Agentforce interaction, we can understand exactly how our AI navigates advertisers through even the most complex tools.  

“This insight helps us understand not just whether issues are resolved, but how decisions are made along the way. 

“Observability gives us the confidence to scale these agents, continuously monitor performance, and make improvements as we learn from their interactions.”

AI Hallucinations Start With Dirty Data: Governing Knowledge for RAG Agents (Sun, 23 Nov 2025)

When AI goes wrong in customer experience, it rarely does so without commotion. A single AI hallucination in CX, like telling a customer their warranty is void when it isn’t, or fabricating refund rules, can undo years of brand trust in seconds, not to mention attracting fines.

The problem usually isn’t the model. It’s the data behind it. When knowledge bases are out of date, fragmented, or inconsistent, even the smartest AI will confidently generate the wrong answer. This is why knowledge base integrity and RAG governance matter more than model size or speed.

The urgency is clear. McKinsey reports that almost all companies are using AI, but only 1% feel they’re at maturity. Many also admit that accuracy and trust are still major barriers. In customer experience, where loyalty is fragile, a single hallucination can trigger churn, compliance headaches, and reputational fallout.

Leading enterprises are starting to treat hallucinations as a governance problem, not a technical one. Without governed data, AI becomes a liability in CX. With it, organizations can build automation that actually strengthens trust.

What Are AI Hallucinations and What Causes Them?

When customer-facing AI goes off-script, it usually isn’t because the model suddenly turned unreliable. AI hallucinations in CX happen when the system fills gaps left by bad or missing data. Picture a bot telling a customer they qualify for same-day refunds when the actual policy is 30 days. That’s not creativity, it’s a broken knowledge base.

Hallucinations tend to creep in when:

  • Knowledge bases are outdated or inconsistent, with different “truths” stored across systems.
  • Context is missing, for example, an AI forgetting a customer’s purchase history mid-conversation.
  • Validation checks are skipped, so the bot never confirms whether the answer is still correct.
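
The third gap, skipped validation, is often the cheapest to close. Below is a minimal sketch, assuming an in-memory knowledge base and an invented 90-day freshness SLA, of checking a bot’s drafted answer before it reaches the customer:

```python
from datetime import date

# Hypothetical knowledge base: policy name -> (value, date last reviewed).
# Both the entries and the staleness SLA are invented for this example.
KNOWLEDGE_BASE = {
    "refund_window_days": (30, date(2025, 11, 1)),
}
MAX_STALENESS_DAYS = 90

def validate_answer(policy_key: str, claimed_value, today: date):
    """Reject an AI-drafted answer that contradicts or relies on stale knowledge."""
    if policy_key not in KNOWLEDGE_BASE:
        return False, "unknown policy - escalate to a human agent"
    value, last_reviewed = KNOWLEDGE_BASE[policy_key]
    if (today - last_reviewed).days > MAX_STALENESS_DAYS:
        return False, "knowledge entry is stale - escalate"
    if claimed_value != value:
        return False, f"answer contradicts knowledge base (expected {value})"
    return True, "ok"

# A bot claiming same-day refunds (0 days) is caught before the customer sees it
ok, reason = validate_answer("refund_window_days", 0, date(2025, 12, 1))
```

The point is not the specific checks but that a confirmation step exists at all: an answer that contradicts the governed source never leaves the system.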

The risks aren’t small. 80% of enterprises cite bias, explainability, or trust as barriers to using AI at scale. In CX, inaccuracy quickly turns into churn, complaints, or compliance headaches.

There are proven fixes. Enterprises just need to know what to implement before they go all-in on agentifying the contact center.

The Real-World Impact of AI Hallucinations in CX

The stakes around AI hallucinations in CX translate directly into lost revenue, churn, and regulatory risk. A bot that invents refund rules or misstates eligibility for a benefit doesn’t just frustrate a customer – it creates liability.

Some of the impacts seen across industries:

  • Retail: Misleading warranty responses trigger unnecessary refunds and drive shoppers to competitors.
  • Public sector: Incorrect entitlement checks leave citizens without services they qualify for.
  • Travel: Fabricated policy details can mean denied boarding or stranded passengers.

The financial burden is real. Industry analysts estimate that bad data costs businesses trillions globally each year, and the average cost of a single data-driven error can run into millions once churn and remediation are factored in.

Case studies show the impact, too. Just look at the stories about ChatGPT creating fictitious documents for lawyers, or making up statements about teacher actions in education. Every hallucination is a reminder: without knowledge base integrity and RAG governance, automation introduces more risk than reward. With them, AI becomes a growth driver instead of a liability.

Why Hallucinations Are Really a Data Integrity Problem

It’s tempting to think of AI hallucinations in CX as model failures. In reality, they’re usually symptoms of poor data integrity. When the information feeding an AI is out of date, inconsistent, or fragmented, the system will confidently generate the wrong answer.

Knowledge base integrity means more than just storing information. It’s about ensuring accuracy, consistency, and governance across every touchpoint. Without that, CX automation is built on sand.

Common breakdowns include:

  • Outdated articles: A policy change goes live, but the bot still cites the old rules.
  • Conflicting records: Multiple “truths” for the same customer, leading to contradictory answers.
  • Ungoverned logs: Data pulled in without privacy controls, creating compliance exposure.

Some organizations are already proving the value of treating hallucinations as governance problems. Adobe Population Health saved $800,000 annually by enforcing stronger data controls, ensuring agents and AI systems pulled only from validated knowledge sources.

Building the Foundation: Clean, Cohesive Knowledge

Solving AI hallucinations in CX starts with building a solid data foundation. No model, no matter how advanced, can perform reliably without knowledge base integrity. That means every system, from the CRM and contact center platform to the CDP, has to point to the same version of the truth.

A few steps make the difference:

  • Unified profiles: Use CDP to connect IDs, preferences, and history across systems. Vodafone recently reported a 30% boost in engagement after investing in unified profiles and data quality.
  • Agent-ready records: Golden IDs, schema alignment, and deduplication stop bots from improvising. Service accuracy depends on knowing which record is the right one.
  • Data freshness: Expired knowledge is one of the fastest routes to hallucination. Setting SLAs for update frequency ensures AI doesn’t serve answers that are weeks, or years, out of date.
  • Governance layers: Microsoft’s Purview DLP and DSPM frameworks, for example, help enforce privacy boundaries and ensure sensitive data is never exposed to customer-facing AI.
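
The golden-ID and deduplication idea in the second bullet can be sketched in a few lines. The record fields and the “most recently updated row wins” merge rule below are illustrative assumptions, not any vendor’s actual logic:

```python
from collections import defaultdict

# Hypothetical customer records from three separate systems, keyed by email.
records = [
    {"source": "crm", "email": "a@example.com", "name": "A. Smith", "updated": "2025-11-20"},
    {"source": "helpdesk", "email": "a@example.com", "name": "Alice Smith", "updated": "2025-11-28"},
    {"source": "commerce", "email": "b@example.com", "name": "B. Jones", "updated": "2025-10-02"},
]

def build_golden_records(rows):
    """Collapse duplicates onto one 'golden' record per customer."""
    by_key = defaultdict(list)
    for row in rows:
        by_key[row["email"]].append(row)
    # ISO-8601 dates sort lexicographically, so max() picks the freshest row
    return {email: max(dupes, key=lambda r: r["updated"])
            for email, dupes in by_key.items()}

golden = build_golden_records(records)
```

An agent querying the golden record always gets one answer for “which record is the right one”, instead of improvising between conflicting copies.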

Clean, governed data is what allows automation to scale safely. In fact, Gartner notes that automation without unified data pipelines is one of the leading causes of failure in AI deployments.

The lesson is clear: AI only works if the underlying knowledge is accurate and consistent. RAG governance begins not at the model layer, but in how enterprises treat their data.

Choosing Your LLM Carefully: Size Isn’t Everything

When automating CX workflows, the assumption that “bigger means better” often backfires. In fact, purpose-built, smaller language models can outperform broad, heavyweight counterparts, especially when they’re trained for specific customer service tasks.

Here’s what’s working:

  • Smaller, tailored models excel at soft-skill evaluations. In contact center hiring, they outperform general-purpose LLMs simply because they understand the nuances of human interaction better.
  • Efficiency is a major advantage. Smaller models require fewer computational resources, process faster, and cost less to run, making them ideal for real-time CX workflows.
  • They also tend to hallucinate less. Because they’re fine-tuned on targeted data, they stay focused on relevant knowledge and avoid the “overconfident bluffing” larger models can fall into.
  • Distillation, teaching a smaller model to mimic a larger “teacher”, is now a common technique. It delivers much of the performance without the infrastructure cost.

Choosing the right model is a strategic decision: smaller, domain-specific models support RAG governance and knowledge base integrity more effectively, without blowing your budget or opening new risks.

RAG Governance: Why Retrieval Can Fail Without It

Retrieval-augmented generation (RAG) has become a go-to strategy for tackling AI hallucinations in CX. Companies like PolyAI are already using RAG to make voice agents check against validated knowledge before replying, cutting down hallucinations dramatically.

Instead of relying only on the model’s training data, RAG pulls answers from a knowledge base in real time. In theory, it keeps responses grounded. In practice, without proper RAG governance, it can still go wrong.

The risks are straightforward:

  • If the knowledge base is outdated, RAG just retrieves the wrong answer faster.
  • If content is unstructured, like PDFs, duplicate docs, or inconsistent schemas, the model struggles to pull reliable context.
  • If version control is missing, customers may get different answers depending on which copy the system accessed.

That’s why knowledge base integrity is critical. Enterprises are starting to use semantic chunking, version-controlled KBs, and graph-RAG approaches to make sure AI agents retrieve the right data, in the right context, every time.
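
A version-controlled KB can be enforced with a simple retrieval filter. The sketch below, using invented article records, keeps only the latest approved version of each document eligible for retrieval:

```python
# Hypothetical version-controlled knowledge base entries.
articles = [
    {"id": "refunds", "version": 2, "approved": True, "text": "Refunds within 30 days."},
    {"id": "refunds", "version": 3, "approved": False, "text": "DRAFT: same-day refunds."},
    {"id": "warranty", "version": 1, "approved": True, "text": "12-month warranty."},
]

def retrievable_corpus(docs):
    """Keep the single latest *approved* version per article, so RAG can
    never ground an answer in a draft or superseded copy."""
    latest = {}
    for doc in docs:
        if not doc["approved"]:
            continue
        current = latest.get(doc["id"])
        if current is None or doc["version"] > current["version"]:
            latest[doc["id"]] = doc
    return list(latest.values())

corpus = retrievable_corpus(articles)
```

The same filter also answers the version-control problem: every retrieval call sees one canonical copy, so customers cannot get different answers depending on which duplicate the system happened to access.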

Vendors are also moving quickly. Google Vertex Agent Builder, Microsoft Copilot Studio’s RAG connectors, and open-source projects like Rasa’s extensions are designed to enforce cleaner retrieval pipelines. Companies like Ada are proving that governed RAG can cut down false answers in sensitive workflows like background checks.

RAG is powerful, but without governance, it risks becoming a faster way to spread bad information. Grounding AI in trusted, validated sources, through structured retrieval and strong RAG governance, is the difference between automation that builds trust and automation that erodes it.

The Model Context Protocol for reducing AI hallucination

Even with RAG governance, there’s still a missing piece: how the model itself connects to external tools and data. That’s where the Model Context Protocol (MCP) comes in. MCP is emerging as a standard that formalizes how AI systems request and consume knowledge, adding a layer of compliance and control that CX leaders have been waiting for.

Without MCP, connectors can still pull in unreliable or non-compliant data. With MCP, rules can be enforced before the model ever sees the input. That means:

  • Version control: AI agents only access the latest, approved policies.
  • Schema validation: Data must meet format and quality checks before it’s used.
  • Integrity enforcement: Broken or incomplete records are automatically rejected.
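
As a rough illustration of the gate idea (MCP itself is a richer protocol; the field names and checks here are assumptions), a schema-and-integrity filter that rejects records before the model ever sees them might look like:

```python
# Required schema for knowledge records fed to the model (illustrative).
REQUIRED_FIELDS = {"policy_id": str, "effective_date": str, "body": str}

def gate(record: dict) -> dict:
    """Return the record only if it passes schema and integrity checks."""
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in record:
            raise ValueError(f"rejected: missing field '{field}'")
        if not isinstance(record[field], ftype):
            raise ValueError(f"rejected: '{field}' has the wrong type")
    if not record["body"].strip():
        raise ValueError("rejected: empty body (integrity check)")
    return record

good = gate({"policy_id": "P-12", "effective_date": "2025-11-01",
             "body": "Refunds within 30 days."})
```

Enforcing the rules at this layer means a broken or incomplete record fails loudly at ingestion, instead of quietly becoming a hallucinated answer.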

This is particularly relevant in regulated industries. Financial services, healthcare, and the public sector can’t risk AI fabricating eligibility or compliance-related answers. MCP provides a structured way to prove governance at the system level.

Vendors are already moving in this direction. Salesforce’s Agentforce 3 announcement positioned governance and compliance as central to its next-generation agent framework. For CX leaders, MCP could become the difference between AI that “sounds right” and AI that is provably compliant.

Smarter Prompting: Designing Agents to Think in Steps

Even with clean data and strong RAG governance, AI hallucinations in CX can still happen if the model is prompted poorly. The way someone asks a question shapes the quality of the answer. That’s where smarter prompting techniques come in. 

One of the most effective is chain-of-thought reasoning. Instead of pushing the model to jump straight to an answer, prompts guide it to reason through the steps. For example, in a travel entitlement check, the AI might be told to:

  • Confirm eligibility rules.
  • Check dates against the customer record.
  • Validate exceptions before giving a final response.

This structured approach reduces the chance of the AI skipping logic or inventing details to “sound confident.”
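
Put together, a step-wise prompt for the travel-entitlement example might be assembled like this. The step wording and the `build_prompt` helper are illustrative assumptions, not a vendor API:

```python
# Chain-of-thought scaffold: the model must walk each check before answering.
STEPS = [
    "1. State the eligibility rules that apply to this request.",
    "2. Check the travel dates against the customer's record.",
    "3. List any exceptions and whether they apply.",
    "4. Only then give a final answer. If any step is uncertain, reply ESCALATE.",
]

def build_prompt(question: str, customer_record: str) -> str:
    """Assemble a prompt that forces the model through each check in order."""
    return "\n".join(
        ["You are a travel-entitlement assistant. Reason step by step:"]
        + STEPS
        + ["", f"Customer record: {customer_record}", f"Question: {question}"]
    )

prompt = build_prompt("Can I change my flight for free?",
                      "Booking B-991, flex fare, travel 2026-01-10")
```

Note how the last step embeds the escalation guard phrase directly in the instructions, combining chain-of-thought with instruction layering.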

Other strategies include:

  • Context restating: Have the model summarize customer inputs before answering, to avoid missing key details.
  • Instruction layering: Embedding guard phrases like “If unsure, escalate” directly into prompts.

Better prompting changes how the AI reasons. Combined with knowledge base integrity and retrieval grounding, thoughtful prompt design is one of the simplest, most cost-effective ways to cut hallucinations before they ever reach a customer.

Keeping Humans in the Loop: Where Autonomy Should Stop

AI is getting better at handling customer requests, but it shouldn’t be left to run everything on its own. In CX, the cost of a wrong answer can be far bigger than a frustrated caller. A single AI hallucination in CX around something like a loan decision, a medical entitlement, or a refund policy can create compliance risks and damage trust.

That’s why most successful deployments still keep people in the loop. Routine questions like order status, password resets, and warranty lookups are safe to automate. But when the stakes rise, the system needs a clear off-ramp to a human; no company should try to aim for limitless automation.

There are simple ways to design for this:

  • Flagging low-confidence answers so they’re routed to an agent.
  • Escalating automatically when rules aren’t clear or when exceptions apply.
  • Training models with reinforcement from human feedback so they learn when to stop guessing.
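
The first two bullets amount to a routing rule. A minimal sketch, with an assumed confidence threshold and an invented list of high-stakes topics:

```python
# Confidence-based routing: thresholds and topic list are illustrative only.
CONFIDENCE_THRESHOLD = 0.8
HIGH_STAKES_TOPICS = {"loans", "medical", "refund_exceptions"}

def route(answer: str, confidence: float, topic: str):
    """Send low-confidence or high-stakes answers to a human agent."""
    if topic in HIGH_STAKES_TOPICS or confidence < CONFIDENCE_THRESHOLD:
        return ("human", answer)
    return ("bot", answer)

# Routine, high-confidence answers stay automated; sensitive ones do not,
# no matter how confident the model claims to be.
routed = route("You qualify for the loan.", 0.97, "loans")
```

The design choice here is that topic sensitivity overrides model confidence: a loan decision is escalated even at 97% confidence, which is exactly the off-ramp described above.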

Real-world examples prove the value. Ada’s work with Life360 showed that giving AI responsibility for repetitive queries freed agents to focus on tougher cases. Customers got faster answers when it mattered most, without losing the reassurance of human judgment for sensitive issues.

The lesson is straightforward: automation should extend, not replace, human service.

Guardrail Systems: Preventing AI hallucination

AI can be fast, but it still needs limits. In customer service, those limits are guardrails. They stop automation from giving answers it shouldn’t, even when the data looks clean. Without them, AI hallucinations in CX can slip through and cause real damage.

Guardrails take different forms. Some block responses if the system isn’t confident enough. Others make sure refund rules, discounts, or eligibility checks stay within company policy. Many firms now add filters that catch bias or toxic language before it reaches a customer.

The goal isn’t perfection. It’s layers of protection. If one check misses an error, another is there to catch it. Tucan.ai showed how this works in practice. By adding guardrails to its contract analysis tools, it cut the risk of misinterpreted clauses while still saving clients time.
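
The layering idea can be illustrated with three independent checks, any one of which can block a reply. The $500 refund cap, 0.75 threshold, and blocklist are all invented for the example:

```python
import re

# Layered guardrails: each check runs independently, so an error missed by
# one layer can still be caught by the next.
POLICY_REFUND_CAP = 500.0

def within_policy(reply: str) -> bool:
    """Layer 1: any dollar amount quoted must stay within the refund cap."""
    amounts = [float(a) for a in re.findall(r"\$(\d+(?:\.\d+)?)", reply)]
    return all(a <= POLICY_REFUND_CAP for a in amounts)

def confident_enough(confidence: float) -> bool:
    """Layer 2: block low-confidence generations outright."""
    return confidence >= 0.75

def clean_language(reply: str) -> bool:
    """Layer 3: trivial blocklist stand-in for a real toxicity filter."""
    return not any(word in reply.lower() for word in ("stupid", "idiot"))

def guard(reply: str, confidence: float) -> str:
    """Release the reply only if every layer passes; otherwise escalate."""
    if within_policy(reply) and confident_enough(confidence) and clean_language(reply):
        return reply
    return "ESCALATED_TO_HUMAN"
```

Because the layers are independent, a confidently worded reply that quotes an out-of-policy refund is still stopped, and a policy-compliant reply generated at low confidence is stopped too.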

For CX teams, guardrails aren’t about slowing automation down. They’re about trust. Customers need to know that the answers they get are safe even when they come from a machine.

Testing, Monitoring, and Iterating

AI systems drift. Policies change, data updates, and customer expectations move quickly. Without regular checks, those shifts turn into AI hallucinations in CX.

Strong CX teams treat testing and monitoring as part of daily operations. That means:

  • Running “red team” prompts to see how an agent handles edge cases.
  • Tracking hallucination rates over time instead of waiting for customer complaints.
  • Comparing different prompts or retrieval methods to see which reduces errors.
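
A tiny red-team harness in this spirit shows how a hallucination rate can be tracked over time. The `ask_bot` stand-in and the two test cases are invented for the example:

```python
def ask_bot(prompt: str) -> str:
    """Stand-in for the real agent: answers known cases, guesses otherwise."""
    known = {"What is the refund window?": "30 days"}
    return known.get(prompt, "same day")  # the fallback guess is a hallucination

# Red-team cases: tricky prompts paired with the expected safe answer.
RED_TEAM = [
    ("What is the refund window?", "30 days"),
    ("What is the refund window for sale items?", "ESCALATE"),
]

def hallucination_rate(cases) -> float:
    """Fraction of red-team prompts the agent gets wrong."""
    wrong = sum(1 for prompt, expected in cases if ask_bot(prompt) != expected)
    return wrong / len(cases)

rate = hallucination_rate(RED_TEAM)  # one of two edge cases fails here
```

Run on a schedule, the same harness turns hallucinations into a metric that can be charted and alerted on, instead of something discovered through customer complaints.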

Enterprises are starting to put this discipline into place. Retell AI cut false positives by 70% through systematic testing and feedback loops. Microsoft and others now offer dashboards that log how models use data, making it easier to spot problems early.

The principle is straightforward. AI is not a one-off project. It’s a system that needs continuous oversight, just like a contact center workforce. Test it, measure it, refine it.

The Future of AI Hallucinations in CX

Customer experience is moving into a new phase. Contact centers are no longer testing basic chatbots. They are rolling out autonomous agents that can manage full interactions, from checking an order to triggering a refund. Microsoft’s Intent Agent, NICE’s CXone Mpower, and Genesys’ AI Studio are early examples of that shift.

The upside is clear: faster service, lower costs, and better coordination across systems. The risk is also higher. A single AI hallucination in CX could mean a compliance breach or a reputational hit that takes years to repair. Regulators are watching closely. The EU AI Act and ISO/IEC 42001 both push for stricter rules on governance, transparency, and accountability.

The market is responding. Salesforce’s move to acquire Convergence.ai and NiCE’s purchase of Cognigy show how major vendors are racing to build platforms where governance is built in, not added later. Enterprises want systems that are safe to scale, not pilots that collapse under risk.

The reality is that hallucinations won’t disappear. Companies will need to learn how to contain them. Strong knowledge base integrity, tight RAG governance, and frameworks like MCP will differentiate brands that customers trust from those they don’t. 

Eliminating AI Hallucinations in CX

The risk of AI hallucinations in CX is not going away. As enterprises scale automation, the cost of a wrong answer grows, whether that’s a compliance breach, a lost customer, or a dent in brand trust.

The good news is that hallucinations are not an unsolvable problem. They’re usually data problems. With strong knowledge base integrity, clear RAG governance, and frameworks like MCP to enforce compliance, organizations can keep automation reliable and safe. Guardrails, smarter prompting, and human oversight add further protection. 

Together, these measures turn AI from a liability into an asset. Companies that treat governance as central will be able to roll out advanced agents with confidence. Those that don’t risk being left behind.

Microsoft Heightens Security and Governance in AI Transformation Strategy (Wed, 19 Nov 2025)

Microsoft has introduced its Sales Development Agent to its roster of security- and governance-guarded AI agents. 

At Microsoft Ignite 2025, the company announced that its innovations for AI transformation were being introduced to Microsoft’s Frontier – its preview program for customers to gain early access to newer products. 

This agent is just one of several products Microsoft has announced to address security and compliance issues in AI agents. 

Sales Development Agent 

The Sales Development Agent is designed to help sales teams increase their selling capacity. 

As a fully automated agent, this tool can research, qualify, and handle outreach even after business hours, supporting steady revenue growth. 

This tool can work independently of a human agent, utilizing personalization for seller outreach with automated follow-ups to maintain client-seller relationships that extend beyond a company’s working time zone, as well as hand off leads to human sellers when needed. 

The agent operates through Microsoft’s security and compliance rules, ensuring that the tool can be utilized safely and efficiently in Microsoft 365 without security gaps. 

Microsoft has launched further security and compliance-focused tools to address frequent concerns around AI agents and how they operate around sensitive data. 

These tools are designed to be manageable and to monitor any suspicious activity, risky behavior, or possible threat to data exposure or accidental leaks, helping enterprises to govern their agents reliably. 

Other Security and Compliance Tools 

Entra ID 

Microsoft has announced that Entra ID has expanded its secure identity and access to adapt to the AI era. 

The tool allows users to manage accounts and resources securely, including multi-factor authentication for extra security checks, activity monitoring, and secure cloud workloads. 

It can also help guide at-risk users away from data threats, detect unauthorized AI usage, and prevent overprivileged agents from accessing controls. 

Defender 

One core component of the tool is to govern and protect AI agents across Microsoft’s ecosystem. 

As a unified platform for governance and threat protection, Microsoft Defender can offer protection across all environments where AI agents are active, deploying AI-powered security capabilities to monitor new attack surfaces and anticipate potential malicious activity. 

This includes safeguarding against any potential threats and vulnerabilities to an agent, as well as resolving and investigating incidents where necessary. 

Microsoft Purview

Alongside Entra and Defender, Microsoft Purview is included in Microsoft Agent 365 to ensure compliance across Microsoft. 

It is an AI-enhanced control plane component, in charge of handling recently deployed AI agents to prevent agent-specific risks, rather than being focused on human data. 

The tool also allows customer enterprises to view an agent’s status, their typical tasks and interactions, as well as their current risk level to prevent data loss.  

Foundry Agent Service

This tool includes built-in features to support security, oversight, and policy alignment, such as agent controls that limit the amount of data an AI agent can access. 

Foundry also provides security and compliance teams with real-time tracing and full insight visibility to investigate and review activity. 

It also works with other Agent 365 tools to handle threat detection and prevent data loss, ensuring that all agents are screened properly. 

Edge for Business Security Features 

The browser environment allows companies to hide information with a watermark overlay and set boundaries on web apps to stop data from being copied. 

These features can be used by organizations to secure sensitive information and prevent data leakage by aligning company policies to the tool. 

This can be monitored from within the Microsoft 365 admin center across various devices. 

Microsoft Ignite 2025

Microsoft Ignite will run from Tuesday 18th November to Friday 21st November in San Francisco. 

The company has emphasized its commitment to agentic AI and is set to showcase this message throughout the conference, as well as further touching on issues such as Security and Governance, and Identity and Access. 

You can find out more about the biggest CX announcements from Ignite 2025 here.

Zoho One’s Overhaul Aims to Bring Enterprises a More Connected, AI-Ready CX Stack (Tue, 18 Nov 2025)

Zoho has rolled out a major revamp of Zoho One, positioning the suite as a way for enterprises to improve customer experience by reducing fragmentation and streamlining how employees access and act on information.

The update puts unification at the forefront. Addressing the fact that many enterprise teams have to toggle between standalone tools, Zoho aims to deliver something closer to a true operating system.

A Connected Workspace Designed for Frontline Service Teams

The most immediate impact comes from the redesigned interface. Zoho One’s new “Spaces” structure organizes tools by context, whether for personal productivity, company-wide collaboration, or functional areas like marketing, sales, or finance. The value proposition here is to reduce friction.

The Action Panel and unified Approvals take that concept further, pulling tasks, sign-offs, and action items from across the stack into a single view. Customer-facing roles, especially those that handle service escalations and sales cycles, often have to pull data from multiple apps. This approach aims to reverse that dynamic by pushing relevant information to the user instead of the other way around.

Dashboards and the new Boards framework also help consolidate operational and customer-related data. Because Boards can be assembled from Zoho’s own analytics or third-party dashboards brought in through single sign-on, customer support and sales teams can combine metrics, from ticket backlog to deal progression to customer sentiment, within one context.

Unified Integrations for Consistent CX Across Systems

Customer-facing operations typically rely on diverse systems covering ticketing, CRM, commerce platforms, payment portals, and engagement tools. Zoho has addressed this with a stronger emphasis on native integrations and an expanded model for third-party connectivity.

The unified integration panel gives administrators full visibility into Zoho-to-Zoho and Zoho-to-third-party connections, including recommendations for additional integrations.

Raju Vegesna, Chief Evangelist at Zoho, said during a media briefing that limits on integrations are typically dictated by outside vendors:

“There are limits in terms of capability and the exposure of their API… technically, as long as they support some of the standard protocols, it’s fairly straightforward.”

The introduction of a unified portal may have the most impact on customer experience operations. Instead of customers juggling multiple logins for CRM updates, support tickets, commerce orders, payments, and more, organizations can merge all their portals, including those from non-Zoho systems. For large enterprises especially, where siloed customer portals are a known pain point, this consolidation could help improve customer effort scores.

Domain verification, authentication records, and other behind-the-scenes tasks can also now be handled centrally. This includes new support for GoDaddy users, who can authorize automatic updates to DNS records, which is a useful capability for customer service and marketing teams that previously relied on IT intervention.

AI Steps Into a More Contextual Role

AI is deeply embedded in the release through Zia, Zoho's AI assistant, whose footprint has expanded, and through new intelligence hubs. Because Zoho One unifies data from more than 50 apps, Zia can take on tasks that span categories, for example, pulling HR, CRM, and support information into a single query.

For example, it can help leaders understand how much time each employee spent with a particular account, Vegesna said, which requires cross-system reasoning not typically possible in isolated applications.

Zia Hubs, now promoted to a standalone application, automatically collects content such as signed contracts or recorded meetings into dedicated hubs. Users can then query that information directly. Vegesna explained:

“All the documents that you put in that hub… you can say, ‘hey, here are the same documents. Show me only the documents that auto renew within the next three months.’”

This could streamline contract renewals, customer onboarding, or issue-resolution workflows by exposing the right information without manual digging.
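Under the hood, a natural-language request like "show me only the documents that auto renew within the next three months" presumably resolves to a structured filter over the hub's document metadata. The sketch below illustrates that kind of filter in plain Python; the `Contract` fields and sample data are hypothetical, not Zoho's actual schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Contract:
    name: str
    auto_renew: bool
    renewal_date: date

def renewing_soon(contracts, today, window_days=90):
    """Return contracts that auto-renew within the given window."""
    cutoff = today + timedelta(days=window_days)
    return [c for c in contracts
            if c.auto_renew and today <= c.renewal_date <= cutoff]

# Hypothetical hub contents for illustration only.
contracts = [
    Contract("Acme support plan", True, date(2026, 1, 15)),
    Contract("Globex licence", False, date(2026, 2, 1)),
    Contract("Initech SLA", True, date(2026, 9, 30)),
]

print([c.name for c in renewing_soon(contracts, today=date(2025, 12, 1))])
# → ['Acme support plan']
```

The non-renewing and out-of-window contracts drop out automatically, which is the manual digging the hub query is meant to replace.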

AI also assists in product navigation. Zia is trained on all 55+ Zoho applications, which means a user can ask how to run an Instagram campaign and be guided directly to the relevant tool, in this case, Zoho Social, and receive recommendations on how to use it, Vegesna noted.

Zia is also coming to the Zoho One status bar, where it will be able to create and plug in widgets, since the underlying data is connected to the back end and powered by analytics. Users will be able to select from preset reports and dashboards, organized by function, whether CRM, HR, or support related, such as a helpdesk overview, and pin them to the dashboard, Vegesna said.

“Because we have a broad suite of applications that are deeply integrated on our stack, and have the context that enables the intelligence… we are basically embedding [AI] contextually so that the user does not even know that they’re using AI here.”

Ensuring Data Sovereignty and Enterprise Control

Security and data sovereignty are growing concerns for enterprises as they pay more attention to where AI systems source and store data. Zoho made clear that its ability to operate its own full stack, from infrastructure to applications, enables deployment models that many competitors can't match. As Vegesna noted:

“In sovereignty… we are doing these on-premise deployments in some countries where your data center has to be set up in that country, because we own … the entire stack … we are able to do it particularly when dealing with governments.”

The demand for national or regional control is growing, especially in markets where critical communication systems must remain within the country’s borders to comply with data privacy regulations. “They want to own some key aspects. For example, communication … I don’t want someone else to … pull the plug on my communication systems, so I want it on my data center within my country … those have come up a lot, and we are doing those deployments as well.”

Encryption and data governance were also focal points. Zoho confirmed that customers can now bring their own encryption keys and assign them at the application level. “On the security side … customers ask, ‘Can I bring in my own encryption keys and encrypt my data within Zoho?’ And now we are enabling that,” Vegesna said. “They can now select which applications can use what encryption key, and they can define encryption keys based on their specific set of applications, like their documents [and] emails.”
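The per-application key assignment Vegesna describes amounts to a customer-controlled registry that maps each application to one of the customer's own keys. The sketch below shows only that routing layer; the class name, key IDs, and app names are hypothetical, and real BYOK would hand the selected key to a proper encryption service rather than expose it.

```python
import secrets

class KeyRegistry:
    """Hypothetical BYOK registry: customers supply keys and decide
    which applications each key covers."""

    def __init__(self):
        self._keys = {}        # key_id -> key bytes
        self._app_to_key = {}  # app name -> key_id

    def register_key(self, key_id, key_bytes, apps):
        self._keys[key_id] = key_bytes
        for app in apps:
            self._app_to_key[app] = key_id

    def key_for(self, app):
        """Select the customer key assigned to an application."""
        key_id = self._app_to_key.get(app)
        if key_id is None:
            raise KeyError(f"no customer key assigned to app {app!r}")
        return key_id, self._keys[key_id]

registry = KeyRegistry()
registry.register_key("docs-key", secrets.token_bytes(32),
                      apps=["documents", "mail"])
registry.register_key("crm-key", secrets.token_bytes(32), apps=["crm"])

key_id, _ = registry.key_for("mail")
print(key_id)  # → docs-key
```

Scoping keys this way means revoking one key cuts off exactly the applications it covers, which is the control customers are asking for.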

Zoho argues that AI governance, especially around permission systems, has been overlooked in broader industry discussions. Vegesna said:

“B2B LLMs are different from B2C LLMs, because the permission layer plays a very critical role. I think nobody can do a solid AI strategy and implementation if you do not have a directory system in there. And we have been saying it for years, and we are one of the very few vendors who do directory in Zoho, because that has the entire permissions structure in place.”

Cloud directory integrations that enable permissions at a granular level should be at the core of enterprise AI, he argued.

“If we do not have a directory system, we cannot say we can do good AI that is well protected, and you should not have access to some data that AI just memorized, because that all has to be under a firewall and the permission system.”

“We own the triple A: authentication, authorization and access control,” Vegesna added.
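The pattern Vegesna is pointing at is a permission check that sits between the directory and the model: documents are filtered against the asking user's group memberships before any of them reach the AI layer. The sketch below illustrates that gate; the directory entries, document records, and function names are all invented for illustration, and the actual LLM call is stubbed out.

```python
# Hypothetical directory data: user -> directory groups.
DIRECTORY = {
    "maria": {"support", "sales"},
    "dev1": {"engineering"},
}

# Hypothetical document store with per-document group ACLs.
DOCUMENTS = [
    {"id": 1, "title": "Escalation runbook", "allowed_groups": {"support"}},
    {"id": 2, "title": "Deal pipeline Q4", "allowed_groups": {"sales"}},
    {"id": 3, "title": "Payroll export", "allowed_groups": {"hr"}},
]

def visible_documents(user):
    """Return only the documents the user's directory groups permit."""
    groups = DIRECTORY.get(user, set())
    return [d for d in DOCUMENTS if d["allowed_groups"] & groups]

def answer_with_context(user, question):
    # A real system would pass this filtered context to the model;
    # here we just report which documents would be visible to it.
    context = visible_documents(user)
    return {"question": question,
            "context_ids": [d["id"] for d in context]}

print(answer_with_context("maria", "Summarize open escalations"))
# → {'question': 'Summarize open escalations', 'context_ids': [1, 2]}
```

Because the filter runs before retrieval, the model never sees data the user cannot access, which is the "firewall and permission system" the quote describes.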

Together, these capabilities reinforce Zoho's pitch to enterprises: that unified architecture and control over its own infrastructure allow for tighter data governance and deployment models that meet stringent regulatory or geopolitical requirements.

The through-line across the release is the shift away from app-centric work toward context-centric work. This is a change that resonates strongly in customer-facing environments where speed, accuracy, and personalization depend on smooth access to data.

By pulling together dashboards, workflows, approvals, and AI-generated insights from across the business, Zoho One offers a way to reduce employee and customer effort. Functions that are traditionally fragmented, including billing, support, onboarding, and renewals, can be unified at the workflow and data level rather than treated as separate systems stitched together by manual processes.

For enterprises looking to consolidate their customer-facing tech stack without compromising on data control or the breadth of tools, Zoho is positioning the new release as a platform that can serve both operational needs and experience-driven outcomes.
