If you missed the Relevance 360 Roadshow in San Francisco, you missed a panel discussion that cut through the usual AI hype to address the practical realities of enterprise implementation. Priscilla Garcia, SVP of Global Account Management, moderated a conversation with three leaders running AI in production at scale in their customer support organizations.
The panelists included Joyce Leung, VP of Services Operations at Illumio; Laurel Poertner, Senior Director of Digital Services at F5; and Julie Hamlin, Lead SEO Analyst at Zoom. Though the three come from different industries with distinct needs, their insights converged around several themes that apply broadly to enterprise AI implementation.
Content Quality as the Foundation
The conversation kept returning to a fundamental truth: if your underlying content isn’t sound, AI will simply amplify the problem at scale. Julie was direct about this: “Content is key. If your content isn’t good, if it’s not optimized, it’s not updated, it’s not going to perform well on your site or externally as well.”
At Zoom, they’ve established a content council bringing together managers across blog posts, knowledge articles, and community forums. Julie noted the challenge: “We have a lot of content, which is great, but that can also be a challenge as well, just the scale and the amount of teams.” The key is ensuring everyone understands how their content flows into customer-facing AI outputs.

Their approach centers on clear content lifecycles with regular audit cadences—every six months or annually, depending on content type. Each team determines what works for them, understanding these decisions directly impact customer answers.
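That kind of cadence is straightforward to automate once the lifecycle rules are written down. Here's a minimal sketch in Python of an overdue-content check; the content types, record shapes, and dates are illustrative assumptions, not Zoom's actual system.

```python
from datetime import date, timedelta

# Per-content-type review cadences: every six months or annually,
# as described above. The type names are illustrative.
CADENCES = {
    "knowledge_article": timedelta(days=182),
    "blog_post": timedelta(days=365),
}

# Hypothetical records standing in for a CMS export.
articles = [
    {"id": "KB-101", "type": "knowledge_article", "last_reviewed": date(2025, 1, 15)},
    {"id": "BLOG-7", "type": "blog_post", "last_reviewed": date(2024, 3, 2)},
]

today = date.today()
for article in articles:
    if today - article["last_reviewed"] > CADENCES[article["type"]]:
        print(f"{article['id']} is overdue for review ({article['type']})")
```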
Joyce provided the KCS (Knowledge-Centered Service) perspective from Illumio: "We view knowledge as a byproduct of solving issues. Everyone, when they're working on their cases, they're also expected to create or maintain the knowledge base as part of their role." This discipline creates the foundation everything else builds upon. Joyce put it plainly: "Without that data layer, then it's garbage. It really is."
Relevant reading: 6 Data Cleaning Challenges Blocking Enterprise AI (& Solutions)
Securing Executive Sponsorship
Joyce was candid about the importance of executive support: “We cannot go anywhere without that support. With this journey, there is an investment involved in terms of money, time, and resources. So getting in front of your execs and getting their buy-in, holding them to see the vision, is really important.”
She emphasized why this matters so much: “Without that, I think that’s where most of the pilots will fail. Because you don’t have enough support to carry on, because that pilot is not going to be an easy one. You will run into challenges, you will run into delays, and at that point, without that executive support, you won’t be able to carry on.”

The signal to pursue AI came from different sources at each company. At F5 Networks, the impetus came from the board level. Laurel explained: “A couple years ago, everybody was like, we’ve got to get in the AI game. And so they basically went to the company and said, who can get there first? And so we raised our hand.” They committed to deploying generative answers on the homepage of their customer portal and went live in April, learning and iterating since.
For Illumio, an AI hackathon in February 2025 served as an accelerator. Joyce described the impact: “It just threw everyone into the space, and they have to figure out, and from there we actually got a lot of ideas and use cases of how the team can use AI to accelerate their productivity and efficiency.” Two teams chose to demonstrate their work using Agentforce, which became the catalyst for more serious implementation efforts.
Relevant reading: Make AI Work: Unified Search & Retrieval for the Enterprise
The Critical Role of Integration
Joyce shared a compelling case study from Illumio's Agentforce proof of concept that illustrates why integration architecture matters. Her team created two separate agents and ran identical test cases through both. One agent used Agentforce alone, while the other combined Agentforce with Coveo's Passage Retrieval API.
“We logged it in a big spreadsheet, and then we took it to our SMEs, and we said, okay, could you guys go and rate each one based on the accuracy of the answer and the quality of the answer,” Joyce explained. “And what we found was for 80% of the answers, the Coveo plus Agentforce returned better results.”
This wasn’t a minor improvement—it was the difference between feedback describing the experience as “okay, not really helpful, sometimes it works, sometimes it doesn’t” and answers that were actually ready for customer deployment.

The reason comes down to what Coveo brought to the table: years of work building a unified index with proper security context, data structure, and relevancy tuning. Joyce was clear about the value: “We don’t need to build everything again. Because if we did not use Coveo and we’re just using Agentforce alone, we need to do all that in the Data Cloud, and it’s not even possible. They don’t have the tools in place to allow us to do that.”
The alternative would have involved complex work with S3 buckets and ongoing data synchronization challenges—hardly an efficient path to production.
Where AI Excels and Where Humans Remain Essential
Joyce offered a clear assessment of AI’s capabilities and limitations in customer support. “AI has no emotion, so it cannot empathize with our customers. It cannot maintain that relationship with our clients. And it cannot be the one that is presenting a final decision to the customer. There must be a human in the loop.”
However, AI does excel at initial triage, troubleshooting, knowledge retrieval, and summarization—particularly in high-volume environments where these capabilities can significantly improve efficiency. The goal isn’t to replace human agents but to augment their capabilities and help them work more effectively.
Laurel discussed F5’s exploration of how to surface useful prompts to customers, particularly for complex products. “Some of our products are extremely complex, looking at network packets,” she noted. “So I can imagine that there’s a lot of our customers out there that could really, really find use in a prompt that just distills that down for them, and so that they don’t have to sit on a customer call with a support agent reading through these files.”
Evolving Success Metrics
Julie highlighted an important shift in how success is measured. “We’re obviously moving from traffic and volume and clicks. Those were kind of the traditional success metrics, especially in SEO. And that doesn’t really work anymore. People don’t need to click. They get the answer that they need right there.”
Instead, Zoom tracks visibility, impressions, brand mentions, and citations in language models—metrics that reflect trust and authority rather than just traffic. Julie acknowledged the complexity: “We’ve been thinking a little bit more about what does trust look like, how do you quantify trust?” It’s a harder question to answer, but increasingly important.

At Illumio, the team tracks self-service success rates, which have reached 80% over the past six months: only 20% of portal visitors go on to open support cases, while the rest find what they need without escalating to a human agent.
Julie, for her part, reviews every negative feedback signal on Zoom's generated answers. Every single thumbs down becomes an opportunity to understand what went wrong and improve the experience.
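Both measurements are straightforward to compute once the underlying events are captured. Here's a minimal sketch; the numbers and record shapes are hypothetical, not either company's actual telemetry.

```python
# Self-service success rate: the share of portal visitors who never
# open a support case. Numbers are hypothetical.
portal_visitors = 10_000
cases_opened = 2_000

self_service_rate = 1 - cases_opened / portal_visitors
print(f"Self-service success rate: {self_service_rate:.0%}")  # -> 80%

# Route every thumbs-down on a generated answer into a review queue.
feedback = [
    {"answer_id": "a1", "signal": "up"},
    {"answer_id": "a2", "signal": "down", "query": "how do I rotate API keys?"},
]
for item in (f for f in feedback if f["signal"] == "down"):
    print("Needs review:", item["answer_id"], "-", item.get("query", "n/a"))
```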
Practical Capabilities to Explore
Julie emphasized the value of A/B testing, which she considers underutilized by many teams. "I have done so many tests, big and small. When we rolled out CRGA, both in our support website and on the case deflection side, we did it through A/B testing, kind of like an incremental rollout approach."
Even small tests can deliver meaningful insights. Zoom ran one where they removed rules that had been prioritizing knowledge articles over community posts. “We found out that the version that actually didn’t deprioritize community posts performed better,” Julie said. Simple change, measurable impact.
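For teams that want to run similar experiments, the statistics behind a simple A/B comparison aren't exotic. Here's a minimal two-proportion z-test sketch; the conversion counts are hypothetical, and "conversion" could stand in for a successful deflection or a helpful-answer click.

```python
from math import erfc, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # normal approximation, two-sided
    return p_a, p_b, z, p_value

# Hypothetical results: variant B stops deprioritizing community posts.
p_a, p_b, z, p = two_proportion_z_test(conv_a=450, n_a=5_000, conv_b=520, n_b=5_000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p:.3f}")
```

A p-value below the chosen threshold (commonly 0.05) suggests the difference between variants is unlikely to be noise, which is what turns a simple change into measurable impact.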
She also stressed the importance of partnering with community and social media teams. “User-generated content just in general is a wealth of knowledge.” Zoom recently launched social listening that includes Reddit, recognizing its growing importance in Google’s AI overviews. During a significant outage last year, Julie noted, “people went to the community first”—making it a critical early warning system for customer issues.
What’s Next and What’s Needed
Priscilla asked each panelist about their next priorities and what capabilities they wish were available.
Laurel is focused on conversational AI as her next initiative, following the imminent launch of Case Assist. "Conversational AI is definitely something that we want to look at next, as soon as next week is over with Case Assist," she said. She's also interested in content optimization tooling that can leverage AI capabilities rather than requiring everything to be built internally.
Julie is working on expanding generative answers to additional languages beyond English across Zoom’s 17 supported languages. She’s also training technical writers on SEO, AEO (Answer Engine Optimization), and GEO (Generative Engine Optimization) because, as she put it, “a lot of them don’t even know what these algorithms are or some of the basics of these things.”
Her wish list includes better content quality scoring: “I can understand people are getting things thumbed down, right? But why, right? Is it like something in the metadata?” Understanding the root cause of content underperformance would help teams diagnose and fix issues more quickly.
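Until that kind of scoring exists off the shelf, a rough first pass is possible with signals teams already have. Here's a hypothetical sketch that groups downvoted answers by simple metadata attributes of their source documents to surface candidate root causes; the field names and thresholds are illustrative, not a real schema.

```python
from collections import defaultdict

# Source documents behind downvoted answers (hypothetical records).
downvoted_sources = [
    {"doc_id": "KB-101", "has_summary": False, "age_days": 400},
    {"doc_id": "KB-202", "has_summary": True, "age_days": 30},
    {"doc_id": "KB-303", "has_summary": False, "age_days": 500},
]

causes = defaultdict(int)
for doc in downvoted_sources:
    if not doc["has_summary"]:
        causes["missing_summary"] += 1
    if doc["age_days"] > 365:
        causes["stale_content"] += 1

for cause, count in sorted(causes.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {count} of {len(downvoted_sources)} downvoted answers")
```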
Joyce is preparing to bring Illumio’s internal agents to the external support portal after two months of internal use and ongoing refinement. She asked for better analytics tooling to demonstrate business value to executives, and more importantly, troubleshooting capabilities for complex AI pipelines. “When something does not work well, right now, what we’re trying to figure out as we do testing is, okay, where did it break? And having that tool to allow us to easily identify it will be super helpful and help speed up the deployment and implementation.”
Key Takeaways
Three main themes emerged that are worth carrying forward.
First, content quality cannot be an afterthought. Organizations need content councils, regular audit cycles, and clear lifecycle management before layering AI on top.
Second, executive sponsorship makes the difference between pilots that stall and implementations that scale. These projects require clear business value articulation, adequate budgets, and sustained commitment through inevitable challenges.
Third, integration architecture matters more than individual features. Years of work on security, governance, and relevancy tuning shouldn’t be discarded because a new AI platform can’t easily access it. The companies succeeding with AI build on their existing investments rather than starting from scratch.
What stood out most was the alignment among three leaders from different organizations and industries facing the same fundamental challenges and arriving at similar solutions. Success with enterprise AI requires less innovation than it does discipline—doing the foundational work that makes AI genuinely useful rather than chasing the flashiest demonstrations.

