Hey, everybody. Welcome to New in Coveo, the service session. I'm Daniel Rajan, lead product marketing manager at Coveo, and with us today we have Scott Ferguson, senior product manager at Coveo. It's exciting. We are live, so we would love to take your questions. Please use the Q&A and chat sections to participate and engage with us as we look into everything that's new, the latest and greatest from Coveo for our service line of business.

There are two things I'm super excited about. One, of course, is that it's spring. We've had a dreadful winter in Canada, so I'm excited about spring. The other is conversational search. The era of conversational search is here. Our customers have been asking for native, out-of-the-box conversational search capabilities, and it's finally here. We are excited to show you that and much more. So lots to cover in this session.

Just a reminder that you have tuned into the service New in Coveo session. We have three dedicated product sessions: today we're going to see the latest and greatest for our service use case and line of business; on April fourteenth we have a commerce session; and on April twenty-first we have our website session. Please scan the QR code to register, or just go to coveo.com and it'll be easy for you to sign up. Before we dive in, I want to get our disclaimer out of the way: this session might include forward-looking statements, so please consider that in your investment decisions.

I want to level set and remind you what our mission is for the service line of business at Coveo. It's simple and straightforward: to help users find the right answers easily wherever they are, from customer experience all the way to employee experience. And when I say users, that includes people.
It could include your customer support teams, but also, now, AI agents. So when I talk about users, it really covers everybody. And wherever they are: inside the product, in self-service channels, in the case submission workflow, in the service console after a case has been submitted as a ticket, or in employee portals, helping employees discover knowledge much faster. Our mission is to help all of these users, including AI agents, which we'll be talking about, find the right answers easily wherever they are.

What we're seeing in the market is that this AI wow experience is now expected everywhere, largely because daily consumer AI experiences with ChatGPT and Gemini have become the standard. Enterprise search, or enterprise AI, is now expected to catch up and feel conversational, fast, and personal, just like consumer AI. I should add that when enterprise experiences can't keep up, users escalate, or worse, they disengage from your brand or enterprise, and that has drastic consequences. Gartner now says that conversational, agentic AI, these wow experiences, will become the intelligent front door to self-service, and self-service is often the prime use case for deploying these experiences. That date, Gartner tells us, is fast approaching.

But for enterprise AI to measure up to consumer AI is not easy in service, especially because questions are long-tail and complex. Think about the queries that come into your self-service channels or into your service console as tickets: users often describe partial symptoms and multistep problems.
What compounds that challenge is that the knowledge required to answer these long-tail, complex questions is fragmented, permissioned, and dynamic. This information lives across multiple sources of truth and is constantly updated, so there's a huge expectation on AI to keep up in real time and resolve this at scale. When enterprises roll out conversational, agentic AI without accounting for these nuances, we often find that AI ambition outpaces operational readiness. And when AI fails, it creates a domino effect: more tickets, more escalations, and less trust. When these enterprise nuances are not taken into account and the foundations are neglected, answers start to buckle under enterprise complexity and experiences degrade. We've all heard the stories of brands rolling out these wow experiences and then rolling them back because they break under complexity. Trust and compliance risks start to increase, and what was meant to be an advantage becomes a liability.

This tension between wow and trust is what we at Coveo have observed as the wow-trust gap: the gap between what AI promises and what it delivers in production. And this gap keeps widening. In service, reliability matters even more than impressive wow experiences. The only way to bridge the gap between wow and trust is to ground your AI in your enterprise truth, and to invest in retrieval systems that can consistently deliver that knowledge. So you might be asking yourself: okay.
You can use Coveo to bridge this wow-trust gap, but not everybody is at the same stage in their AI maturity journey. Some of you on this call might be just starting out with AI search, while others have already graduated along the spectrum into conversational, agentic AI. What's really required is that each of these experiences, regardless of where you are in that journey, is grounded in a shared foundation, so that you get reliable answers, consistent experiences, and scalable self-service. Regardless of where you are, you want to make sure you bridge this wow-trust gap.

What we've been seeing in twenty twenty-five is Coveo customers successfully graduating into conversational, agentic AI using our APIs and our hosted MCP server. We have seen success stories of Coveo customers grounding external AI agents like Agentforce, CR, and so many others using the shared foundation that Coveo provides. And now we are excited to show you how Coveo is introducing a native, out-of-the-box conversational and agentic experience. With that, I would love for Scott Ferguson to share the latest and greatest in what we have delivered for Coveo for Service. Over to you, Scott.

Thanks, Danny. Hi, I'm Scott Ferguson, and I'm part of the product team here at Coveo. Before we jump into the details, I want to quickly frame how this all fits together. Everything we're about to cover falls into four main themes. First is buy, which is all about the generative, conversational, agentic component Danny was just talking about; this really hits the end-user experience.
Second is build, which is about custom agentic AI: how you can extend Coveo into your broader AI ecosystem. Third is native integrations, where we bring Coveo directly into the tools people are already using today. And finally, the Coveo platform and core services. Danny mentioned that none of this works without the foundations; these are the improvements we're making every day to make sure things actually work. So think of this as a full stack: we'll go from experience, to integrations, to retrieval, to operations.

Let's start with the first theme, buy: generative, conversational, and agentic. At the surface, this is what users actually experience. With our traditional gen AI experience, CRGA, when users ask a question they get a single answer grounded in enterprise knowledge. Now the interface embraces a natural conversational experience. This unlocks three big things: more complete answers, better self-service, and much more precise knowledge discovery. But this is far more than putting a chat layer on top of a search page; something far more sophisticated is happening under the hood.

So how does that work? Under the hood, the Coveo search agent orchestrates an iterative, multistep reasoning process. It doesn't just run one search and return an answer. It understands the question, retrieves information, and evaluates whether that is sufficient. If not, it goes back, retrieves more context, and refines the results before generating the final answer. And importantly, it maintains memory across sessions, so users can ask follow-up questions and dive deeper without having to start all over.
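The iterative process just described, understand, retrieve, evaluate, refine, answer, with memory for follow-ups, can be sketched roughly like this. Every name below is a hypothetical stand-in, not the Coveo API; it only illustrates the shape of the loop:

```python
# Minimal sketch of a retrieve-evaluate-refine loop with conversation
# memory, as described above. Hypothetical names only -- NOT Coveo's API.

def answer_with_agent(question, history, retrieve, is_sufficient, refine,
                      generate, max_steps=3):
    """Iteratively gather context until it is judged sufficient, then answer."""
    query = question
    context = []
    for _ in range(max_steps):
        context += retrieve(query, history)        # pull passages for the current query
        if is_sufficient(question, context):       # enough grounding to answer?
            break
        query = refine(question, context)          # otherwise reformulate and retry
    answer = generate(question, context, history)  # ground the answer in what was found
    history.append((question, answer))             # keep memory for follow-up questions
    return answer
```

In a real system each step (retrieval, sufficiency check, refinement, generation) would be backed by the index and an LLM; the point here is only the loop shape: the agent does not stop after one search.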
So what we're providing is guided, evolving conversations toward resolution. Danny talked about looking for out-of-the-box capabilities, and one of the other key pieces here is how easy this is to deploy. It's a fully managed, end-to-end capability inside the Coveo platform: you can configure it, build the UI, preview it, evaluate it, and see how it's performing without having to build a custom agent from scratch. Organizations get the benefit of an agentic experience without taking on the complexity of building and maintaining one. On that note, it's also worth mentioning that conversational search is currently in open beta.

What you're seeing here is that in action, running right now on our public docs page, so after this session feel free to go check it out. The user starts with a question and gets a generated answer, similar to how CRGA behaves today. And here's the key difference: they can continue the interaction using Coveo follow-up questions. The system keeps the context, reasons over previous responses, and retrieves more targeted information as the conversation evolves. It's flexible, too: the user can ask to reformat the answer, like turning it into an email or a Slack message. So instead of a one-and-done interaction, users can work through complex problems step by step.

Now, that's the experience layer. The next question is, how do we bring this into a broader AI ecosystem? That's where our second theme comes in: build, custom agentic AI. Here's where we've introduced the hosted MCP server. MCP stands for Model Context Protocol, and the simplest way to think about it, for us at least, is as a standard way for AI agents to securely access enterprise knowledge.
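As a rough mental model of what a protocol like MCP standardizes: a server exposes a small set of named tools, and any agent that speaks the protocol can call them through one interface. The toy dispatcher below is purely illustrative; the tool names and handlers are stand-ins, not real Coveo endpoints:

```python
# Toy illustration of the MCP idea: one server, a few named tools, one
# dispatch path for any agent. Handlers are fakes, not Coveo endpoints.

TOOLS = {
    "search": lambda query: [f"document matching {query!r}"],
    "passage_retrieval": lambda query: [f"passage about {query!r}"],
}

def call_tool(name, **kwargs):
    """Route an agent's tool call, the way an MCP host routes requests."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)
```

The value is that the agent side never changes per backend: connect the server once, and every tool it exposes becomes available, with authentication handled at the server layer rather than per integration.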
Coveo's hosted MCP server acts as a managed layer that exposes key capabilities like search, passage retrieval, answer, and fetch. So instead of building a custom integration to every single AI tool, you have a single, secure way of connecting them to Coveo. And we've made building this extremely easy. Through the admin console, teams can connect LLM systems like ChatGPT, Claude, Copilot, and others directly to Coveo. It's low-code and centrally managed to reduce complexity, so instead of spending months building infrastructure, your teams can focus on delivering value with AI agents. This short demo shows exactly how simple the setup is. An admin can create an MCP server in just a few steps: define it, set the pipeline, configure the tools, and set instructions. From there, you can expose it through APIs for public content, or secure it with OAuth for private data. What used to be a complex integration project becomes something you configure directly in the platform, in minutes.

This example in Claude really shows the power of MCP in action. The agent starts with a question and tries passage retrieval, but doesn't get results. Instead of failing, it adapts: it switches strategies, uses search, retrieves documents, fetches content, and even supplements with public data when needed, and only if permitted. Then it synthesizes everything into a single answer. What you're seeing here is true agentic behavior: the system dynamically choosing the best path to arrive at the right answer.

Now, building custom integrations is one path, but we're also bringing Coveo directly to the tools people already use every day. That's where our third theme comes into play: native integrations. Here we have Coveo Relevance Generative Answering for support agents.
This is designed specifically for service workflows. It generates grounded answers based not just on your knowledge content, but also on the context of the case itself: the issue description, the product history, or the product itself. We're helping agents in real time get to a faster resolution and deliver more accurate support, with the right answer in the exact context where they need it. And here's another great example of meeting people where they already work: Coveo is available as an app inside ChatGPT Enterprise. This means users can access enterprise knowledge directly within ChatGPT, with results grounded in Coveo's index that respect your existing permissions. Instead of switching tools, users stay in their workflow and still get trusted, relevant answers, mitigating the hallucination risk inherent in using a commercial LLM on its own.

So we've covered experience, integrations, and connectivity. Now let's look at the foundation that makes it all work: our final theme, platform and core services. This starts with improved visibility for administrators. The new home page in the admin console acts as a central entry point, surfacing key signals like system health, activity, and opportunities for improvement. Instead of hunting for issues, teams can quickly see what needs action and act on it. And if the home page shows you what needs attention, the system performance dashboard explains why. It provides deeper visibility into indexing activity, endpoint consumption, query performance, and system limits, so teams can better understand platform behavior and take a more proactive approach to optimization. Now, underneath everything, retrieval quality is critical. Structure-aware enhancements improve how Coveo processes content, especially alternative formats such as PDFs.
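To make the structure-aware idea concrete, here is a toy contrast between splitting text blindly at a fixed size and splitting it at markdown headings so each chunk keeps a whole section together. This is a deliberate simplification for illustration, not Coveo's implementation:

```python
# Toy contrast: fixed-size chunking vs. heading-based, structure-aware
# chunking. A simplification for illustration only.

def fixed_chunks(text, size):
    """Split blindly every `size` characters, even mid-sentence or mid-table."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def heading_chunks(markdown):
    """Split on markdown headings so each chunk keeps a whole section intact."""
    chunks, current = [], []
    for line in markdown.splitlines():
        if line.startswith("#") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Keeping whole sections together means a retrieved chunk carries its own heading and related rows or steps, which is what makes downstream generated answers more accurate.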
So instead of breaking documents into arbitrary chunks, it preserves structures like tables, sections, and formatting. This leads to significantly better retrieval and more accurate generated answers; in fact, we've seen a major improvement in information recall, especially with structured content. And here we show that shift visually. Previously, content was split into fixed-size chunks, which could break meaning and structure. Now, with markdown processing and structure-aware chunking, content is preserved more intelligently: the system understands not just the text but its meaning, and maintains the integrity of how that text is organized. And finally, improvements to the retrieval layer itself. With ScoreFusion, we better align document-level and passage-level relevance to improve ranking quality. And with merged passages, we return more complete, coherent context instead of fragmented snippets. So the system is not just finding relevant information; it's returning it in a way that's more usable for the LLM and for the user.

Everything we just walked through highlights how enterprise search goes beyond finding information. It's more: it's about delivering answers, guiding users, and integrating intelligently into the flow of work, from conversational experiences to AI integrations to retrieval and platform improvements. This is what makes knowledge more accessible, more actionable, and ultimately more valuable across your organization. Thank you.

I see questions in both the chat and the Q&A section. Someone asked in the Q&A: why would we choose the MCP server over an API for a chatbot? It's a great question. For one, it's about time to implement and time to iterate. If you want a lot of control, the API is a great route to take.
But it requires the rigor of going through the API details and configuring everything. MCP makes that a very streamlined process. You might not have control at every level the way you do with an API, but if you want to pilot it first and try it out, I would recommend going with MCP. You can spin up several of them instantly now if you want to. And I should mention there's no additional licensing for MCP right now; every organization gets access to search and fetch out of the box.

Scott, I see one more question. Are you able to take that one? Which one is this? Someone asked: hi, team, just wondering if we have some internal demo system for SAP to develop prototypes for my team? I'll see if anyone else can field that one; I'm not an SAP expert. Ramit, we'll be happy to get back to you, or if you're able to add some more context, please feel free to do that.

Okay, just a reminder that the recording of this session will be sent to everybody, and we'll make sure any unanswered questions have responses attached when you get the recording. Thanks, everybody, for tuning in to the Coveo for Service New in Coveo session. Just a reminder, there are two more product sessions, one for commerce and the other for website, so please visit coveo.com to register. That's a wrap for today. Thank you. Cheers.
New In Coveo for Service - Spring 2026
Enterprise service teams are being asked to deliver faster, more natural experiences without losing trust. In this session, Daniel Rajan and Scott Ferguson show how Coveo is moving from static results pages to grounded, conversational self-service with Conversational Search powered by Coveo Search Agent. You’ll also get a look at the newest Service innovations announced for Spring 2026, including updates across MCP, CRGA, dashboards, structure-aware retrieval, and platform performance. Watch the replay to see what changed, why it matters, and how it can help teams improve self-service success, reduce case volume, and lower cost-to-serve.



