
Hello, everyone, and thanks for joining us today. I'm Lynsey Price from Product Marketing here at Coveo, and I'm thrilled to be joined by my colleagues in Product Management, Scott and Oscar. Today we'll share some of the latest updates from the past six months and give you a glimpse into what's coming next. If you joined our Relevance 360 webinars last month, you know this series is all about the latest knowledge innovations, expert insights, and the results our customers are seeing with Coveo. In this session, we'll cover the knowledge side of the business, which stretches across customer and employee experiences: workplace, website, and service applications. Last week we hosted our commerce session, and that recording is available online now if you want to catch up on it.

Before we dive into the core content, a quick disclaimer: before we talk about any of our upcoming innovations, please remember to rely only on publicly available information when making purchase decisions. And if you have questions along the way, put them in the Q&A section at the bottom of your screen, and we'll take time at the end to review them.

You've probably heard the buzz around agentic AI. It's everywhere right now, with businesses experimenting and asking themselves how and where to begin, and how to take their pilots to production. But this isn't just another tech trend; it's a real shift in how we approach search, through automation, reasoning, and action. It's capturing the attention of leaders and shaping roadmaps across industries. Yet amid all of the hype and innovation, one fundamental truth remains: AI is only as good as the information it retrieves. Where does that knowledge come from? How quickly can your systems get the right answer? That's where AI turns from something impressive into something truly useful. At Coveo, we've always focused on this as the foundation, because if your AI doesn't know where to look or what to trust, it won't matter what it can generate. Your AI today and your AI future start with retrieval. And as we shift into agentic systems, retrieval moves from the back end to front and center. As Gartner puts it, search is the foundation that supports AI assistants and agent development. That's where Coveo really shines.

Coveo unifies your content across your entire enterprise. Our architecture is composable, secure, and agnostic, built to scale across all of your applications: search, self-service, intranets, portals, websites, and commerce solutions. Coveo makes your unified knowledge searchable wherever you need it and gives you real-time insights in a single platform, making it the foundation for building search, GenAI, agentic AI, and whatever comes next. And the results speak for themselves: our customers are running generative and agentic use cases in real production, and the ROI is real. This is what happens when relevance meets GenAI at scale, where relevance means delivering the right answer, insight, or product to the right person at the right moment, every time.
Whether you're enhancing self-service, personalizing commerce, or deploying an agent in your digital workplace, you need one foundation that knows your business, understands your users, and works across channels to build a single source of truth. That's the Coveo advantage. As AI continues to evolve, one thing stays constant: you need one retrieval and relevance architecture to augment every experience, no matter where you are in your AI journey. That's the principle guiding our roadmap, our partnerships, and everything we're about to show you. Today, you'll see how Coveo is enabling agentic AI success through one secure, scalable enterprise foundation. You'll see how we're integrating with solutions like Agentforce, Bedrock, and Microsoft Copilot, each with different protocols but all sharing one trusted layer of knowledge. And this is how our customers have gone from basic search, to AI-powered recommendations, to generative experiences, and are now moving into agentic intelligence at the point of action.

So with that, let's dive into what's new this fall. Today we'll cover what you see in blue, though I want to highlight the third column with the dotted lines around it, which shows some of the existing capabilities you may already know or use. The point is that there's a holistic plan. As a reminder, we have integrations with some of the most popular enterprise platforms, which we continue to enhance with the new and improved offerings we're about to tell you about. And one final reminder: if you have questions as we go, please enter them into the Q&A. We look forward to hearing from you. With that, I'll pass it over to Scott.

Thanks, Lynsey. Hi, everyone. I'm Scott Ferguson, Senior Product Manager here at Coveo. It's great to be with you today to share all the exciting things happening in the world of agentic and custom AI solutions, and then take a look ahead at our GenAI fall updates. We've made some really big strides this year to help enterprises not just use AI, but use it effectively, with measurable ROI. Let's start with how we're making enterprises even more powerful with Coveo's public APIs. Organizations today are focused on value: how to get the most out of their generative AI investments. That's where custom-built generative AI applications come in, using tools such as our Answer API for factual, grounded question answering; our Passage Retrieval API, also known as the PR API, for precise context retrieval, the best passages drawn from your indexed knowledge; and our agentic integrations, which embed intelligence across workflows. These endpoints make it possible to build custom copilots and chatbots that really know your business, because they're grounded in your content, secured by your permissions, and optimized for your use cases.

We're really excited to announce that our Answer API is now GA, or should I say generally available. We're obviously big fans of our out-of-the-box builders, but if you already have your own generative app, no worries: the Answer API is designed to make it easy to bring factual, concise, and reliable answers into your own UI experience. With the Answer API, developers can manage and configure rules with simple CRUD operations, stream real-time answers into any custom app or interface, and deliver grounded, retrieval-based responses; a rough sketch of what that can look like follows below.
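To make the streaming flow concrete, here is a minimal sketch of pulling a generated answer into a custom UI over plain HTTP. The endpoint path, request shape, and per-line JSON event format below are illustrative assumptions for this sketch, not Coveo's documented Answer API contract; consult the API reference for the real shapes.

```python
import json

import requests

# Hypothetical values for this sketch only; not the documented contract.
ANSWER_ENDPOINT = "https://myorg.org.coveo.com/rest/answer/v1/generate"
API_KEY = "xxxxxxxx"  # an API key with query privileges


def stream_answer(question: str) -> str:
    """Stream a grounded answer into a custom UI as it is generated."""
    response = requests.post(
        ANSWER_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"q": question},  # assumed request shape
        stream=True,           # read the body incrementally
        timeout=30,
    )
    response.raise_for_status()
    parts: list[str] = []
    for line in response.iter_lines():
        if not line:
            continue
        event = json.loads(line)  # assumed: one JSON event per line
        if "delta" in event:      # assumed field carrying answer text
            parts.append(event["delta"])
            print(event["delta"], end="", flush=True)  # render as it arrives
    return "".join(parts)


if __name__ == "__main__":
    stream_answer("How do I rotate an API key?")
```

The point of the sketch is the shape of the integration: one authenticated POST, tokens rendered as they stream in, and no generation logic living in your app at all.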
So whether you're building a simple chatbot, a service copilot, or embedding GenAI into your portal, the Answer API provides best-of-breed retrieval, managed prompts for safety, and deployment flexibility wherever you need it.

Up next: Coveo for Agentforce 2.0. This is where Agentforce's orchestration is enhanced by Coveo's retrieval and relevance actions. With Coveo for Agentforce 2.0, think better knowledge, more context, and greater control. Coveo unifies your enterprise content across platforms, permissions, and sources, and it grounds Agentforce in that knowledge. The results are more accurate answers, higher efficiency, and more complete support experiences. We're introducing prompt template support, powered by our Passage Retrieval API. Teams can now build flexible prompt templates with dynamic variables and reuse those actions across workflows: one action, many use cases. You can also pass record context, like subject and category, and user context, like region or role, directly into retrieval. By understanding the context of the subject and description, the agent can not just answer a case but solve it, and then take additional actions, like drafting new knowledge base articles that streamline resolution of future cases. This delivers higher accuracy, fewer misroutes, and faster resolutions while mirroring your business's taxonomy and policies.

Let's put a spotlight on the value drivers here. Coveo for Agentforce 2.0 is all about control, context, and connectivity, giving your teams measurable support KPIs. With flexible prompt template support, you get faster implementation. With dynamic content boosting, you get cost efficiency by optimizing your Data Cloud usage in parallel. And then there are the expanded possibilities: we already gave you the ability to answer and classify cases, and now teams can go beyond that to solve cases with grounded prompts, draft knowledge articles from cases, and write policy-aware, on-brand responses automatically. And yes, this is available now on AppExchange and AgentExchange.

Hard to believe that in just a couple of weeks, MCP will have been officially around for a year. On that note, let's talk about interoperability. We're introducing Coveo's hosted MCP server, our implementation of the Model Context Protocol, better known as MCP. Think of it as a universal plug adapter for AI agents. Before MCP, teams had to build custom API integrations for every system; it was manual, laborious, and at times inconsistent. With the Coveo-hosted MCP server, it's plug and play. It's designed to work seamlessly with, for example, ChatGPT for Enterprise, Agentforce, or Amazon Quick Suite, among others. It standardizes how our search, answer, and passage retrieval endpoints are discovered, described, and accessed. And when it comes to balancing the speed of business with the rapid pace of technology, it gracefully handles orchestration across evolving agent frameworks. In short, the Coveo-hosted MCP server is about making integration simple, scalable, and standardized, so that enterprises can innovate faster without being locked into one ecosystem.
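To show what plug-and-play means from the agent side, here is a minimal client sketch using the open-source `mcp` Python SDK. The SDK calls are real, but the server URL and the tool name are assumptions for illustration; the actual values would come from Coveo's documentation.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://myorg.org.coveo.com/mcp"  # hypothetical server URL


async def main() -> None:
    # Open a streamable-HTTP transport to the hosted MCP server.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Agents discover the exposed tools instead of hard-coding APIs.
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])
            # Invoke a tool the server advertises (name assumed here).
            result = await session.call_tool(
                "passage_retrieval", {"query": "How do I rotate an API key?"}
            )
            print(result.content)


asyncio.run(main())
```

Notice there is no Coveo-specific code here beyond the URL: the same client loop works against any MCP server, which is exactly the interoperability argument.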
Now let's look ahead at some of the work we're doing behind the scenes to strengthen our relevance and generative foundations. Coveo Relevance Generative Answering, or CRGA for the sake of brevity, is migrating to Amazon Bedrock with the Nova Lite model. We're doing this with the same rigor we apply to everything at Coveo: structured evaluation and measurable improvements. Here's what that process looks like: we defined the expected behavior and built real-world datasets, we automated evaluation metrics for consistency, and we developed prompt optimization frameworks to continuously tune the system for quality. The results so far have been really fruitful. We're seeing a slight increase in positive accuracy, which is about knowing when there's enough information to answer, and a notable uptick in negative accuracy, which is correctly identifying when not to answer. Verbosity is essentially stable, with slightly richer answers and word counts. And one of the most impressive improvements is the knock-on effect of a decreasing hallucination rate. In other words: better precision, fewer false answers, and more confidence in the responses. This migration is in closed beta today; a general rollout begins in mid-January 2026, followed by a full migration toward the end of February. In a similar vein, I want to announce that Coveo has officially received Amazon's Generative AI Competency, which reinforces our leadership in enterprise GenAI. So whether it's flexible APIs, smarter context retrieval, agentic interoperability, or foundational LLM improvements, our goal remains the same: help enterprises create intelligent, trusted, adaptable, and relevant AI experiences. And with that, I'm going to hand it over to Oscar, who will dive into how we produce better outcomes with CRGA for Cases. Over to you, Oscar.

Thank you, Scott. It's good to see such activity in the chat and questions; we'll get back to them toward the end if they haven't been answered along the way. All right, on to the new features we're bringing you. We've seen many CRGA implementations on case forms, whether for the case form itself or an agent-side panel. And as customers leveraged that solution, we realized we were leaving some performance on the table, because the input of a case is different from a query on a search page: it's usually longer and can carry multiple intents. So we created CRGA's sibling, CRGA for Cases, tailored for that specific use case. Instead of taking just the subject as input, we now take the subject, the long description, which can be quite long when we looked at the data, and any drop-downs you have on the case form for context. We then reformulate all of that input into a single, better query suited for semantic retrieval, and run our semantic retrieval with that query. The last piece is that we changed the prompt and tailored it for case resolution rather than generic question answering, so the LLM's answers are more solution-driven than generic explanations. We have a short video showing the flow: first the hybrid search being performed, then the reformulated query in the second step, and finally the resolution answer being generated. This can be leveraged both on a case form and in an agent-side panel, using all the elements you have on the case for an immediate resolution. We're really excited about CRGA for Cases, so let us know if you're interested. For the technically inclined, a rough sketch of the pattern follows.
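Coveo hasn't published the internals of CRGA for Cases, but the reformulate-then-retrieve pattern Oscar describes can be sketched in plain Python. Every helper below is a hypothetical stand-in for an LLM or retrieval call, stubbed out just so the sketch runs.

```python
from dataclasses import dataclass


@dataclass
class CaseForm:
    subject: str
    description: str           # often long, may carry several intents
    dropdowns: dict[str, str]  # e.g. {"Product": "...", "Severity": "..."}


def reformulate(raw: str) -> str:
    """Stand-in for an LLM call that rewrites the whole case into one
    compact query suited to semantic retrieval."""
    return raw.splitlines()[0]  # placeholder: keep the subject line


def semantic_retrieval(query: str) -> list[str]:
    """Stand-in for hybrid/semantic retrieval over the index."""
    return [f"passage matching: {query}"]


def generate(prompt: str) -> str:
    """Stand-in for the answer-generation LLM call."""
    return f"[resolution drafted from a {len(prompt)}-char prompt]"


def resolve_case(case: CaseForm) -> str:
    # 1. Fold every case field (not just the subject) into one context blob.
    raw = "\n".join(
        [case.subject, case.description]
        + [f"{k}: {v}" for k, v in case.dropdowns.items()]
    )
    # 2. Reformulate the multi-intent input into one retrieval-friendly query.
    query = reformulate(raw)
    # 3. Retrieve the best passages with that better query.
    passages = semantic_retrieval(query)
    # 4. Prompt tuned for case *resolution*, not generic Q&A.
    prompt = (
        "Propose concrete resolution steps grounded only in these passages:\n"
        + "\n".join(passages)
    )
    return generate(prompt)


case = CaseForm(
    subject="Sync job fails after upgrade",
    description="Since upgrading to v12, our nightly sync fails with a 403.",
    dropdowns={"Product": "Connector", "Severity": "High"},
)
print(resolve_case(case))
```

The key design point is step 2: retrieval quality hinges on condensing a long, multi-intent case into one focused query before embedding, rather than embedding the raw form fields.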
The next couple of features I'm going to talk about are really around retrieval and content improvements for LLMs. The first is our structure-aware improvements, where we've bundled a few features together. We really want to improve RAG, and we saw two ways to do it. One was to better represent your content, and the solution is to use Markdown instead of plain text, to keep the structure and really help the LLM understand the content. The other was to change the chunking strategy in our embedding model, so the slicing of your documents is more intelligent and we don't break a paragraph in the middle or split a table in two. That way, the chunks fed to the LLM are more contextualized and better scoped. Together, those updates help ensure your content and its meaning and structure are faithfully preserved, unlocking higher accuracy, better alignment with your data, and, in the end, better answers.

Let me show you what has changed, in case you're not familiar. Say you have a PDF containing a table. Up until today, Coveo created two representations of that content: an HTML preview, so the admin could spot-check the PDF for validation purposes, and, more importantly, a plain-text version used both for search and for the LLM. That text was passed through a fixed-size chunker that chopped it into equal pieces. In my deliberately chosen example, that splits the table right down the middle, with the rows scattered up and down. If the LLM gets the information like that, it works, but it's not optimal, and we wanted to optimize it. With our Markdown processing and new chunker, we've introduced a new representation of your content; it doesn't change or remove the others, it's an additional representation in Markdown. You can see how the structure of the document is now much cleaner and is preserved. That Markdown version is passed through a structure-aware chunker, which understands where a table starts and finishes and where a paragraph starts and finishes, and can adapt the size of each chunk to the content. Again, that gives you chunks that are much nicer for the LLM to reason over. This is already available, and we've already seen performance gains, especially in information recall, which is the confidence that the right information is in the top five chunks. For PDFs, that information recall metric went up from about 79% to 85%, and the gains were largest on table-heavy queries, the ones that require a chunk taken from a table, where recall is now much greater than before. A miniature version of the chunking idea is sketched below.
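Coveo's structure-aware chunker is proprietary, but the core idea fits in a few lines. This minimal sketch assumes the document has already been converted to Markdown and that blocks are separated by blank lines; a Markdown table is a contiguous run of `|` rows, so it naturally survives as one block. Whole blocks are then packed into chunks instead of cutting at a fixed character offset.

```python
def markdown_blocks(text: str) -> list[str]:
    """Split Markdown into whole blocks (paragraphs, tables, headings).
    A table's rows are contiguous, so the table stays in one block."""
    blocks: list[str] = []
    current: list[str] = []
    for line in text.splitlines():
        if line.strip():
            current.append(line)
        elif current:  # a blank line closes the current block
            blocks.append("\n".join(current))
            current = []
    if current:
        blocks.append("\n".join(current))
    return blocks


def pack_chunks(blocks: list[str], max_chars: int = 1200) -> list[str]:
    """Greedily pack whole blocks into chunks: unlike a fixed-size chunker,
    a paragraph or table is never cut down the middle. An oversized single
    block becomes its own chunk; the 1200-char budget is arbitrary."""
    chunks: list[str] = []
    current = ""
    for block in blocks:
        if current and len(current) + len(block) + 2 > max_chars:
            chunks.append(current)  # close the chunk at a block boundary
            current = block
        else:
            current = f"{current}\n\n{block}" if current else block
    if current:
        chunks.append(current)
    return chunks


# Usage: chunks = pack_chunks(markdown_blocks(markdown_text))
```

Real implementations presumably also respect heading hierarchy and repeat a table's header row in every chunk it spawns, but the boundary rule above is the heart of the improvement.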
So what does this mean, practically speaking? If you have complex documentation with tables and long PDFs, this will drastically improve the performance of the Passage Retrieval API and CRGA. That was a big chunk of work.

Now on to the retrieval improvements. One that had been in the making for a long time, and that wasn't easy to solve, is Thesaurus support for CRGA. A lot of you use the Thesaurus to add app-specific or domain-specific terms to your pipeline and make sure search understands them, but until now the generative side wasn't leveraging it. Take a query like "what are common issues with the DES API?" DES isn't really known by the embedding models, so how do we translate it so we grab the right information and return it in the answer? Most customers have a Thesaurus rule for that: DES actually means Data Enrichment Services, and it can also be written DAES. That Thesaurus rule is already applied to your document search: we transform the query, adding the OR alternatives, so the index returns the right documents matching the expanded query. But we weren't doing that for semantic expansion, and that's what we're adding. If you have that Thesaurus rule, it will now be applied and taken into account by CRGA: the initial query is transformed, and we search for "what are common issues with the DES, or Data Enrichment Services, or DAES API", which brings better results and better answers. The performance increase we've seen is really high; we say up to 20% because that's the highest number we've measured, but it really depends on the quality of your Thesaurus and of your content. If you have mixed or conflicting Thesaurus rules, you shouldn't expect that much of a lift; if you have a reasonable number of clean, not overly complicated rules, that's where you can expect more improvement. We're really excited about this one, because we know it will provide better answers tailored to domain-specific terms; a toy version of the expansion is sketched at the end of this section.

Those are the more retrieval-oriented features we've delivered. The next couple are on the configuration, customization, and admin side of the house. One highly requested feature was the ability to modify the prompt we use in CRGA. It was quite fixed, on purpose, to make sure there were no hallucinations, but a lot of you requested some flexibility. We now have an interface where you can add additional instructions that are appended to our main base prompt. You can set things like tone of voice and specific guidelines, or refine the role of the question-answering system. It's an open text box you can play with, and it's been really useful for giving answers a bit of flavor, or for making sure the system avoids things specific to your industry or domain. This is available in the model section already, so you can go ahead and try it.
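Back to the Thesaurus expansion for a moment, here is a toy version of the idea, mirroring the DES example from the talk. This is a simplified sketch, not Coveo's actual query pipeline: it rewrites each known term into an OR-group so both lexical matching and the embedding model see the spelled-out synonyms.

```python
THESAURUS: dict[str, list[str]] = {
    # The example from the talk: an internal acronym and its expansions.
    "DES": ["Data Enrichment Services", "DAES"],
}


def expand_query(query: str, thesaurus: dict[str, list[str]]) -> str:
    """Rewrite known terms into OR-groups before retrieval, so the query
    carries the domain-specific synonyms the embedding model can use."""
    out: list[str] = []
    for token in query.split():
        bare = token.strip("?.,!")  # ignore trailing punctuation
        if bare in thesaurus:
            variants = [bare, *thesaurus[bare]]
            group = "(" + " OR ".join(f'"{v}"' for v in variants) + ")"
            out.append(token.replace(bare, group))
        else:
            out.append(token)
    return " ".join(out)


print(expand_query("what are common issues with the DES API", THESAURUS))
# what are common issues with the ("DES" OR "Data Enrichment Services" OR "DAES") API
```

The new part announced here is applying that expanded form to the semantic leg of retrieval, not just the lexical index, which is why clean Thesaurus rules translate directly into better generated answers.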
I'll come back to some of the questions I see in the chat afterward. The last recently released feature is the CRGA dashboard on the admin side. We've improved the reporting section in the Knowledge Hub and tailored it specifically to CRGA, so it's a lot easier to see usage and answer rate at a glance. It also lets us show you which queries get an answer and which don't, along with the answers themselves. And we're adding the most-cited documents, so you get feedback on what's being used by the LLM to generate those answers. We know a lot of you want to improve your documentation as you see answers being generated, so this gives you a glimpse of what the system is leveraging, and you can decide whether to modify the content or leave it as is. All of those elements are available as we speak.

Now let me get into what's next. This isn't meant to be a full laundry list of every feature we'll be working on, but rather a preview of key capabilities we're heavily investing in and will bring to production in the coming quarters. To set the stage: what we have today is a retrieval-augmented generation system, CRGA, that combines the precision of retrieval with generative answering and search. It delivers accurate, grounded answers based on your content, and it's ideal for straightforward questions where the answer exists in one or a few documents. That's enterprise-grade reliability for GenAI systems, and that's where we are today. What we're adding now, and what the team is already working on, is conversational RAG. We're layering memory and multi-turn context on top of the existing RAG, so the system remembers previous exchanges with the user, who can refine, compare, and go deeper without restating the context or retriggering a new query from scratch. We're moving from the steady question answering that works really well today to dialogue-driven discovery. That transforms search into an ongoing exploration with richer interactions, where you can extract more of the knowledge and answer more complex questions that require multiple steps or additional details.

Let me quickly show you what it's going to look like. Imagine an existing search page. You still get a generated answer to start with, but you'll also have an "Explore in Generative Mode" button, which takes you to a conversational canvas. The search results stay on the right side, but minimized, and we've removed the facets and filtering, so you're really in a conversational mode where you can ask follow-up questions, and, to the point raised earlier in the chat, you get related questions as suggestions of what to ask next. This is what the team is working on today, and we expect to have the production version ready in Q1, between January and March. A rough sketch of the multi-turn pattern follows.
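Coveo hasn't detailed its implementation, but the usual shape of a multi-turn RAG loop, condense the history into a standalone query, retrieve, then generate, looks roughly like this. Every helper is a hypothetical stand-in, stubbed so the sketch runs.

```python
from dataclasses import dataclass, field


def condense(history: list[tuple[str, str]], question: str) -> str:
    """Stand-in for an LLM call that folds prior turns into a standalone
    query, so a follow-up like "and for tables?" keeps its full intent."""
    context = " ".join(q for q, _ in history[-3:])
    return f"{context} {question}".strip()


def retrieve(query: str) -> list[str]:
    """Stand-in for hybrid/semantic retrieval over the index."""
    return [f"passage matching: {query}"]


def generate(query: str, passages: list[str]) -> str:
    """Stand-in for grounded answer generation."""
    return f"answer to '{query}' from {len(passages)} passage(s)"


@dataclass
class Conversation:
    history: list[tuple[str, str]] = field(default_factory=list)

    def ask(self, question: str) -> str:
        standalone = condense(self.history, question)  # resolve follow-ups
        passages = retrieve(standalone)                # ground every turn
        answer = generate(standalone, passages)
        self.history.append((question, answer))        # the added memory
        return answer


chat = Conversation()
chat.ask("How do I configure a thesaurus rule?")
print(chat.ask("Does it apply to semantic retrieval too?"))  # follow-up
```

The design tension Oscar flags is visible right in `condense`: the more history you fold into each turn, the more material the model has to misread, which is why grounding every turn in fresh retrieval matters.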
That's going to be available for all CRGA customers to test and, we hope, adopt. We're putting a lot of sweat and effort into making sure it stays reliable and grounded, and that the added context doesn't make the LLM hallucinate, because that's the biggest risk: the more context you carry, the more chances the LLM has to start making up facts as it sorts through that bin of information. And honestly, this is an intermediary step, because what we're really working toward with this RAG evolution is agentic RAG. We talked about it in our last New in Coveo, but we really want to get to autonomy, where the search system can plan, search, and synthesize proactively, so the system, not the user, does the heavy lifting. Agentic RAG means we orchestrate retrieval tools and reasoning steps, so we can tackle more ambiguous questions that need reasoning and decomposition, while making the most of all the information in the corpus. Agentic RAG moves beyond conversation; it still needs conversation, because we want to keep the back-and-forth with the user, but it's able to reason and use all the available search tools, maximizing the impact of your knowledge base and providing answers without much work from the user. That's the one-two-three progression driving the knowledge team across product, design, and development, with conversational RAG as the immediate next step. Expect more updates at the next New in Coveo in early March, when that feature will be ready and we'll get into all the nitty-gritty details. The longer-term plan for 2026 is to get agentic RAG into production.

And with that, a quick recap. Some of the features Scott and I have shown are about integrations: Coveo for Agentforce 2.0 and the hosted MCP server. There's the big move for CRGA to Bedrock LLMs rather than OpenAI. And I've shown a few new features, such as CRGA for Cases and retrieval improvements like the structure-aware processing and the Thesaurus support, along with admin features such as the prompt enhancement and the CRGA dashboard. The next big things, because there are plenty of small things we don't have time to cover today, are conversational RAG and then agentic RAG. And with that, I'll pass it back to Lynsey.

Thank you both for covering all of that. There's no shortage of new content here, and a lot of exciting things to come. Quickly, on screen: if you want to keep hearing more, I encourage you to check out the sessions we ran last month, available if you scan the QR code. We've also seen a lot of good questions come through, so let's take some time for a live Q&A. We've answered some in the chat, so thank you all for jumping in and following those, but here are a few to put to Scott and Oscar live. Starting with Agentforce: it sounds like a big step forward. What's the biggest benefit teams will notice right away with the Agentforce 2.0 updates? I mean, the biggest win we're seeing right now, beyond case assist and case resolution, is creating new knowledge content and then actually taking action on it, like writing an email from it.
The number one quote we've received from a customer using it is that it went from an "okay" moment without Coveo to a "wow" moment as soon as they added Coveo to their solution. There's probably also a cost-benefit analysis to be made between using Salesforce Data Cloud as your storage and retrieval system versus using Coveo. Without going into the details, I definitely encourage you to do a cost comparison, because the underlying Agentforce storage solution can be quite expensive depending on the amount of content you have. So beyond the performance, you're also optimizing your spend. Awesome, thank you for that.

The next question is about the Bedrock migration: what does it mean for customers already using Coveo's generative features today? On our end, the migration to Amazon Bedrock creates a unified AWS cloud infrastructure. How customers benefit is that we can go beyond the performance upgrades I mentioned earlier and embrace future opportunities like model flexibility. Today, Bedrock's foundation model library supports eleven providers and twenty-eight different model families, roughly triple what we have access to today, so it really expands the breadth of opportunities, since specific models specialize in specific use cases down the road. We'll probably also benefit from fewer migrations, because OpenAI's pace of releasing new models is just insane and keeps us on our toes, constantly migrating to the most appropriate version. With Bedrock, we expect fewer and smoother migrations, giving R&D bandwidth back to new features and saving you time on testing all the changes. Amazing.

I see some excitement here around conversational RAG. The question is: can I use generative answering with multi-turn, conversational RAG in my chatbot? Yes, and that's one of the big reasons we're moving toward this. You can already use CRGA in a chatbot; we have several customers doing it as a custom implementation. But adding memory, keeping the previous questions and answers in the LLM's context, will definitely provide a better experience, especially for chatbot use cases, which are by definition a more conversational surface. So yes, it's going to be available for every surface and channel, whether it's a search page or an agent panel, but we're really keen to see how positively it affects the chatbots and custom chat integrations out there. Amazing, thank you for double-clicking on that one.

So with that, let's wrap. If you have further questions, please keep the conversation going. We'll send a recording of today's event in the coming days. And with that, thank you everyone for joining us; I hope you all have a great day. Bye.

New in Coveo for CX & EX - Fall 2025

Explore what’s new across Coveo’s AI Relevance Platform, from Agentforce actions to a growing API suite and tools for both managed and custom builds. See how Coveo grounds GenAI to scale relevance, optimize accuracy, and prove ROI. Whether you're building with AWS Bedrock, extending SAP Joule, or looking for a turnkey solution for a support experience, this is foundational AI for every point of experience: customizable and built for what's next.

Lynsey Price
Senior Product Marketing Manager, Coveo
Oscar Péré
Senior Product Manager, Coveo
Scott Ferguson
Senior Product Manager, Coveo