Good morning, good afternoon, or good evening, depending on where you're joining us from. My name is Juanita Olguin. I lead product marketing here at Coveo. Today I'm excited to be joined by my two colleagues, Matthew and David, and we're gonna take you into what's new, plus a little preview of what's upcoming as well. If you're joining us, you probably listened in to Relevance 360, so this is our continuation of sharing the latest product innovations, hearing from experts, and showcasing our customers and the amazing things they're doing. Before we jump into our core content, I did want to put up a disclaimer. We will be talking about future innovation, so as always, please refer to publicly available information when making any purchasing decisions. Now, we would not be the AI relevance company if we didn't start by talking about the latest and greatest in technology, which is agentic AI. This is the new big thing. Agentic AI is the next breakthrough capability, and we can see analysts sharing how they believe it's going to become a competitive necessity. We already see a ton of predictions about the value agentic AI will bring. Here you can see that it's expected to resolve 80% of common customer service issues without human intervention by 2029. That's just a few years away, and it's quite a big prediction, but definitely one we also see happening and trending toward. Now, we know agentic AI is a little bit newer; it's only been the last few months, maybe up to a year, that people have been talking about this breakthrough technology. But as you can see, enterprises and organizations are testing, about 12% are deploying, and they're really starting to consider implementing this truly breakthrough capability.
Now, for those of you joining us, you may be wondering what agentic AI is, and we borrowed a few definitions from our friends at Forrester, TechTarget, and Gartner. Really, it's a system of foundation models, rules, architecture, and tools that enables software to flexibly plan and adapt to resolve goals by taking action. What I really liked about the Gartner chart, as a way to interpret this new capability and what it is able to do, is that it takes you from low agency, so more static and reactive, to higher agency: more autonomous, more proactive, able to plan and do a lot on its own. This is truly where the market is going, and it is very real. And we're excited to say we will also be building agentic RAG into our platform to deliver those dynamic and adaptable experiences. Now, this is a future feature and capability that we're going to be working into our platform, so we'll be able to do things like understand, reason, retrieve, and answer, still staying true to what Coveo does best behind the search: the retrieval, the relevancy, but now within this agentic framework and new way of working. So you can expect to hear more about this in the future. But, of course, today is about what's here and available now. So we do want to invite you to join our agentic AI design partner program; we'll drop the links here. In this program, we'll share what's available for you to use today and start getting value from, and if you wanna learn a little more about our futures, we'll be happy to share that as well. I also wanted to highlight that Sebastien Paquette, our VP of machine learning, and myself will be doing an agentic master class in just a couple of weeks, where we'll go a little deeper into unpacking what it is, what the use cases are, and how it works.
So we'll give you a good education and try to simplify the way you're thinking about this; we invite you to join us for that. Now, again, we wanna talk about what's here, what's now, what's real, and that is truly what New in Coveo has been for us over the last few years. What we're going to cover today are the items you see in blue. We have three categories of innovations: agentic and custom AI solutions, innovations on our managed generative answering capability, and an update to our native integrations. You can also see items here with blue dots around them; we won't cover those. These are existing capabilities that many of you, our customers and partners, probably already know about, probably already use, and are getting value from. But we wanted to highlight them to show you there's a holistic picture and a holistic plan for what we're trying to do here. On the API side, we're really strengthening what we're able to do to help make your builders' lives a little easier. And on the native integration side, we'll talk about ServiceNow today, but it's a reminder that we have native integrations into some of the most popular platforms out there. So I will pass it over to Matthew and David to take you deeper into these different areas. We'll be monitoring the chat, so please submit your questions, and we'll have a Q&A towards the end. So, Matthew, over to you. Awesome. Thank you very much, Juanita, and I'm very happy to be here today to share some of the latest and greatest innovation we've been working on at Coveo. The first thing I'll be discussing is agentic and custom AI solutions. Right?
So, obviously, as Juanita just shared, there's a lot of hype, a lot of excitement, but we've been working hard at Coveo identifying how we can actually help our clients gain more value from the agentic frameworks out there, or how we can help our clients build custom AI or custom agentic solutions for their organizations. The first thing we did, and we're quite glad to announce this, since we have a large number of clients that use Coveo within Salesforce, is deploy our first agentic integration, Coveo for Agentforce. It essentially allows our clients to ground Agentforce with secure, accurate, and relevant knowledge from Coveo: tapping into Coveo's large indexing and retrieval capabilities, keeping the item-level permissions from different repositories, and bringing that into Agentforce with highly accurate retrieval and passages to make Agentforce agents more intelligent, smarter, with essentially a broader set of content they can retrieve an answer from. So that first integration, in a nutshell, is augmenting AI agents with Coveo for Agentforce, bringing our indexing, retrieval, relevance, and classification capabilities into Agentforce with a Coveo-powered answering action, which is the first action we released. It essentially lets you retrieve passages from Coveo's index, from Coveo's accurate retrieval, to power question-answering use cases within agents or a bot, whether it's in a self-service or a Service Cloud use case, for example, for human agents. But we're also working on bringing additional Coveo capabilities into Agentforce, such as case classification, which we've been doing very successfully for many years, and also case resolution, as we specialize in delivering accurate retrieval.
Regardless of what the query might be, whether it's a simple question or a case subject and description that requires a bit of fine-tuning, we find the best solution for that question or that case. So we're quite excited about this. This is something we're inviting our clients to try and give us feedback on. It's our first integration in the agentic world, but through the program Juanita shared earlier, we're obviously exploring different integrations and different agentic frameworks that could certainly benefit from Coveo's retrieval capabilities. Another interesting update: you may have been aware of the Passage Retrieval API. That's the API that powers and retrieves passages within the Agentforce integration we just looked at. But this API is also available for our clients to use to deploy custom LLM solutions, or to bring Coveo's retrieval capabilities into their own agentic framework or their own agentic AI integration. So we're making improvements to that API, essentially in order to scale and to be able to provide this to large enterprises that need large amounts of content, where you're set to get all the benefits of Coveo: our secure connectivity, our unified hybrid index, our relevant, segmented retrieval delivered to you in accurate passages for an LLM to consume. Think of this as search for LLMs. Right? Search for AI as opposed to search for humans. So for this to work for enterprises, we're bringing additional improvements such as support for up to 50 million chunks, so you can cover large amounts of content from various repositories and feed it to your LLM. We're also going to support up to 20 passages returned at the same time. So if you'd like to consume more than just five passages, if you wanna give more context to generate an answer or do a complex task, that's something we'll support.
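To make the "search for LLMs" idea concrete, here is a minimal sketch of what consuming retrieved passages looks like on the client side: a list of passages comes back from retrieval, and you assemble them into a grounding prompt for whatever LLM you run. The passage field names and the stubbed data are illustrative assumptions on our part, not the actual Passage Retrieval API response schema.

```python
# Sketch: turning retrieved passages into a grounded LLM prompt.
# The passage structure below is an illustrative assumption, not
# the real Passage Retrieval API contract.

def build_grounded_prompt(question, passages, max_passages=20):
    """Assemble a prompt that grounds an LLM on retrieved passages.

    max_passages mirrors the session's note that up to 20 passages
    can be returned per query.
    """
    selected = passages[:max_passages]
    context = "\n\n".join(
        f"[{i + 1}] {p['text']} (source: {p['source']})"
        for i, p in enumerate(selected)
    )
    return (
        "Answer the question using only the passages below. "
        "Cite passage numbers.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )

# Stubbed retrieval result standing in for an API call.
passages = [
    {"text": "Reset the router by holding the button for 10 seconds.",
     "source": "kb/router-reset"},
    {"text": "The status LED blinks amber during a factory reset.",
     "source": "kb/status-led"},
]
prompt = build_grounded_prompt("How do I reset my router?", passages)
```

The prompt string would then be sent to your own LLM; that last step is omitted here since it depends entirely on which model and SDK you use.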
And in the same optic of supporting enterprise needs, we're also increasing our max queries per second so we can support large amounts of queries very fast, with up to 5 queries per second supported. Now, that's if you wanna get passages from Coveo and build this into your own LLM use case, whether it be a custom chatbot or your own agentic AI. But we've also been seeing clients that just want a final answer from Coveo. Our clients have actually seen a lot of success over the last year with our generative solution, where we provide a generated answer using the best passages possible. We've decided to make this available via API so clients can use the answers provided by Coveo, where we manage the prompt, we manage the complexity, and we essentially deliver an end-to-end answer to a question that you can then use directly, whether it be in your bot, copilot, or agent framework. So looking at the same platform, the same ecosystem we've looked at before, we're also able to deliver answers via API. We manage the prompt, we take care of the generation for you, and with a simple query to that endpoint, you'll get a generated answer that is accurate and relevant from Coveo. So we're looking at fast, accurate, secure generated responses returned from a simple payload, which you can then use in any use case you'd like, and it's gonna be accessible from everywhere, of course. So if you don't wanna use an out-of-the-box component that we offer for search experiences and you wanna bring this into something more custom, a chatbot, again, a copilot, any agentic framework, well, you'll be able to get those answers from Coveo directly within seconds using the Answer API. Now, we've looked at a few use cases. Right?
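A quota like the 5 queries per second mentioned above usually needs a small client-side throttle so your integration doesn't burst past it. This is a generic pacing sketch, not anything provided by Coveo; the clock is injected so the logic is deterministic and testable without real sleeps.

```python
# Sketch: client-side pacing so calls to a rate-limited endpoint stay
# within a quota such as 5 queries/second. Generic illustration only.

class QueryThrottle:
    def __init__(self, max_qps, clock):
        self.min_interval = 1.0 / max_qps  # e.g. 0.2 s between calls at 5 QPS
        self.clock = clock
        self.last_call = None

    def wait_time(self):
        """Seconds to wait before the next call is allowed."""
        if self.last_call is None:
            return 0.0
        remaining = self.min_interval - (self.clock() - self.last_call)
        return max(0.0, remaining)

    def record_call(self):
        self.last_call = self.clock()

# Deterministic demo with a fake clock instead of time.time().
fake_now = [0.0]
throttle = QueryThrottle(max_qps=5, clock=lambda: fake_now[0])
throttle.record_call()
fake_now[0] = 0.05          # only 50 ms later; 5 QPS needs 200 ms spacing
wait = throttle.wait_time()  # roughly 0.15 s still to wait
```

In real code you would `time.sleep(wait)` before issuing the next request; the production-grade version would also handle concurrency, which is out of scope for this sketch.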
So we've spoken about generative answering, our out-of-the-box solution that provides generative answers within the context of a search page for self-service, case deflection, support agents, and websites. David will talk a little more about the innovations we're continuing to make on that solution. And we just spoke about our newer APIs: the Answer API, which gets you that same answer, and the Passage Retrieval API, which gets you passages to feed your own LLM or your own agent framework. So how should you choose between these different solutions? We built this view to help you choose and understand where you'll benefit the most depending on the use case. If you're looking for something out of the box, you want AI search and generative answering, you don't wanna manage your prompt or use your own LLM, and you want something that's ready to deploy fast for existing search use cases, whether it be self-service, case deflection, intranets, or websites, then Relevance Generative Answering will be the right solution for you. It's fast time to value and easy to integrate with our UI frameworks. It comes with robust analytics for you to understand the value and impact it's having on your business, as well as evaluation tools, which David will also touch on, to understand the answers and the quality and outcomes you'll be getting. So it's ideal for generative answering and knowledge discovery use cases. Now, if you're looking to get the same answers, the best-of-breed retrieval and relevancy, grounded answers, but for your custom application, whether it be a chatbot or copilot, where you want us to manage the prompt but you'd like the flexibility to deploy this everywhere, this is where the Answer API comes in.
So you'll get the same answers as our out-of-the-box solution and component, but delivered to you via API, which gives you, again, the flexibility to bring this into a chatbot, copilot applications, Q&A, or AI agents as you see fit. And then finally, the Passage Retrieval API. This is if you wanna build it yourself. You wanna use the retrieval from Coveo, our large indexing capabilities, our accurate retrieval capabilities, and bring this into your own LLM-based application: an advanced retrieval mechanism with the flexibility to use your own LLM, your own prompt, your own use case, whether you want it to summarize content, write an email, or talk like a pirate. The choice is yours to take those passages, build your prompt, and deliver this into your own solution, where you'll have full control over the prompt, but where we'll be there to ground those LLMs with accurate, secured content. So it's ideal for custom gen AI applications and agentic applications that require Coveo's advanced grounding and retrieval, where you wanna have more control over the delivery of those answers. With that said, I'll pass it over to David to talk about some of the great innovation we've been working on for our key solution here, generative answering. Over to you, David. Thanks, Matthew. Awesome job presenting something that's a bit complicated and making it very simple. So as Matthew just mentioned, we've been continuing our advancements on Coveo Relevance Generative Answering. As much as we've been investing in our APIs with the Passage Retrieval API and the Answer API, our focus for 2024 was also very much making sure CRGA, or RGA, was available across all of our touchpoints. So across the year, you would have seen it available within your SaaS-based products, on your sites, and on employee portals.
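The decision Matthew walks through boils down to two questions, and it can be captured as a tiny decision function. The option names match the talk; the two boolean criteria are our paraphrase of his framing, not official product guidance.

```python
# Sketch: the "which solution should I use?" logic from the session,
# condensed into a decision function. Criteria are paraphrased.

def choose_solution(custom_ui, own_llm):
    """Pick between the three options discussed in the session.

    custom_ui: you are building your own chatbot/copilot surface
               rather than using the out-of-the-box search components.
    own_llm:   you want to control the prompt and run your own LLM.
    """
    if not custom_ui:
        return "Relevance Generative Answering"  # out of the box, fast time to value
    if own_llm:
        return "Passage Retrieval API"           # passages only; you own prompt + LLM
    return "Answer API"                          # managed answer, delivered anywhere
```

For example, a team embedding a managed answer inside its own chatbot lands on the Answer API, while a team with its own LLM pipeline lands on the Passage Retrieval API.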
The latest additions were through your case submission flow, as well as within your chatbots, if that's something of interest to you. And lastly, within your service app for agents, on the desktop. What that means, essentially, at the end of the day, is that as of late 2024, CRGA is available across every single touchpoint, completely out of the box and nearly no-code, I wanna say, whether it's through our builder-based solutions or through our Coveo UI libraries, which makes it very easy and accessible for all of you to have across all of your experiences. Now, that's great, and that's awesome. But as we've been making those investments, we've also been doing a lot of investment in the background to improve the relevance and the scale of RGA across all of these touchpoints. What that means is our continued investment in having access to the latest GPT models. In this case, it's the GPT-4o mini model that we've been rolling out with a lot of our customers to have available across the board. The migration is currently in progress as we speak, and I think as of this week or next week, every single RGA model within the Coveo platform will be powered by GPT-4o mini. We've seen amazing improvements just from migrating to this model: an average answer rate of about 76%, an average increase of 6% since the last model, and up to 15% for certain customers. That's a massive uptick in answer rate. And that's not the last one. This is something we'll continue improving on, the models we use in the background, to ensure you have the best quality answers at the best pace, with the best performance possible. On the scale side, what we've also been doing is making sure we support additional languages.
So that's something we had released last year: making sure we support more languages and can handle different content languages. And with that, we've been adding new languages, especially to help support the EMEA market, which requires a lot of languages. Obviously, we're making sure we can tailor for all of these markets. Now, the interesting bit is that having different languages is great for enterprise-grade reliability, but at the same time, a lot of you might not have all of your content in all these languages. You might just have content in a single language, which is probably English. So with that, we've also made sure we support these languages with the ability to translate. What you can see here is a German interface, a German locale, with English content and English citations. The citations work as expected in English, but the answer gets generated in German. It simply takes that specific locale and generates the response from the original language into German. All of the languages listed are available. Note that we have personally tested the ten languages in the left-hand column. The ones in the right-hand column are not fully tested on our side, but can absolutely be tested on your end and used out of the box, with the caveat, obviously, that they haven't been fully tested on our part; we can absolutely support in that testing phase when and if needed. Now, as the final bit on the implementation of RGA, something we've been doing behind the scenes, which some of you are aware of and some are not, is making sure the implementation of RGA is as simple as possible.
In the past, a lot of the implementation required setting parameters through a JSON configuration. As of this week, you'll be able to completely set up your RGA model and its parameters out of the box without ever having to touch a JSON: establishing the number of items to consider and making sure you have the right relevance threshold for your chunks or passages, essentially turning up the strictness of the model or making it very loose so it answers as many questions as possible. And lastly, ensuring that rich text is fully available out of the box. It's a feature we released last year, and we wanna ensure it's available completely out of the box as you create a new model. This will ensure the best output quality for the response, with proper formatting and everything that comes with that. So that's something that's gonna be really great if you wanna create new models or tweak your existing model. Now for one of the large pieces we presented at New in Coveo last year that was not available yet. As Juanita mentioned, we do talk about some items that are upcoming; now we can say this one is fully available. As of this quarter, Q1 2025, the Knowledge Hub is now available in beta for all customers that have an RGA license with Coveo. It comes with two key features, which I will go through: the Answer Manager and the Chunk Inspector. Its main output is ensuring that you have accurate, reliable, and compliant responses. Now, why the Knowledge Hub? What is the Knowledge Hub?
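The two knobs described above, a relevance threshold for chunks and a cap on how many items are considered, behave roughly like the filter below. This is a paraphrase of the described behavior for illustration, not Coveo's internal implementation, and the score scale is an assumption.

```python
# Sketch of the two RGA knobs described above: a relevance threshold
# ("strictness") and a cap on how many chunks are considered.

def select_chunks(chunks, threshold, max_items):
    """Keep only chunks scoring at or above the threshold, best first."""
    eligible = [c for c in chunks if c["score"] >= threshold]
    eligible.sort(key=lambda c: c["score"], reverse=True)
    return eligible[:max_items]

chunks = [
    {"id": "a", "score": 0.91},
    {"id": "b", "score": 0.42},
    {"id": "c", "score": 0.77},
]
strict = select_chunks(chunks, threshold=0.75, max_items=5)  # high strictness
loose = select_chunks(chunks, threshold=0.30, max_items=5)   # answers more often
```

Turning the threshold up trades answer coverage for answer confidence: the strict setting keeps only the two best chunks here, while the loose one keeps all three.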
The Knowledge Hub is the new platform dedicated to the gen AI world within the Coveo tooling and platform. It focuses specifically on generative AI: whether it's the Passage Retrieval API, the Answer API, Relevance Generative Answering, or everything in between, whatever flows through Coveo will pass through the Knowledge Hub, to ensure we have control over those generative responses. It also ensures RGA transparency, and by RGA, in this case, we do mean the out-of-the-box component, but it can also work for, again, the Answer API or the Passage Retrieval API, to ensure that we open the black box that is generative AI. Obviously, a lot of you on the call have been trying or have already implemented generative AI in one of your solutions, and you know how difficult it can be to understand how an answer gets generated. Well, we wanna put that in your hands: a full understanding of how an answer gets generated with Coveo. The last bit is an increase of control and autonomy. We want you to feel empowered coming onto the Coveo platform, feeling like you know what you're doing and can take control of the output of the generated responses if you need to, whether it's blocking a specific query or being able to report on successes, repeat them over time, and ensure you have the right quality responses on the back of that. The way that translates into features: the first feature available is the Answer Manager. The Answer Manager lets you collect qualitative feedback on your RGA responses, whether they're helpful with a thumbs up or not helpful with a thumbs down, and lets you take action, most importantly on the ones that are not helpful, for example applying a blocking rule to block any harmful queries, or any queries you would like blocked from being answered at all.
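To picture what a blocking rule does, here is a minimal sketch of phrase-based query blocking. In the product, blocking is configured inside the Answer Manager itself; the matching logic below (case-insensitive substring match) is purely an assumption for illustration.

```python
# Sketch: what a blocking rule conceptually does to an incoming query.
# The matching strategy here is an illustrative assumption, not the
# Answer Manager's actual rule semantics.

def is_blocked(query, blocking_rules):
    """Return True if the query matches any blocked phrase (case-insensitive)."""
    q = query.lower()
    return any(rule.lower() in q for rule in blocking_rules)

rules = ["internal roadmap", "salary data"]
blocked = is_blocked("Show me the INTERNAL ROADMAP for 2025", rules)
allowed = not is_blocked("How do I reset my password?", rules)
```

A blocked query would simply never reach the answer-generation step, which is the guarantee knowledge managers are after.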
This ensures you don't have to touch any part of the admin console, which is extremely powerful in Coveo but can be quite complex to use and is more tailored for implementers. This tool is really tailored for knowledge managers who wanna be able to quickly take action without touching anything else in the implementation. It ensures you have clear visibility on all the feedback coming through for your generative answers and can act on it efficiently. The second part, as I mentioned earlier, is to crack open that black box. And what better way to crack open that black box than to literally show you how an answer gets generated: which chunks, which passages, which parts of documents were used to generate a response. With the Chunk Inspector, you can clearly see which chunks are being used, which parts of the text were used, whether they were sent to the LLM or not, whether they were cited by the LLM or not, and, at the end of the day, whether they were used to generate a response or not. All of that is at your fingertips, using a combination of the Chunk Inspector and the Answer Manager to troubleshoot end to end. Now, as much as we've been investing in the Knowledge Hub, and as I mentioned earlier, we haven't given up on our tried and true native integrations. This is something we will continue investing in. As much as we love generative AI, and we will continue loving it, the first true value we've been able to provide with Coveo is our native integrations, and that's something we'll continue doing. The latest investment we've been making in our integrations is in our ServiceNow library. ServiceNow has been one of our largest integrations at Coveo, and as you probably know, some of you already have a ServiceNow integration.
It's been using a lot of the legacy tooling we had in place, which was great in some ways, but extremely uncontrolled and really custom. What we've made sure to do is have our ServiceNow integration support and use our Atomic library, which ensures you have access to the latest and greatest innovation from Coveo: faster time to value and the latest innovations, whether from generative AI or elsewhere, within that ServiceNow integration. The current Atomic ServiceNow package, excuse me, is currently in review with ServiceNow and should hopefully be available this week or next week. We'll make sure to communicate very clearly to everyone on the call when the ServiceNow Atomic package is officially available and accessible to all of you. With that, I think I will pass it back to Juanita. Thanks, David and Matthew. Well done. So many great innovations. I'm really excited about the ServiceNow one too. So that was a lot, but I wanna remind everyone that at New in Coveo, we do try to talk about some of our major innovations and major capabilities that we're releasing. Of course, we have a bunch of other enhancements, other things we're improving and releasing constantly. So I'd like to direct you to learn a little more by visiting our Level Up site, where you can take courses on some of the latest features and capabilities covered by David and Matthew. We also have our New in Coveo pages online, where you can click through a bit more, and they'll take you directly to our doc site if you wanna dig deeper into implementation. So lots of online resources for you to engage with. And, as I mentioned, we do wanna invite you.
There is a QR code here you can scan to register. Sebastien and I are gonna unpack agentic AI, what it means, and bring the Coveo lens. And if you know how we've been operating, you know we always say we're last to hype, first to results. So we're going to continue that same theme and really try to break this down and bring you what's real and what's usable. Maybe agentic isn't for you; maybe there are existing technologies you can leverage as well. We'll unpack all of that at this workshop in the next week and a half. I see a lot of questions coming into the chat. You can still submit your questions; we'll take them one by one and answer as many as we can. So I'm going to invite Matthew and David to join me on the screen. And I think you've already kind of started answering some, Matthew, but shall we go through them? Oh, yeah. Certainly can. Do you wanna talk more about the cost for a Coveo for Salesforce license and passage retrieval queries, and where information can be found on those? Yeah. So all our solutions are really tailored to our clients, so I'd highly suggest you reach out to your account manager, account executive, or customer success manager, whoever's working with you, to get proper pricing. When it comes to Agentforce in particular, the Agentforce integration itself is essentially free as long as you have a Coveo for Salesforce integration, and you'll need passage retrieval queries to actually use the Passage Retrieval API to ground Agentforce with accurate passages. So that's essentially it. But for the specifics of each entitlement, again, I highly suggest reaching out to your account executive and account manager, who'll be able to guide you and find the right solution for your needs. Thank you for that. And if I can add to that: at Coveo, we do charge in two ways.
We do have seat-based licensing, so if you're purchasing for the agent use case or for internal employee use cases, there's that. We also have consumption-based licensing: you would have heard us talk about QPMs, queries per month, and generative queries per month. So we have a consumption basis as well, and those are the two for you to consider. I'll move on to the next, which is for you, David. Can the RGA feedback be customized, and can Knowledge Hub reports be customized for different metrics? That's a great question. So the out-of-the-box feedback is, obviously, out of the box and more generic. That's very much intentional on our part, as we want to ensure that most of the feedback coming through will help us train the models in the future and get better outputs there. Now, you can customize some of the responses as long as the intents behind the questions stay the same. But in the future, actually from Q3 onwards probably, we'll try to make that more accessible through our UI libraries as well and allow more customization around the feedback itself. On the reports themselves, there are multiple ways to look at it. We do have reports that allow some customization in our admin console. We are also looking to add out-of-the-box reports in the Knowledge Hub that are more standardized and dedicated to gen AI specifically. But you also have access to a Snowflake reader account if you'd like more customization. Using that Snowflake reader account, you can take the data directly from Snowflake and create your own dashboards and your own reports on your end, with any BI tool you might wanna use. So we have all three options available, and a lot of flexibility there, if needed. Thank you for that.
Maybe along the same lines, and it's a little granular, but I'll ask it to you; I think it's maybe not too granular. Is it possible to enhance the Answer Manager to display user information? Is user ID a good enough start? For example, a customer may need to get ahold of an end user. That's a good question. So in terms of adding user information, we're intentionally refraining from providing dedicated user information within the Answer Manager. However, you can expect in the future, as we add more capabilities to the out-of-the-box report we're building this quarter, which should be available by the end of the quarter, that we add more information about the content and the context of the query, so you can understand a bit better where the feedback is coming from. In terms of who provided that feedback, specifically in the Answer Manager, there are other additional tools we'll be working on within the Knowledge Hub to let you evaluate directly in the Knowledge Hub without having to use the feedback modal in the end-user experience, which would then give you more control over who's evaluating and how, and split that experience between the two. So, to make the answer a bit shorter: today, no, but in the future we might be able to add more elements about who the user is and what they asked. Awesome. Thank you for that. Matthew, I'm gonna turn this next question over to you. What is meant by case classification when we talk about case classification? That's a great question. So we've had this capability for a little while. When you go and open a case using our Case Assist flow or a custom case form, you can essentially call a model we have called case classification, which learns from past cases that were closed and accurately predicts how to classify the new case being created.
So that if it does get created, of course, our objective is to then use this information to deflect the case and avoid a case for a known issue from being created. But if the case is for a new issue and it does get created, we'll ensure that it gets routed to the right team. Right? So that's at creation time. So what we're looking to do with Agentforce is to bring that same case classification capability inside Agentforce. So a support agent that gets a case that's not properly classified can essentially ask Agentforce to classify the case, and using Coveo's case classification capabilities, it will provide the classification, which can then be applied to the case. The API also would technically allow for cases to be automatically classified before the record is even seen by a human agent. So lots of interesting possibilities there that we're exploring, and this is why we're bringing that capability into Agentforce. Thank you for that. David, this one's for you. Can you share more about the translation capability? How reliable is it? Are you using any particular machine translation to support localization? Yeah. So in terms of the translation, as you would have seen in some of the slides, there are two columns: one that is already available, one that says upcoming. The one that's available today has been fully tested on our part, and we fully trust it, and it is as performant as our English responses. In terms of the ones that are upcoming, as I mentioned earlier, they are not tested on our part because there are way too many of them. So there might be some variance in some of the results that you get. But we have the team available to support if there are some of these languages that you're interested in, just to collect some feedback and ensure we can provide improvements there.
In terms of how we power that, it's using an E5 encoder on our side, which is more standard in the market today, that just makes sure that those answers can get translated. Excuse me. And so it should provide pretty standard market responses, as others would see it. But obviously there could be variance there, so it would be a case-by-case basis to understand those variances, if needed. Thank you for that. We have a little bit of a follow-up here, which is: can RGA generate answers from mixed-language content? David, I'm not sure if you know this one. I think it should. I don't think it is specific to any language, because, again, the encoder would be in place and standardized across the board, and it would just translate directly from that specific content. So you wouldn't have to change anything, because the encoder would already be standardized for those multiple languages. Correct. Our retrieval is multilingual. So even if the query is half and half and the document is also half French, half English, for example, we'll be able to retrieve it. But the language of the generated answer will just be in one language, depending on the language that the user is using. So it's based on the user's locale, which is something that our clients can also override, should they want to have a drop-down selector, for example, to allow users to choose the language of the output. Thank you for that. We do have a question here around QPM volumes and being able to control or set rules on when native answers get generated. I can answer that one, Juanita, I think. So just to clarify: for generative answering, the metric to track is GQPM, generative queries per month. So there is a separation between queries per month, which is more classic search, and GQPM, which is generative queries per month. There is a way to control that output.
So, obviously, the model will only respond from content that you've set within the Coveo platform to answer from. So it doesn't have to answer from all the standard content that you have today. So imagine you have a total of a hundred thousand documents within your Coveo index today, and you would like the model to just answer from the top fifty. Then you can absolutely decide to do that using our query pipeline rules and control the output there. There are also other ways to control that. If you want to control for specific queries, whether they're harmful queries or, as I mentioned earlier, let's say mentions of a competitor or anything like that, well, you can also do that within the Knowledge Hub directly and set those up. But this is more of a case-by-case basis and doesn't scale really well. So the idea would be to ensure that you only select the documents that you want the answer to be generated from, which can be easily controlled within the Coveo admin console, and our team can help you set that up pretty efficiently. Thank you. Nice thorough answer. Maybe this is a question for Matthew. Yes. There's a requirement: we'd like to get the generated answer from Gen AI in the search results API. Will passage retrieval support that use case? I'm sorry, if you'd repeat the question, I don't have it on my screen here. Yeah. Sorry. There's a requirement where they want to get the generated answer in the search results API. Will passage retrieval support this use case? Not exactly. So if you want to get an answer from Coveo that's using our out-of-the-box component, then it's going to be made available in search. If you'd like to use passage retrieval and then write your own answer and then make those available in search results as well, that can be done, but it would be more of a custom deployment.
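A rough sketch of that custom deployment, assuming a hypothetical passage-retrieval endpoint and response shape: the URL path, payload fields, and response keys below are placeholders, not the documented Coveo API, so confirm the real contract with your account team or the API reference.

```python
# Sketch: retrieve grounding passages, then build your own LLM prompt.
# Endpoint path and field names ("passages", "maxPassages") are assumptions.
import json
import urllib.request

def retrieve_passages(query, token, org_endpoint):
    """Call a (hypothetical) passage-retrieval endpoint and return passages."""
    req = urllib.request.Request(
        f"{org_endpoint}/passages/retrieve",  # placeholder path
        data=json.dumps({"query": query, "maxPassages": 5}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["passages"]

def build_prompt(query, passages):
    """Assemble a grounded prompt for whichever LLM you bring yourself."""
    context = "\n".join(f"- {p['text']}" for p in passages)
    return (f"Answer using only the passages below.\n"
            f"Passages:\n{context}\n\nQuestion: {query}")

passages = [{"text": "Reset tokens expire after 24 hours."}]
print(build_prompt("Why did my reset link stop working?", passages))
```

The generated text could then be merged into your own search-results rendering, which is the "custom deployment" part a CS architect would help scope.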
So it's perhaps something that our professional services team, or your CSM, or even our CS architects could give you guidance on, to try to understand exactly what you're trying to achieve, what the use case is, which LLM you want to use, and how we can feed the passages to that LLM and then provide those answers in the context of search. I hope that answers it. But yes, I'd highly suggest reaching out to your CSM, a CS architect, or someone from professional services to help with this, so we're sure to understand your need and find the best solution for it. Yeah. Thank you for that. Maybe we'll just do one more here, and then any we don't get to, we can certainly follow up with you offline. I think this is going to be a little bit of an open one, but I'm curious to see what Matthew and David have to say. How are Agentic generated answers going to change the deflection calculation, implicit versus explicit? I mean, when I see this question, it's a big question. A million-dollar question. Yes. So do you guys want to take a stab? Yeah. I could take a stab at it. And, David, perhaps feel free to add afterwards with what we're doing to bring some of that reporting into the Knowledge Hub. But it's a very complex question, and we haven't seen a clear answer from the industry. So we're also trying to answer this question ourselves. When it comes to implicit deflection, we tend to try to calculate this as scientifically as possible on our end, at least at Coveo. Right? Looking at intents and looking at paths, and how many users go from a full search and then go to the case form, to try to understand and get a good idea of how many of those self-service successes could have led to a case, which we then deem to be an implicit deflection.
With CRGA being available in a full search, well, obviously, there could be instances where someone doesn't click on anything, doesn't interact with the answer, doesn't click on a citation, doesn't click on a result, and still gets what they need. So we still consider success on our end to be someone who has interacted in some way, shape, or form with the answer, whether it was to copy it, to like it, or anything like that. Much like explicit deflection, we'll obviously be looking at someone who started the flow and has not completed a case and hopefully has clicked on something. But because it's so hard to know whether someone who just saw an answer is a success or not, we're also starting to look more and more at the other side of the coin. So what is the submission rate? What is the rate at which people go to your full search and then start to open a case? What is the rate at which people start opening a case and then actually complete and submit it? And that helps us get a good idea of the impact that it can have, even if it's sometimes harder to measure, because they might just see an answer and leave. So we look at the outcome, whether a case was submitted, and what the impact was on that submission rate, to get a better idea of the impact on the success. And so far, we've seen tremendous results for our clients with CRGA, even with that difficulty to measure the impact. We'll look at visits that did get an answer shown versus visits that did not get an answer shown, and we'll typically see a very interesting uplift in self-service success and also a reduction in submission rate for those visits. So, yeah, we have a lot of data. This is something we're actively exploring. So if you'd like to have more discussions around this, I invite you to reach out to your CSM, and this is certainly something we can entertain.
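The submission-rate framing described above can be sketched as a small comparison between visits that were shown a generated answer and visits that were not; the field names are illustrative, not an actual analytics schema.

```python
# Sketch: comparing case-submission rates for visits with and without a
# generated answer shown. Field names ("answer_shown", "case_submitted")
# are hypothetical.
def submission_rate(visits):
    """Share of visits that ended with a case being submitted."""
    return sum(v["case_submitted"] for v in visits) / len(visits)

def answer_impact(visits):
    """Split visits by answer exposure and return both submission rates."""
    shown = [v for v in visits if v["answer_shown"]]
    not_shown = [v for v in visits if not v["answer_shown"]]
    return submission_rate(shown), submission_rate(not_shown)

visits = [
    {"answer_shown": True,  "case_submitted": False},
    {"answer_shown": True,  "case_submitted": False},
    {"answer_shown": True,  "case_submitted": True},
    {"answer_shown": False, "case_submitted": True},
    {"answer_shown": False, "case_submitted": True},
]
shown, not_shown = answer_impact(visits)
print(shown, not_shown)  # shown is about 0.33, not_shown is 1.0
```

A lower rate for the answer-shown group is the "reduction in submission rate" being described; it measures outcomes without needing to know whether any individual reader was satisfied.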
We'd love to get more feedback from the industry, as no one seems to have the perfect answer for this, but it's something that we're looking to address as scientifically as possible, to help our clients understand the impact of those solutions on their outcomes. With that response, Matthew, we're gonna need a blog authored by you. Awesome. Alright, thank you for your final question. We're gonna do it, because this is why we're here: to show you what we have, but to also answer your questions. So I'll invite Matthew or David to help me with this one. Have you had success thus far with entitlement-based content or secure content use cases? I can take this one. Well, the Coveo platform handles that completely out of the box, where we have all the tools in place to ensure that we can generate answers on secured content, and to ensure that there's a clear definition of who can see which type of content. So within the Coveo platform, we have very powerful, enterprise-grade control there, where you can create your own groups and ensure that only the users that are supposed to see that specific content can see it. Even though the model gets trained on the entirety of the content, RGA respects those security and permission rules and ensures that only the users that would have had access to that document can see it. So you can imagine a case where an agent would have different access than an end user on, let's say, a self-service portal. So that's an example where the agent would have access to additional content that would not be accessible to the end user on the self-service portal. And so if both the agent and the user asked the same question, they would get different responses.
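As a minimal sketch of that entitlement behavior: the same index serves both identities, but results are trimmed to each identity's permissions before any answer is generated. Coveo enforces this natively at the index level; the field names here are hypothetical.

```python
# Sketch: permission-trimming retrieved documents per identity before
# answer generation. "allowed_groups" is an illustrative field name.
def allowed_docs(docs, user_groups):
    """Keep only documents whose allowed groups intersect the user's groups."""
    return [d for d in docs if d["allowed_groups"] & user_groups]

index = [
    {"id": "kb-1", "allowed_groups": {"public"}},
    {"id": "kb-2", "allowed_groups": {"support-agents"}},
]
agent_view = allowed_docs(index, {"public", "support-agents"})
portal_view = allowed_docs(index, {"public"})
print([d["id"] for d in agent_view])   # ['kb-1', 'kb-2']
print([d["id"] for d in portal_view])  # ['kb-1']
```

Because the agent's document set is a superset of the portal user's, the same question grounds against different passages, which is why the two identities can receive different answers.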
Even though the model was trained on the same content, with the security layer in between, we ensure that only the right answer gets shown for each of those users. So, to answer your question shortly: yes, we do. Well, love these thorough answers and descriptions. Thank you for that. Alright. That is going to wrap up our New in Coveo session for you today. You will get the recording, and I believe you'll get access to this presentation as well, and therefore you'll get those links I shared earlier, so, yes, I see your question there. Thank you all so much for your time and your attention. Anything you want to follow up on, especially if you're a Coveo customer or partner, your account teams are there to support you and share more. Do stay tuned for that Agentic master class. We'll continue to bring you these New in Coveo sessions. So just a big thank you for your time and attention. And Matthew and David, great job explaining. Again, a complex topic, so I appreciate you. Thanks, everyone. Thanks, everyone.
New in Coveo for CX & EX - Spring 2025
Watch to get a front row seat to how Coveo is shaping the future of digital experiences across CX and EX:
- Next-gen GenAI innovations to simplify and supercharge knowledge discovery across customer and employee experiences
- Agentic AI in action with new packaged capabilities like Coveo for Agentforce, delivering guided, intelligent support experiences
- The power of our Relevance-Augmented Passage Retrieval API, built to ground any GenAI application in secure, enterprise-grade content
- Knowledge Hub insights to measure and optimize the quality and performance of generative answering
- Practical demos that show how to cut through complexity and deliver award-winning experiences across every digital touchpoint

Juanita Olguin
Senior Director, Product Marketing, Coveo

David Atallah
Product Manager, Coveo

Mathieu Lavoie-Sabourin
Product Manager, Coveo

