Hello and welcome. Good morning, good afternoon, or good evening, depending on where you're joining us from. My name is Juanita Olguin, and I lead product marketing for our service and knowledge division here at Coveo. I'm really excited about our time with you today, especially since we have two of our very own product managers joining the presentation, David and Mathieu, who you'll hear from shortly. Now, we have a very full agenda today, but before we jump in, I wanted to cover a few housekeeping items. First, this webinar is being recorded, and you'll get access to it within the next twenty four to forty eight hours. Second, we have a large team here on the call helping us moderate the chat as well as the Q&A. So if you have questions, or you just want to engage and share a thought during the presentation, please feel free to add your comments and submit your questions; our team will be moderating, and we've set aside time at the end of the presentation to go through some of them. Of course, we'll be talking about some current as well as future-state capabilities, so please refer to the latest publicly available information for any buying decisions. Now, we wanted to start today's presentation by thanking you: our customers, our partners, our advocates, our stakeholders. We really want to thank you for continuing to put your trust in us and staying committed to everything we're doing, and we've been able to achieve some impressive things as a result. First, we've been named a Leader in the recent Gartner Magic Quadrant for search and product discovery. We were also recognized as an AI search innovation award winner.
And you can see we've established a couple of great partnerships and certifications as well, including an integration with Genesys and one with Optimizely, for those of you marketers out there. We were also recently MACH Alliance certified, which demonstrates our commitment to composable and MACH standards. And last but not least, we have our ongoing relationship with Salesforce, an expanded partnership with Data Cloud integrations to bring you best-in-class digital experiences. So a lot has happened in the last six months. In addition, we've deployed over thirty live implementations of generative answering. You'll see some of our customer logos here; maybe some of you are on the call, and if you are, thank you. Look at these results: they're quite impressive. We've been able to help companies like Xero, SAP, Forcepoint, and F5 significantly improve the digital experiences of their end customers, including improving self-service success rates by twenty percent and more. If you joined us for Relevance 360 last week, you would have heard from SAP that not only did we help them reduce case submissions by thirty percent, we also helped decrease their cost to serve, with an eight-million-euro reduction in annual cost to serve, which is really impressive. So there's a lot to be excited and proud about. And while others in the market are still trying to figure out how to make generative AI work and how to implement it properly, we're now on the next iteration of our Relevance Generative Answering product. Those customers of ours already using this capability would have received an email from us about our fall updates, where we made a few underlying improvements to further enhance it.
We have a new underlying semantic encoder, an improved prompt structure, and a new model, which altogether have led to an average increase in answer rate of fifteen percent. Now, you may be wondering: why answer rate? Well, I just showed you some great results on the prior slide. We want to talk about answer rate because, obviously, we want to ensure the technology is working for your end users, giving them an answer and helping them achieve their goals. But beyond that, we want to talk about business outcomes as well, and you just saw what we've been able to deliver for our customers across industries and enterprises. Now, we do want to talk about CX, because this is our New at Coveo for CX edition. The reason this technology, or any technology, is important is that it's meant to bring great experiences to your end users. Those of you using generative answering know the potential and the impact it has, but I'm sure you also know what a bad experience looks like. And as you can see here, three point seven trillion dollars of revenue is at risk globally due to bad customer experiences. Of course, we don't want you to be in that bucket, and we don't want to be in it either; this is why we try to bring you the latest innovations to keep you ahead of the curve. Now, I feel it's important to unpack CX a little as well. For some people, it's either a pre-purchase or a post-purchase conversation. The word customer is right there in CX, customer experience, so it's important to understand, when we talk about CX, which components or elements we're really alluding to. If you're on the service side, service might be CX in itself, very much an agent view. If you're on the marketing or digital experience side, you'll have a different perspective.
The fact of the matter is that CX really does, and should, represent the entire customer journey, including employees, who are internal customers too. So today we're going to talk about CX from the perspective of using AI search and generative answering to increase customer self-service success while reducing the burden on the contact center. It's important for us to call this out because, again, different teams own different parts of CX depending on your digital maturity, but also on your industry. So when you hear us talk about self-service, we're really referring to customer self-service sites; it could be someone on your marketing team collaborating with your service department to manage this, or it could be your entire support organization owning it. I just want to set that context so that when you hear from my colleagues, you get a sense of what we'll be covering today. Now, getting to the meat of our presentation, I'd love for you all to take a look at our latest innovations for the fall. For those of you not familiar with New at Coveo, this is something we do twice a year to catch you up on all of the innovations we're releasing on an ongoing basis. You may have heard us talk about being a subscription to innovation: we release constantly and roll these things out all the time, but we know that not everyone implements them right away. The point of this is to catch you up so you can see what we have to offer, get support from our teams, and get any additional help you might need in securing buy-in across your organization. There's a lot here, and we're not going to cover all of these items, so I'll highlight just the components we'll be covering.
You can see the innovations on the generative answering side, our non-generative AI models, a lot of activity in the integration space, improvements to our search and UI experience, and, of course, the back-end admin and security of our platform. With that, I'm happy to pass it over to Mathieu, who is going to take us right into it. Awesome. Thank you, Juanita, and thanks again, everyone, for joining us. I'm quite delighted to be here to share some of the innovations and features we've been working on over the last few months, as well as some of the innovation that's coming in the months ahead. I'll begin with the Coveo Relevance Generative Answering (CRGA) product, and the end-user experience specifically. But before I dive into the innovations and features we've been working on, I want to highlight how our clients have been deploying this over the last year, since our solution was released. We've seen clients deploy it within their SaaS products, to essentially join their clients where they are on a daily basis and provide answers and solutions directly in-product. We've seen clients deploy CRGA within self-service support sites and communities, but also in employee portals, to help both clients and employees get answers and be more efficient. We've seen our CRGA solution deployed in the case submission flow, to help move from a case creation flow to a case resolution flow, where a resolution can be provided by a large language model so the case never needs to be created; we'll get back to that one shortly. We've also seen clients deploy our generative answering solution within chatbots, with our own turnkey solution, and we're now also making it possible to use your own LLM in those types of use cases. We'll touch on that a little further on as well.
And finally, we've seen clients deploy our solution within service apps and agent desktops, to help support agents resolve cases more efficiently. We've even seen instances of agents reusing the answers we provide and copy-pasting them into emails, adding their own flavor, of course; the answers provided are accurate enough that they enhance case resolution and make the agent's work easier as a whole. So let's begin with one of the latest additions to the journey we just looked at: allowing CRGA to be deployed in case submission workflows. Being available in the case submission flow lets us provide an answer during the submission process, helping the user self-serve and effectively deflecting the case. Instead of links and results only, end users now get a secured, relevant, generated answer to resolve their issue, again moving from the idea of a case submission flow to a case resolution flow. These implementations can be done with our professional services or with partners, and we invite you to partner with them, although we also have technical documentation on deploying in this type of environment. As this has been bringing tremendous value to our clients, I wanted to show a quick demo of what the solution looks like in real life. Juanita, maybe you can hit the play button, because I'm not sure I can actually start the video on my end. Perfect, thank you. So, the customer journey in support portals typically starts with a full search. You might be familiar with our solution here: we provide generated answers on a self-service or support portal so clients can get answers and resolutions to their problems. You can see things like the show-more and collapsible options.
But, unfortunately, what we've seen and been hearing from clients is that users will sometimes head directly to the case form, as we can see here, and never consume the search results. Having this in the case form lets us take advantage of our case classification model to classify the case, so if it does get submitted, it ends up with the right agent; but we also use those classifications to provide a more accurate answer based on all the information the user provided, essentially offering an end-to-end resolution to the user's problem and avoiding the case being opened altogether. This has really allowed us to plug the bottom of the funnel, because, again, we've been hearing from clients that you can lead a horse to water, but you can't make it drink: users sometimes just skip the self-service portal and head directly to the case form, and we weren't able to provide them with a resolution in that situation. Now, another thing we've been hearing from clients is this idea of conversational AI, of being able to talk and follow up with an LLM. We've also heard from clients testing this that an open-ended solution sometimes leads to a bit of a dead end. As such, we're working on a beta of follow-up questions, which will assist users by suggesting the next logical steps, allowing them to search and explore topics further. These will be LLM-generated questions, relevant to the initial query, ensuring they lead to the proper answer and the proper outcome. The idea, of course, is to maximize self-service, increase answer rate, deflect cases, and enhance overall user satisfaction. We've also made rich formatting available in our answers.
This comes in quite handy for our clients who have code, tables, or other structured information in their content. Rich formatting is now automatically included in the generated answer, and it will display in various formats based on the best way to present the answer to the question that was asked. We've also just released multilingual support, which is quite exciting. We have a lot of clients working in many places across the world, large international enterprises that need to serve their clients in multiple languages. So we now support multilingual answering and will provide the answer in the local language of the user; it does require the content to be available in those respective languages in the first place. We started with a beta a few weeks ago for French, and we're now very happy to announce that we're expanding the beta to the other languages you can see here on screen. We're quite excited about this, and so are some of our large enterprise clients, so you're invited to go try it as well. And finally, for my part on the end-user experience with CRGA, we have a proof of concept that we're looking to productize in the near future, perhaps early twenty twenty-five, where we'll be able to provide personalized answers based on contextual data from business applications. The idea is to connect search queries with structured data coming from your business systems to provide contextual knowledge: the knowledge itself, joined with context about the user, to deliver a more personalized answer.
And providing this business context can, of course, give users a more personalized experience as a whole and a faster resolution, leading them to self-service and a better outcome. Now, with all those great answers, clients can provide feedback, and it's important for our clients to be able to measure and tune answer performance over time. So I'll pass it over to David now to talk about the admin experience. Over to you, David. Thanks, Mathieu. I'll share my screen on my end so I can jump into a quick demo afterwards. Hopefully everyone can see it; if there's any issue, please let us know in the chat. But hello to everyone from my end. As Mathieu and Juanita very well introduced, the last few months, and really the full year, have been heavily invested in end-user experiences, but we've been investing in admin experiences as well. As our end-user experience has progressed, our customers have been asking to do much more with generative answering. And so, with that, we're extremely proud and excited to introduce the Knowledge Hub. The Knowledge Hub is part of our vision and mission at Coveo to segment and provide a more tailored experience for the needs behind our products. We think of knowledge in terms of two sides. One side is documents, which many of you are probably aware of and already using; the other is answers, which can be generated answers or non-generated answers like our smart snippet solution. With the need to better understand generative solutions, our mission right now is to provide more control within the Knowledge Hub, to allow you to better troubleshoot, better report, evaluate, and go deeper into how you want to set up your generative solution.
At the moment, as some of you may have seen in the market, or even with our own generative solution, it's a bit of a black box. What we're trying to do is open up that box, slowly but surely, to make sure the answers being generated are the ones you're looking for and provide the most impact to our customers. Of course, our customers are already getting a lot of value from the current generative solution, but we believe that by opening it up and providing more control, it will be a much more tailored experience. With that, I'm going to jump into a quick demo, and hopefully I won't be hit by the curse of the live demo. Here I am in our administration console. For our customers who are familiar with it, and for those who are not, this is the administration console where most of our products and features live. To start using the Knowledge Hub, all you need to do now is go and access Knowledge Hub, which is something brand new that we did not have before. From there, it's quite a simple approach: you'll already be able to start collecting evaluations and using some control tools. Leveraging our out-of-the-box builder experiences, you can take the little configuration we've created for you, go into the specific search page you've built completely out of the box, and double-click on it, which leads you to our builders. From there, all you need to do is add the configuration and do a quick save, and you can already start generating answers. As an example, I can use a query we've preset, which is: what is a bandsaw blade? Our demo model here is tailored around woodworking content.
And so it will tell you what a bandsaw blade is, which is fantastic. From there, one of the big requests we got from our customers is to be able to better evaluate the quality of the answers coming from CRGA. So if I type that query again in the actual search page, you'll get the generated answer, but you'll also be able to provide feedback, and that's one of the key outcomes here. If I do a quick thumbs up, I can say: hey, this answer about bandsaw blades is awesome. I can mark that everything is right, the answer is readable, it's pretty clear to me, and add a quick note that this is awesome because we believe it's awesome. From there, I can send the feedback away. And if I navigate back to the Knowledge Hub, with a quick refresh I'll be able to see the query I made, what is a bandsaw blade, the answer that was generated, the note I provided, and whether the answer was output or not. That's the first step in understanding whether the generated answers are good or not, and it can be filled up quickly by your SMEs as they test on the ground, or by your end users as they find answers and want to provide feedback. What's very important is that all this data is collected into the product, which then also helps us improve the quality of the model on the back end; the more data we collect, the more we'll be able to improve the models moving forward. But we're not stopping there. As I said earlier, we're also trying to provide more control with the Knowledge Hub. From there, I can go and create a rule. Right now we only have one rule type in place; this is in closed beta with just one rule in order to test the sequences.
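As a rough mental model of the feedback loop just demoed, one record per answer with a thumbs rating, reasons, and a free-text note, here is a minimal sketch in Python. The field names and the helpfulness metric are illustrative assumptions for this sketch, not Coveo's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AnswerFeedback:
    query: str
    answer_shown: bool                     # was an answer generated/output?
    helpful: Optional[bool] = None         # thumbs up/down; None = not rated
    reasons: List[str] = field(default_factory=list)  # e.g. "accurate", "readable"
    note: str = ""                         # free-text note from the reviewer

def helpfulness_rate(records: List[AnswerFeedback]) -> float:
    """Share of rated answers that received a thumbs up."""
    rated = [r for r in records if r.helpful is not None]
    return sum(r.helpful for r in rated) / len(rated) if rated else 0.0

records = [
    AnswerFeedback("what is a bandsaw blade", True, helpful=True,
                   reasons=["accurate", "readable"], note="this is awesome"),
    AnswerFeedback("how do I tension a blade", True, helpful=False,
                   reasons=["incomplete"]),
    AnswerFeedback("router bit sizes", False),  # no answer shown, never rated
]
```

Aggregating records like these is what lets SMEs and end users feed a quality signal back into the model over time.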
But let's say I don't want the model to answer anything about bandsaw blades; I don't know, bandsaw blades are dangerous. You can imagine that in a more contextualized setting, you don't want information about your competitors showing up, or there are sensitive topics you don't want surfacing. You'll be able to control all of that with blocking rules. From here, I can say: block bandsaw. It can be a word or a phrase; in this case, it's a basic contains condition, and I want to block the word bandsaw. I click save, navigate back to the same query on the same page I had before, do a quick refresh, and instantly the answer is no longer generated. At this point, we've given you a quick way to control the output of the model, which has always been a challenge before, as it used to be hidden in our tooling and in specificities within the Coveo toolset. Now you can do it out of the box, quickly, without the help of any developers. That's the first part of the control, and we're looking to add a lot more as we move forward. The next part: imagine the answer that's been generated is not a good one, and it's hallucinated. It's difficult to understand why it hallucinated and what's happening there. One of the tools we've been working on, which we launched internally first and are now looking to expose in an open beta by mid Q4, I believe, is a tool we call the chunk inspector. The view you're seeing now is our internal prototype; the one exposed in the product will have a much nicer UI. For this demo, it's running on our Coveo docs search, so it's collecting data from Coveo docs directly.
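Conceptually, the blocking rule just demoed is a guard evaluated before generation: if a contains condition matches the query, no answer is produced at all. Here is a minimal sketch under that assumption; the rule structure and function names are invented for illustration and are not Coveo's API:

```python
from typing import Callable, Dict, List, Optional

# One rule, matching the demo: a basic "contains" condition on the query.
blocking_rules: List[Dict[str, str]] = [
    {"type": "contains", "value": "bandsaw"},
]

def answer_is_allowed(query: str, rules: List[Dict[str, str]]) -> bool:
    """Return False as soon as any blocking rule matches the query."""
    q = query.lower()
    for rule in rules:
        if rule["type"] == "contains" and rule["value"].lower() in q:
            return False
    return True

def generate_answer(query: str, generate: Callable[[str], str]) -> Optional[str]:
    """Run the guard first; only call the model when no rule blocks it."""
    if not answer_is_allowed(query, blocking_rules):
        return None  # suppressed: no generated answer is shown to the user
    return generate(query)
```

The point of running the check before generation, rather than filtering afterwards, is that a blocked topic never even reaches the model.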
I now have the ability to dig deeper, first of all per document: how is the document that's being used and cited by the model actually split? If I go into models, I can select my generative model, and by using the item unique ID, which is easily identifiable within the Coveo platform using our relevance inspector, I can fetch the document. It takes a few seconds to load, and then it instantly splits the document into chunks: it tells you how many chunks the document is in, and you can literally read the chunks themselves quite clearly. So if you're unsure how our models work by chunking documents in our second-stage retrieval, this is how you can see it. But the next phase, and I think the most interesting part of the tool, the part that will help you go deeper, is the per-query section. Here, by using a search ID, which is specific to each search done within your search UI and, again, easy to find within the Coveo platform, and clicking a quick fetch, it will tell you which chunks were used for the specific generated answer. It shows the text itself and the score, the actual similarity score used by the model, ranked from highest to lowest. It tells you which chunks were cited, with this little check mark, and whether each one was sent to the LLM. If it was cited, it means it was actually used: it appeared in the references at the bottom of the answer. From there, you can go deeper and read the chunks themselves. So in a case where an answer is hallucinated, for example, you can easily go in here, type the search ID, and start digging deeper.
You'll be able to see the chunk count for the specific generated answer, the document count, how many documents were used, the chunks that were cited, and the chunks that passed the threshold. It's a very quick and easy way to go deeper into this. Right now it's still at a fairly high level, and as we move forward we're looking to go deeper and provide more specificity along this troubleshooting path. But this is already leaps and bounds ahead of where we were at the start of the year, when this black box was really a big dark hole, I'm going to call it. Slowly but surely, we're starting to see the light, and it's something we're extremely excited about that you can hopefully get your hands on by mid Q4 this year. I'll go back to the presentation and, with that, pass it along to, I believe, Mathieu, for a bit more about AI models. Yes, thank you, David, and thanks for the great demo. So, now that we've covered the admin experience, I want to talk about some other improvements we've made to AI models beyond generative answering. The first one is quite exciting, and frankly, it's something we've had a lot of requests for from clients. You may have heard the phrases bring your own LLM, or chunk API; we've settled on the name Passage Retrieval API. What we're essentially doing is allowing our clients to build their own retrieval-augmented generation systems using our best-of-breed AI search platform, which already comes with all of the enterprise-grade security and connectivity. You can retrieve the best passages, ready to be used by your own LLM in your own application, so that those large language model use cases are grounded in actual facts, with the best knowledge available.
What we see on screen is essentially what Coveo already has to offer: our secured connectivity and panoply of connectors; our unified hybrid index, which does both lexical and semantic retrieval; and then how retrieval is augmented using search and business rules, lexical relevance, semantic relevance, and embeddings, but also by our behavioral AI models, which understand user behavior and use those signals to enhance the results and the passages that are delivered. All of that is then made available via an API called the Passage Retrieval API, so you can take those text chunks and send them to your own LLM in your own LLM-powered application, covering a wide array of use cases beyond the ones we've highlighted earlier. We have clients talking about using this to generate content, for example, or as part of a chatbot integrated with different automation workflows. So you can have a trusted source of knowledge, a ground truth, essentially, that you can fetch information from using the entire Coveo infrastructure and everything we have to offer. I'm quite excited about this one. Along the same lines, in order to serve large enterprises, we've also increased our vector search capacity. We already have clients with indexes of a hundred million documents and more, where we have a mix of lexical and some semantic retrieval. When it comes to full vector search, we've been enhancing our capabilities to do passage retrieval on up to five million documents. For enterprises with large volumes of data and multilingual content, this will be of great use, and will help improve the quality and accuracy of generated answers by tapping into that content.
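To make the bring-your-own-LLM pattern concrete, here is a minimal sketch of what a consumer of a passage retrieval API might do: keep the best-scoring passages and splice them into a grounded prompt for its own LLM. The response fields (`text`, `score`), the threshold, and the prompt format are assumptions for illustration; consult the actual API reference for the real field names:

```python
from typing import Dict, List

def build_grounded_prompt(question: str, passages: List[Dict],
                          score_threshold: float = 0.5,
                          max_passages: int = 5) -> str:
    """Keep the best-scoring passages and splice them into an LLM prompt."""
    kept = sorted(
        (p for p in passages if p["score"] >= score_threshold),
        key=lambda p: p["score"], reverse=True,
    )[:max_passages]
    context = "\n\n".join(f"[{i + 1}] {p['text']}" for i, p in enumerate(kept))
    return (
        "Answer the question using ONLY the numbered passages below, "
        "and cite the passage numbers you used.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

# Hypothetical retrieval response: text plus a similarity score per passage.
passages = [
    {"text": "A bandsaw blade is a continuous loop of toothed metal.", "score": 0.92},
    {"text": "Blade tension affects cut quality.", "score": 0.61},
    {"text": "Unrelated marketing copy.", "score": 0.12},  # below threshold
]
prompt = build_grounded_prompt("What is a bandsaw blade?", passages)
```

From there, the returned prompt would be sent to whatever LLM you operate, keeping the model grounded in the retrieved passages rather than in its own parametric memory.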
So we're essentially supporting more documents and more text chunks while keeping the performance you usually expect from Coveo, with queries coming back in fractions of a second. In terms of integrations, we've also expanded our ecosystem. I'll start with integrations and then pass it to David to continue, because we have a few things to cover here. The first one is quite exciting: we now have an integration called Coveo for Salesforce Data Cloud, where we can essentially maximize Salesforce Data Cloud by bringing Coveo data directly into it. It will allow you to leverage data and insights from the Coveo index to improve user experiences across your different Salesforce clouds, and even bring Coveo usage analytics in as structured data, inside Data Cloud, to be used across Einstein, Service Cloud, Commerce Cloud, Experience Cloud, all the different pieces of the Salesforce ecosystem. That ensures a unified experience, letting you utilize our connectivity, as well as the insights we gather with usage analytics, across different use cases in the Salesforce ecosystem. I'll pass it to David now to continue with some of the great integrations we're working on. Over to you, David. Thanks, Mathieu. Indeed, as we've been investing more in our integrations, one of the key asks we've had from customers in the past is to expand our ServiceNow capabilities, and more importantly, our integration using the Atomic library. In the past, for those of you who are not aware, our ServiceNow integration was mainly built on our JSUI library, our older UI framework.
For the last couple of quarters, we've been investing in moving this integration to our Atomic library, which allows it to fully leverage the builder-based integrations we've been working on since last year. This means you're now fully capable of setting up a hosted search page or a hosted Insight Panel completely out of the box within ServiceNow. This is expected to be available by mid Q4 of this year, so if it's something you're interested in, please don't hesitate to reach out to us, and we'll be happy to share more about how you could leverage this specific integration. Additionally, we've also invested heavily in new Optimizely connectivity options. Optimizely is a content management system and digital experience platform, as some of you may know. For any of our customers on Optimizely, you can now very quickly and easily have it work with our Coveo integrations: you can use the GraphQL API or our other universal connectors to easily set this up and have your Optimizely content flow into your Coveo setup. At the end of the day, this helps improve content findability while leveraging cutting-edge AI, and essentially helps discoverability and answers. Again, if this is something you're interested in, please reach out to us, and we'll share more specifically on the Optimizely integrations and connectivity. And that's it for our integrations. I say that as if it's a small amount, but it's quite a lot, actually, considering all that's been going on. One of the other key areas of investment has been our search UI, and I just hinted at the new Atomic library.
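As a sketch of what a universal GraphQL connector flow can look like, fetch content with a GraphQL query, then map the payload onto generic index documents. The query shape and every field name below are made up for illustration and will differ from the real Optimizely and Coveo schemas:

```python
from typing import Dict, List

# Illustrative query: field names are invented, not a real Optimizely schema.
CONTENT_QUERY = """
query {
  pages { title url body modifiedAt }
}
"""

def to_index_documents(graphql_response: Dict) -> List[Dict]:
    """Map a GraphQL content payload onto generic search-index documents."""
    docs = []
    for page in graphql_response["data"]["pages"]:
        docs.append({
            "documentId": page["url"],  # a stable URL works as a document ID
            "title": page["title"],
            "body": page["body"],
            "date": page["modifiedAt"],
        })
    return docs

# A canned response standing in for the result of running CONTENT_QUERY.
sample = {"data": {"pages": [
    {"title": "Returns policy", "url": "https://example.com/returns",
     "body": "Items can be returned within 30 days.", "modifiedAt": "2024-09-01"},
]}}
docs = to_index_documents(sample)
```

The value of a connector like this is exactly this mapping step: once content is shaped into index documents, the same search and answering pipeline works regardless of which CMS it came from.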
As we've been investing in the Atomic library for our ServiceNow integration, we've also been furthering our overall UI libraries. As of this September, our UI libraries have all moved to a new major version, v3, covering Headless, Atomic, and Quantic, with Quantic being specific to our Salesforce integration. This gives you access to the latest and greatest Coveo has to offer, completely out of the box, with the latest innovations we provide, without having to push too many updates on your side. So leveraging those UI libraries is the fastest and easiest way to get access to our innovation. Within our documentation, you'll now find a quick, easy guide on how to move from your v2 setup to your v3 setup, if that's something you're interested in. With that, I'll pass it along to Juanita for the admin and security piece. Thanks, David. Alright, I have a few things to cover here with you to close this out. The first is our Projects feature within the admin console. This is something we actually introduced in the spring of this year, very much in beta. The idea behind Projects is to provide an easy way for you to set up your search and AI projects. There's a lot that goes into setting up these great digital experiences. As end users, we see an amazing experience, but on the back end, there are a few different elements and components that need to be set up. You can see in the screenshot the items under Resources: these are all the different components and capabilities you would need to set up, which go into creating those flawless digital experiences. Our team is now ready to GA this at the end of this year. Hopefully, you've been trying it out yourself.
If you have not, you can dive right in. Beyond making it easier to set up projects, the idea was also to support your different departmental teams, each with their own search hubs and UI experiences they're trying to set up, really allowing for more departmental ownership, where they can own and manage these things more easily by themselves. We've also heard from our customers the need for enhanced security and more control over the security that is provided. Because of that, we are providing bring your own key, which is currently in beta. The idea is to give enterprises, especially those that are more highly regulated, more control over their data and encryption keys. You'll be able to revoke access to your indexed data at rest and stay compliant with different regulations, while also ensuring you get access to the latest innovations we have to offer. If you're interested here, our product teams are keen to get customer feedback, and we can point you to the right team so you can learn more. And lastly, as you know, we take security seriously; we've mentioned it probably ten times throughout today's presentation. Our latest certification is ISO 27018:2019. This is just one of many certifications and attestations we have here at Coveo. Again, we take this seriously. We respect your access permissions and your access control lists. Our security teams are always keeping an eye on this to make sure we're compliant and offer the latest and most secure certifications available out there. If you have any questions here, please let us know, and we'll be happy to connect you to the right teams internally. Now, we're nearing the end here.
So if there are no questions yet, please get your questions ready; we will open it up for some Q&A time shortly. First, I wanted to share a few upcoming events you can tune into. The first is our New in Coveo for Commerce session, happening on Thursday; you can see our amazing commerce team that will be presenting. Following that, next week, is our New in Coveo for EX session, where we'll dive a little deeper into the integrations and cover some of the items we did not cover during today's session. We also have two other special sessions I wanted to highlight. The first, with a tentative date, is a special session we're planning with Salesforce, to dive a little deeper into our extended partnership, speak more about Data Cloud, and cover everything we're working on in collaboration together. It's tentatively scheduled for the seventeenth; we'll keep you up to date on that one. And if you joined us last week, you've heard about our master class with Blake Morgan coming up on November fourteenth. We'd love for you to join us at that session as well. Lastly, there are a couple of places we'd love to point you to. For our customers, we have our Level Up community, our learning site where you can come up to speed on all these different capabilities. We also have an updated New in Coveo page on our dot-com, where you can see a lot more about all of the things we did not cover today, as well as those we did. With that, I am going to open it up to questions, and Mathieu and David, I welcome you to join me for this one. Absolutely. I'm just taking a quick look; I see a lot of positive reactions to multilingual support. And I see the first question.
When are we going to get something similar to a RAGAS evaluation of LLM responses? That's something I can bring to our ML scientists. I'm not exactly sure of the answer for that one; it's probably something they have considered, but I do not know a date for sure. Great question. And maybe while we wait for other questions, anything you're both most excited about? Go ahead, David. Yep, I can continue. So the team I'm working with is solely and extremely focused on our admin experiences. As we move forward and continue progressing our road map and opening, as I was saying, that black box, I'm excited to see the impact this will bring. We're looking to improve that overall experience for all of our users, which is something we've heard a lot about and really want to improve. We really want the Coveo platform to become the space you want to go to and use. So as the end user experience matures and gets great, I'm really looking forward to the admin experience being as great or even better, if that's even possible. I don't know if it is, but that's our mission: to make this an easy experience for you, and ultimately, being able to trust the output of those generative models is the fight we're fighting. It's a fun one, a challenging one, but at the same time an extremely rewarding one when we get there. So hopefully, in the next few quarters, we'll be able to come back in these New in Coveo sessions and show even more of those updates as we get there. My levels of excitement there are pretty high. It's a tough journey, but we're going to get there; I believe that. Before I share my part, perhaps we can answer two questions we got in the chat, and afterwards I can share what I think is the most exciting.
The first one might be a good one for you, David. The question is: is there a sunset date for JS UI support? Good question. Right now, there is no sunset date per se. JS UI is still in maintenance mode on our side, so we're not looking to innovate on the JS UI library at all. It is currently being maintained by our professional services team, who are actively working on it. At the moment, any net new implementations we receive are on our Headless, Atomic, or Quantic libraries, and that is the path forward. In terms of exact dates for sunsetting JS UI, there are none so far. But if there is one, we'll make sure to communicate it clearly as we get there, and we'll leave plenty of time if there is a need to migrate. For now, no concerns there. So I would say, if you're on JS UI, stick to JS UI. Or if you do want to move to our Headless, Atomic, or Quantic libraries, we'd love that too, because it gives you access to more innovations, but that's completely up to you. And David, if I can pick up there: is the reason we're staying in maintenance mode that we know our customers have a lot of investment in JS UI, and maybe customized sites? Correct, that's the main reason. JS UI was the only UI library at Coveo for a long time, and now that we have quite a few, I know a lot of investment has been put in there. We've been discussing with some of those customers already. But if we do end up having to force a path to the new UI libraries, we'll definitely be here supporting you and will provide an efficient migration path. Again, as I mentioned, right now there is no expectation on our part for any customer to move.
So again, if you're on JS UI and you're comfortable there, stay there until we communicate more precisely. There is no concern there, but please do not expect any innovation on the JS UI front; that's one thing I will say. I have one more follow-up for you, sorry. Could a customer use both? Let's say they have certain sites built on JS UI, but they want to stand up a new use case, a new site, with the latest and greatest Atomic or Headless. Could they have both? Absolutely. Say you have a search page or a community search page right now that's on JS UI, and you've been very happy with it, but you're also looking to add an agent insight panel within Salesforce for your customers. You can absolutely leverage our builder-based solution, which is completely based on Atomic and Quantic, or you can decide to build a fully custom Quantic insight panel if that's what you're looking for. They're completely unrelated. And leveraging the Projects feature Juanita just shared, you can then split this up in the Coveo platform and have a better structure to understand which is which. So yes, you can absolutely have both; we already have customers running multiple UI frameworks. However, if you're looking for innovation, I will say start thinking about potentially moving to those Headless, Atomic, or Quantic UI libraries. I can answer the other question here, a very interesting one: are the manual evaluations of generative answers used to fine-tune the model for specific customers? We don't fine-tune our large language models, but we do use that information and that behavioral data to enhance retrieval, and essentially enhance the passages that will be retrieved to power the generated answers.
So yes and no: we do use the usage analytics data to improve the passages and results used in an answer, but we don't actually fine-tune the LLM itself. If you'd like to use a fine-tuned model, and to train a model on your end, you could try our Passage Retrieval API, where we'll provide the grounding, the passages, the source of truth, to a model that has been fine-tuned to speak, behave, and interact in a way you see fit for your industry or your specific domain. Great question. I see we have one in the Q&A as well. It might need a little extra context, but maybe you'll get it, Mathieu and David: how should we be watching outputs through human oversight? That's a good question, if I understand it correctly. Right now, in terms of tooling, and the ability to make sure human oversight is under control and the right answers are there, we don't provide much tooling. When an answer gets generated and the evaluations go through, it's on a case-by-case basis, where most of the time the subject matter experts will go in and provide their two cents on that specific answer. And that's the reality of all generative solutions; it's not just at Coveo, this is the market situation right now. However, as we move forward, as we collect more information and have a better understanding of what's right and what's wrong, we'll be able to provide more and more tooling around that, where human oversight will be less and less needed. However, I'd be interested to hear a bit more about what exactly your concern is, or what more you'd like to know, so I can go into specific cases.
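The bring-your-own-model flow described above, fetching grounding passages and handing them to a model you trained yourself, can be sketched roughly as follows. The endpoint URL, request field names, and prompt format are assumptions made for illustration; consult Coveo's API reference for the actual Passage Retrieval API contract.

```python
import json

# Placeholder endpoint for illustration; not the real API URL.
PASSAGE_ENDPOINT = "https://platform.example.com/rest/search/v3/passages/retrieve"

def build_passage_request(query: str, search_hub: str, max_passages: int = 5) -> dict:
    """Assemble a hypothetical retrieval request; the passages it returns
    would ground whichever LLM you fine-tuned on your end."""
    return {
        "query": query,
        "searchHub": search_hub,
        "maxPassages": max_passages,
    }

def to_prompt(passages: list[str], question: str) -> str:
    """Stuff the retrieved passages into a grounded prompt for your own model."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Answer using only these passages:\n{context}\n\nQuestion: {question}"

# Build the JSON body you would POST (with an Authorization header);
# the HTTP call itself is omitted from this sketch.
body = json.dumps(build_passage_request("How do I reset my password?", "SupportHub"))
```

The key point from the answer above is the division of labor: retrieval supplies the same source-of-truth passages every time, while the generation step, yours or Coveo's, only phrases them.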
What I will say is that as part of our admin tooling, we're definitely looking to go in that direction: a more outcome-based solution where, if an answer is generated, what do I do with it? If it's wrong, how can I improve it? That idea of an outcome-based approach is what we're going towards, and we'll continue to go towards it. We also have bulk tools to test queries in bulk and see what answers come out. So without having to go into your specific implementation and test queries manually, we provide a way to get answers in bulk, which you can then give to internal testers or subject matter experts to evaluate. There is, however, some variance: you'll get mostly the same answer when you retry, but there's always a slight variance in the exact way the answer is generated, in the exact words. That's just a reality of large language models today. The way the algorithms are built, there's always some variance, so you're never going to get exactly the same answer with the exact same words. But because the answers are powered by passages we retrieve, the passages are always the same. So the ground truth, and what the answer essentially says, will always be the same; you'll just get some slight variance in the format. I see another one for you, Mathieu, potentially: you mentioned some RAG pipeline improvements. What is that specifically? Great question. Two things. We're constantly improving how we use our usage analytics data to further enhance, again using those behavioral models, the passages that are retrieved. That's one thing. Another thing we're exploring is the way we do chunking: the way we chunk and split documents into multiple passages.
So we're looking at different methodologies out there. Our PhDs and our experts on the natural language team are looking thoroughly at the best ways to make sure specific chunks carry the context of the entire document they're within. This is at the leading edge of the technology right now. It's evolving very rapidly, and there are constantly new studies and new work coming out. So we're staying at the forefront and making sure we adjust our technology and our solution to follow the best and latest innovation, while also providing the best value to our clients. So, definitely some improvements coming in the pipeline there. We've got some good questions coming in from Vasil here. Yeah, I can probably continue on that one. It is also something we're exploring with our PhDs and our ML teams. I wish one of the other PMs on our team were here, because he's been exploring this and is very passionate about the subject. So yes, it is something we're exploring as well, and we're seeing how we can integrate it into our solution. Awesome. Well, I don't see too many coming in, so maybe I can finish with the one thing I am most excited about. I'm also very excited about what David shared about providing a better admin experience to manage and tune answers. Another thing I'm very excited about, however, that we looked at today, is personalized answers: essentially, being able to use the context of a specific user to make the answer more accurate.
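Document-aware chunking of the kind described above can be sketched simply. The window sizes, the word-based splitting, and the idea of prefixing each chunk with the document title to preserve whole-document context are illustrative choices for this sketch, not Coveo's actual chunking implementation.

```python
def chunk_with_context(title: str, text: str, size: int = 400, overlap: int = 80) -> list[str]:
    """Split a document into overlapping word windows, prefixing each chunk
    with the document title so every passage keeps some global context."""
    words = text.split()
    step = size - overlap  # windows overlap so sentences aren't cut off cold
    chunks = []
    for start in range(0, max(len(words), 1), step):
        window = words[start:start + size]
        if not window:
            break
        chunks.append(f"{title}: " + " ".join(window))
        if start + size >= len(words):
            break  # this window already reached the end of the document
    return chunks

# Example document of roughly 480 words, split into two overlapping chunks.
doc = "Reset your password from the account page. " * 60
chunks = chunk_with_context("Password FAQ", doc)
```

The overlap and the title prefix are two of the simpler tricks for helping a retrieved passage "understand" the document it came from; more elaborate methods summarize surrounding sections into each chunk.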
And we're evolving to use this not just in a use case like wanting to know how many vacation days I have left, but also in other use cases using other types of context: context about your session, details about the case you're opening, context about the chat thread you were having with a chatbot, context about the page you're on within a specific product. All this context is fed to us. The first way we're looking to exploit it is personalized answers, but it opens up a very wide opportunity to use this across many more use cases, to make answers more accurate and provide more advanced resolutions to more complex questions. So for me, that innovation is probably the most exciting, because it will open the door to lots of interesting use cases. And you're not just saying that because you're working on that one? A little bit, along with other things. But there's great innovation in the pipeline, so we'll obviously be quite keen to share more in the next few months as this comes to fruition. Awesome. Okay, we have more questions; this is good. What model is this built on? Who wants to take that? Well, this is a simple one: we're just about to move to GPT-4o. So we're on the latest version of GPT-3.5, and we're about to move to GPT-4o. We're also exploring other models, and for clients who wish to use another model, again, we've made the Passage Retrieval API available, so you can fetch the passages as ground truth and use your own model, whether it's Claude, Gemini, or another model out there, to generate answers. On our part: the latest GPT-3.5, with GPT-4o coming in the next few months. We're exploring others as well, but that's a further discussion. And, Mathieu, if you can share, because we do get that question a lot, more questions about why we're not on GPT-4o yet.
I think it's a big question, because there's a lot that goes into deciding on a change. Can you say more about that? Yes, absolutely. We were one of the first to deploy a generative answering solution to enterprises, so we had to go with the vendors that had the most protections and the best infrastructure, and Azure was really the provider of choice when we released our solution about a year ago. But when it comes to migrating to different model versions, they tend to have slightly different behavior. As such, we're doing extensive studies, because we are serving enterprises that need safe, secure, accurate, and consistent answers. We studied the different models extensively before releasing that part of our solution, so the solution can come with essentially a guarantee of quality. We really are very thorough about this. It's very important for us because, again, we serve large enterprises. This is why we always take the right steps when moving to a newer version, to ensure you can expect the same quality, the same security, and the same lack of hallucination you're used to in working with us. Thank you. I could go on about the process forever, but I'll leave that to our ML scientists; they're the best at this. No, that's awesome. Alright, thank you both so much. I hope you all enjoyed today's session. Please feel free to connect with us; you can find us on LinkedIn, and we're here to answer any questions. If you want to follow up, and you're a partner or customer, please reach out to your partner managers or your customer success managers; they will connect us as needed. Thank you for joining us today. Again, thank you for trusting us and helping us do great work every day. It's you, which is why we're here. So thank you; we appreciate you. Mathieu and David, thank you for joining me.
New in Coveo | CX | Fall 2024
Unlock the future of digital and customer experiences (CX) with our latest innovations. Explore how our new platform and generative answering enhancements and integrations improve agent efficiency and boost self-service success, all while delivering cohesive, multi-site digital experiences and reducing costs. Elevate your global service and digital CX with AI-driven innovations designed to meet the evolving needs of your customers and enterprise.

Juanita Olguin
Senior Director, Product Marketing, Coveo

David Atallah
Product Manager, Coveo

Mathieu Lavoie- Sabourin
Product Manager, Coveo