Hello, everyone. Thanks so much for joining us today for New in Coveo, our Service fall release. My name is Bonnie Chase, and I lead product marketing for our service line of business. And I'm joined today by Oscar Parey, who is our senior product manager. Now before we get started, a few housekeeping items. First, this call is being recorded, and you will receive a copy of the recording within twenty-four hours after the end of the presentation. And we will be taking questions today, so please feel free to drop questions in the Q&A portion at the bottom of your screen, and we'll take those at the end. Now before we get started, I do wanna show this disclaimer. We are showing some products today, so I wanted to show this disclaimer on the slide just to make sure that you're aware that some of the product information we're showing today will be roadmap items, future items, so please keep that in consideration. Now at Coveo, we provide more than a service solution. We offer solutions across many lines of business, including commerce, workplace, and website. And we are a composable AI search and generative experience platform. So that means we take all of your content and data and pull that together. We leverage AI models including behavioral machine learning, deep learning, LLMs, and now generative experiences. And we do this through API frameworks and native integrations, providing you with toolkits so that you can build your own as well. So getting started, let's go on to the next slide. You know, New in Coveo, the fall release, covers a lot of features, but for today we'll really focus in on the service capabilities. With Coveo, our service product vision is to ensure that we're delivering the best service experiences across the entire customer journey with data and AI.
So as you saw on the previous slide, with the composable search UI as well as the generative experiences, we wanna make sure that we're able to extend that across all of your touchpoints. Now for our vision and what we're going to be covering today: obviously, effortless customer experiences are first and foremost on top of our minds. But some exciting product things that we're going to be sharing with you today are around creating autonomous admins and leveraging no-code GenAI. So we really wanna make sure that as you're building these experiences for your customers, it's as easy as possible for you. I'm really excited to share some of these GenAI capabilities. So before we jump into the GenAI stuff, which I know is going to be really exciting for many of you, Oscar, could you take us through some of the recent improvements that we've made with the builder? Sure, Bonnie. And maybe a quick recap first. The builder is a capability that Coveo launched earlier this year, so it's already available. And what it does, really, is it's an interface builder that allows you to configure, deploy, and update experiences without the need for a developer. So it's an easy implementation, and you have all the pieces of Coveo in a no-code solution. And we're gonna show you all the updates that we've made, what you can leverage today, and what you're gonna be able to leverage at the end of this quarter. So what we're really excited about is the search page builder that we are launching in a few weeks. It will allow you to create a new search page from scratch, with all the controls that you already have through JSUI or through custom code, and you will be able to deploy and update based on feedback quite quickly from the admin console. So you can see here a screenshot of how you configure facets and relevance, but there are a lot of options.
So we're really excited about this new builder. There are also builders that already exist and can be leveraged; one of them is the In-Product Experience builder. It's available, we've gotten good traction with it, a lot of customers are trying it at the moment, and it's really easy to use. You can configure much more than just the colors; you can configure the relevance. And we've added additional features to this builder: secured search, contact support, the quick view, and generative answering as well, which I will talk about a little later on. So IPX is like a seamless integration with the builder. The other builder that we're really fond of, and that has been live for a couple of quarters now, is the hosted Insight Panel builder. That's how you configure the agent interface in your Salesforce console using Coveo. So it's been live for a couple of quarters, as I said, but we've made some really great improvements, such as the featured tag, the smart snippets, manual sorting, and all the result actions are now available. What's great with the hosted Insight Panel builder is that it has feature parity with the previous Insight Panel that we had in Salesforce. So we encourage customers to try it out, because they can find everything that they love in their current Insight Panel, but with the new security and new features that we've now leveraged. And a couple of additions to the hosted Insight Panel builder that were requested and that we've shipped recently are the viewed-by-customer tag — we know that's a really important feature for you guys, so it's available now — along with something that was also feedback from our user base, which was finding a way to have a version history of those builders.
So it's great that you can do no-code changes, but you wanna remain in control of them. So that has been released, the builder version history, and you can quickly go back to any version to make sure there are no misconfigurations that could have happened. And with those updates, I'll pass it back to Bonnie. That's very exciting, Oscar. And, you know, the builders really do make it easier for everyone to customize the experience in a very easy way, so I'm excited about that. I'm excited that we're going to make it just as easy for generative answering. And this is something that we've been working on throughout the year. We've had a beta program with many customers, and we're excited to be announcing the launch at the end of the year. So we're planning on going GA in December. But just a few words about Coveo Relevance Generative Answering: our main objective with this solution is to address the main concerns that people have around security, privacy, factuality, avoiding hallucinations, and things of that nature. But in addition to those technical needs, we wanted to make sure that this is something that you can embed throughout your entire experience. So something that you can add to your in-product experience, your community, your support portal, and your agent experience. So with that, I'll pass it over to Oscar to walk you through what we have. Yes. So we're really excited. We got a ton of great feedback from our beta customers, and I'm gonna show you what the component is gonna look like and what the feature is gonna look like by the end of the quarter. So as Bonnie said, Relevance Generative Answering allows your organization to answer complex questions with secure and relevant information. We do respect content permissions, and we are connected to your content and how often it's refreshed, so we keep that freshness of the content.
And we've set in place several mechanisms to limit hallucination as much as possible. So the component that we're gonna be delivering at GA will look like this. The key feature, as we said, is a grounded LLM, specifically for question answering tasks. We will support up to ten million vectors, so roughly a million documents that you can feed into the model. And the great thing about it is that you're gonna be able to combine multiple documents and sources to generate those answers. So not only does it look at knowledge base articles, you could also leverage PDFs and other types of documents to get richer answers. The other thing we wanted to point out on the component itself is that we've improved the citation feature from the beta to the GA, where you can see them on hover on the screen. So you can see which snippet or chunk of content was retrieved to generate that answer, to ensure it's factual and verifiable. The last element we've added is the reformulation options. You'll have three — step-by-step, bullet list, and summarize — to generate a new answer and refine your search. There is also a toggle at the top and a copy-paste button, so little extra features we got feedback on from our beta customers, to make sure this component is really usable. Lastly, it will be available in all regions, and it will be HIPAA compliant as well. We want all our customers to benefit from it. And with that, we are really excited to share some preliminary results we've seen with Xero, one of our beta customers, which went live and ran an A/B test. And what's really interesting — I'll walk you through the numbers really quick, because I know a lot of people have questions on the results — the A/B split was not a hundred percent all the time or fifty-fifty from the beginning.
So that's why you see search sessions being slightly different. But what was really important to us was the self-service and the rate of case submission. There were more cases in the no-GenAI pipeline, mainly because of more search volume. But when you bring that down to the number of cases per thousand searches, you can see a big drop: twenty-one percent implicit case deflection. So we see a lot of value. We're confirming those numbers in the coming days, but that's a direct impact on your self-service experience. So generative answering has a positive effect. The other things that we noticed are more like engagement metrics, such as time on page or session duration, where we saw the time on page going down while the session overall lasted longer. So we see great engagement with the feature overall. And with that, as I said, we'll be confirming some of those numbers, and other beta partners are launching their A/B tests, so we're gonna get more and more numbers. But beyond the self-service and case deflection benefits, what we wanted to give you guys is something that's easy to implement and that you can leverage quickly, not something that's complicated and takes months to implement. So our objective is to give you something that you can create and deploy within, like, ninety minutes. And for that, if you've used our Coveo machine learning models, you know that we have an admin section with a UI and configuration flows, and we've done exactly the same for the generative answering model. So you will find that card in our machine learning section and go through a couple of steps where you can configure your model and see some summary stats, to check if the build matches your expectations and if you have all the documents that you expected in the model. So that's the configuration from the admin console, and it's really just a few steps.
Once you've done this, you have a model in your org, but you're not quite there yet to give it to your end users. With the power of the builders, though, you don't really need a developer to go in and edit your search interface, commit some code, deploy, etcetera. What you will be able to do is just go to one of our four builders and check the Relevance Generative Answering option on the left. So automatically, the front-end component pops into the page and is live the moment you hit the save button at the top right. And we think that's the power of working with Coveo for generative models: you get the end-to-end experience. But not only can you have it in a search page, you can also use this model configuration and builder configuration for community support portals, which is currently available for beta customers. And you could use this in an intranet or in a workplace scenario: same builders, same models, and it will answer just as well as on a search page. Continuing with the power of builders, on top of this generative model, you will be able to add it to your Salesforce console within the same few clicks. So at GA on December fifteenth, you will be able to have it for the full search experience of the agent, and quickly after, you'll have it in the Insight Panel. So, really, we aim to push the adoption of this feature so people can reap the benefits of it across all touchpoints in the service experience. And that goes along with IPX, where you can add the model and the components right away from our IPX builder. And with that, we are trying to combine both strategies: going really fast to deliver our customers the most value with generative answering, while accelerating its deployment through our hosted experiences and builders. So that's what we're gonna have at GA: generative answering plus builders, the whole package.
But we're already thinking, like, twenty twenty-four, twenty twenty-five. And I wanted to touch on the key themes that are close to us and to the product team. The first is relevance, accuracy, and factuality. In a similar fashion to this year, when we focused on limiting hallucination with generative answering and ensuring the content is fresh and grounded, this is gonna remain our main mission: make sure that everything that goes through a generative model is factual, is helpful, and provides value to the customer. The next big chunk of work is gonna be around conversational. We know customer expectations are changing, and for some issues, it's probably better to have that ability to dig into a specific topic and go into what we call multi-turn question answering. And we want to deliver that quickly: following generative answering, we wanna deliver conversational quickly to our customers. So Coveo will remain that domain expert, that trusted search assistant, across your interfaces. And lastly, it's a little bit more of a platform play, outside of service, but service will benefit from it: the ability to open up our platform and give you guys some tools to build on top of the index, our embeddings, and our models. So you have some control, and you can leverage the content that you've placed in Coveo along with the machine learning models to power some additional use cases that might fall outside of the Coveo scope. And with those themes, I wanted to show you a little glimpse of the future with conversational, because it's on top of a lot of customers' minds. One thing that's gonna be interesting for us is, even just the generated answer, we wanna make it even more engaging to consume. So imagine links, imagine pop-ups when you hover, imagine that generated answer supporting tables and code snippets.
So we wanna make the component really robust and really engaging, so it's enjoyable to see that answer and engage with it. But beyond that, we wanna get into the conversational aspect of things. You can see that at the bottom of my answer, I'm getting an ask-follow-up option, with some question suggestions as well, relevant to this topic. And the moment I engage in this follow-up experience, I get a specific answer that builds on the initial context. So we keep some elements in memory, and we help refine the question and the answer to deliver a better solution to the end user. That's gonna come pretty soon. We are testing, prototyping, and discussing with customers how to make this work, and we're really, really excited about this. As far as the roadmap and how it all pans out, you'll see the key elements without us spending too much time on them: you'll see how we're continuously improving semantic and lexical search into what we call hybrid search. We have the conversational aspect that I just mentioned, and the rich formatting. And the other topic of interest is down at the bottom, where we have embeddings as a service, allowing you to use the Coveo content, the embeddings, and the ranking for your own additional use cases. And with that, I'll pass it back to Bonnie. Awesome. Thank you, Oscar. It's very exciting stuff, especially with generative AI being such a hot topic today, and giving our customers the ability to launch it quickly, leverage the data they already have through Coveo, and take advantage of our AI and search capabilities. So we do have some time for questions. Before jumping into that, we do have more information and links available, so when you get the recording and the slide deck, you'll be able to click on those links and check out more information. So we do have some questions in the Q&A.
So Oscar, I'll just start with the first question. It looks like we have a lot around generative AI. First question: what security measures does Coveo have in place to protect user data? This one would probably need some of our security experts, but we have several compliance mechanisms in place to protect your user data, and that's outside of generative AI or anything of that nature. Our index is single-tenant; your data doesn't leave the Coveo platform at any point in time. So maybe we could follow up with all the accreditations that we have from our security team, because that could be a pretty lengthy topic, but this is one of the number one priorities: making sure your data is safe with us. Yep. Absolutely. And just to second that, our security measures have been in place not just for generative AI, but with our core platform. So it's something that we do take very seriously, and it's really important to our solution. The second question is: how do you prevent hallucinations? So the main technique we're using to prevent hallucination is to ground all the LLM output within your content. We use LLMs for their capability to write text and their grammatical aspect, but we don't tap into what they've learned from their initial training with OpenAI or other providers. We do what we call the RAG process, retrieval-augmented generation: we do search first, and then, based on the most relevant documents and passages, that's what we feed to the large language model and say, generate an answer only out of those elements. That's how we keep it on track and prevent it from going bananas. That would be the main way we keep hallucination in check. Yeah. Absolutely.
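To make the retrieval-augmented generation flow Oscar describes concrete — search first, then constrain the LLM to the retrieved passages — here is a minimal sketch in Python. The `ToyIndex` class, document fields, and prompt wording are illustrative assumptions for this example, not the actual Coveo implementation or API.

```python
class ToyIndex:
    """Stand-in for a search index (hypothetical, for illustration only)."""

    def __init__(self, docs):
        self.docs = docs

    def search(self, query, top_k=3):
        # Rank documents by naive term overlap with the query.
        terms = set(query.lower().split())
        scored = sorted(
            self.docs,
            key=lambda d: len(terms & set(d["text"].lower().split())),
            reverse=True,
        )
        return scored[:top_k]


def build_grounded_prompt(query, passages):
    """'Search first', then instruct the LLM to answer ONLY from the
    retrieved passages -- the grounding step that limits hallucination."""
    context = "\n\n".join(p["text"] for p in passages)
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )


docs = [
    {"source": "kb/1", "text": "Reset your password from the account settings page."},
    {"source": "kb/2", "text": "Invoices are emailed on the first of each month."},
]
index = ToyIndex(docs)
passages = index.search("how do I reset my password", top_k=1)
prompt = build_grounded_prompt("how do I reset my password", passages)
citations = [p["source"] for p in passages]  # shown to the user for verifiability
```

The `citations` list mirrors the hover-over citations in the component: each passage fed to the model can be surfaced back to the user so the answer stays verifiable.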
And I think that kind of highlights the importance of having a solid search foundation when it comes to generative answering, because with Coveo, obviously, our relevance and our AI automatically boost the most relevant content to the top of search results, and that's also leveraged as part of the generative answering solution. So when we send that prompt to the LLM, it's always the most relevant content, the most relevant snippets from that content, that we send over to get the answer. Okay. So another question here, and this is an interesting one. We do have a Case Assist product. Can you speak to how this will integrate with Case Assist? Yeah. So Case Assist is an API on top of the rest of the Coveo platform, so it will work just as well. We might add some additional values and attributes in the parameters, so you have a bit more to work with for the LLM, but it will work the same with Case Assist. We haven't designed a specific UI or changed the way the UI is designed for the case form, but that's something we will explore in twenty twenty-four, because we know generating an answer on the case form will really help with deflection. So it's almost like low-hanging fruit. We focused on setting up the foundation with the search page, having all the mechanics in place, but extending it to Case Assist will be a lot faster after we've launched the main feature. Okay. Another question about GenAI: does this use contextual data to create customized responses, based on the user profile, services that the user has, etcetera? Yes. And that's also why building on top of search is really interesting: the context that you pass into the search experience is kept to retrieve the most relevant documents, and we're working on passing that context again to the second-stage retrieval and the generation.
So it's taken into account, and we wanna leverage it even more with the second stage and generation, and we're gonna expand on it to make sure it's really working and really efficient. Okay. Great. Another question about GenAI, this one about indexing sources: are there any limitations in terms of the format of the sources — HTML, PDF, etcetera? Can you speak to that a little bit? Yeah. So you can use multiple sources and multiple types of documents. We still recommend knowledge base articles, but as you saw earlier, PDFs and some other formats will be supported. So that's one on the format. Now, Smart Snippets had an extra complexity: because it was an extractive question answering model, it would look for HTML tags to parse the content. Generative answering doesn't need this; we can cut and parse the content wherever the model finds it relevant, and the model will then take three to five passages and combine them into a new sentence. So it doesn't really matter how we slice it and dice it. You don't have that extra constraint with generative answering, and you can mix and match sources. One thing to keep in mind is that the quality of the content still remains important in any question answering model. If you have a lot of boilerplate content, headers, or duplicated content, that will tend to bias and confuse the model to some extent. So choosing your content carefully is the very first step. Also, you don't need a lot of content. We see amazing results with as few as two or three thousand knowledge base documents. So you don't need a million documents, and my personal recommendation would be to avoid the million documents, because you will probably introduce more confusion than performance.
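As a rough sketch of the "slice and dice" idea above — splitting documents into passages without relying on HTML tags — here is a hedged illustration in Python. The window size and overlap are made-up parameters; the actual passage-extraction logic is not public. It also illustrates the earlier arithmetic: one document yields several passage vectors, which is roughly why a ten-million-vector budget corresponds to on the order of a million documents.

```python
def chunk_document(text, chunk_size=100, overlap=20):
    """Split a document into overlapping word-window passages.

    chunk_size and overlap are hypothetical values chosen for this
    example; each resulting passage could then be embedded as one
    vector for retrieval.
    """
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(words) - overlap, 1), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
    return chunks


# A ~1,000-word article yields about a dozen passages under these
# (assumed) settings -- i.e., roughly ten vectors per document.
doc = " ".join(f"word{i}" for i in range(1000))
chunks = chunk_document(doc)
```

At answer time, only the three to five most relevant of these passages would be handed to the model, which is why boilerplate and duplicated passages can crowd out genuinely useful ones.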
I could speak about this for a long time, and we're gonna come out with best practices, so all the findings and learnings will be documented and available online. Okay. Great. Alright. So: I see that generative responses reference several "learn more" suggestions. How do the "learn more" suggestions compare to the traditional search results listed below? So the "learn more" links are the citations. They are the passages, the extracts of text that were found in relevant documents and that are combined together in the answer. What you see there tells you the model used one to five passages to create the answer that you see on the page. I hope that answers the question. Okay. And how do they compare to the traditional search results? They are from the traditional search results. It's just that we pinpointed a specific section of the document, a passage, found it more relevant than others in that document, and decided to give that as context to the large language model. Great. Okay. Another question: can Coveo integrate with content management systems like SharePoint? Yes. We already have a SharePoint integration, so nothing changes from that perspective, whether it's ingesting content from SharePoint or putting a search page or a search interface into your SharePoint. That's supported, and that doesn't change. Yep. Okay. So back to GenAI: does each customer have their own instance for generative answers, or do multiple customers get sent to the same LLM location? No. So we're using Azure OpenAI services, but each customer gets their own single-tenant setup. We're not mixing services and data there.
And again, we could send some documentation on how we deal with Microsoft Azure services, because we've had several questions through the beta program, and we've answered those extensively, so I'm sure there's documentation for that if you wanna get into more details. Okay. Great. Another question: is conversational search customer-facing, on a support portal, for example? Now, conversational search is a roadmap item that will come out next year. And I believe the plan is to make this customer-facing, is that correct? You decide: you can use it for your own internal teams, or you can put it on a support portal. For us it's similar to generative answering; you can use it in a workplace or in a customer-facing context, you decide. Great. Okay. I've got another question here: as an early adopter of chatbots, what advice would you give to someone who's going ahead and implementing a chatbot as a self-support tool using Coveo as a search engine? The complexities include multiple sources of content across multiple portals. Now, I can start here, Oscar, and then you can jump in to add to this. One of the great things about Coveo is that it can be leveraged within the chatbot as well. So with any chatbot that you have, using our API solution, we can integrate Coveo so that not only are your answers more relevant, but you get relevant results even if a question has not been asked before. So, again, with our composable search, we wanna be able to bring that anywhere, including chatbots. Oscar, anything to add to that? Yeah, exactly what you said. We presented UI components through this webinar, but you can simply use the API and the results to power an existing chatbot solution you have. So Coveo will be the intelligence behind your chatbot interface, to make it a composable solution.
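As a minimal illustration of the headless integration just described — a chatbot backend calling the search API instead of rendering the visual components — here is a sketch in Python. The endpoint URL and payload field names are placeholder assumptions for this example, not the documented Coveo Search API; check the actual API reference before building against it.

```python
def build_search_request(query, context=None,
                         endpoint="https://platform.example.com/rest/search/v2"):
    """Build the request a chatbot backend might send to a hosted search API.

    The endpoint and payload fields are hypothetical stand-ins used to
    show the shape of a headless integration.
    """
    payload = {
        "q": query,            # the user's chatbot utterance, as-is
        "numberOfResults": 3,  # keep the bot's reply short
    }
    if context:
        # Contextual data (user profile, plan, locale...) that relevance
        # models can use, as discussed in the Q&A above.
        payload["context"] = context
    return endpoint, payload


endpoint, payload = build_search_request(
    "how do I rotate my API key",
    context={"plan": "enterprise"},
)
```

The chatbot then renders the top results, or a generated answer, as its reply — so the same index and models that power the search page and the Insight Panel also power the bot, across multiple content sources and portals.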
So you don't need to use our visual components; you'll find everything through the API calls and can leverage that. Okay. Alright. A couple more questions around generative answering. One is around analytics: can you share any details around how analytics are impacted by generative answering? Yeah. So with customer zero, our own Coveo implementation across the different portals we have, we see a slight drop in clickthrough rate. If you're looking at search metrics specifically, search KPIs, we see a little bit of a drop, but nothing that's worrisome or outside of what we'd expected. We see more engagement, longer time on the search page, which leads to longer sessions. And then I think you have to step back a little bit from the search KPIs and look at your business metrics, such as self-service, case deflection or case avoidance, and case submission, where the correlation could be a little harder to establish depending on your situation, but that's really where we see the impact. And so far it's been really positive, so we're pretty confident that it's gonna generate some good results. Maybe something we didn't mention through this presentation is that there will be a report template already baked in. In the Coveo admin console you have a reporting section; you'll just go to the templates and you'll see a generative answering template that already takes care of showing you the volume, the number of likes, dislikes, and all the things you can imagine. So, yeah. Okay. So just to recap what you said, we will have a report available with generative answering and the kind of activity that's happening with that component. There are a couple of questions around the details that are being captured within the analytics.
Do people have access to the prompts and the answers that were provided to the customers? Yeah. So there are a couple of things in those last questions. Right now, we're capturing everything that's happening through server-side events. So in Coveo's data, in our events, the data exists: for that specific user and that specific query, what was generated. We need that not only for reporting purposes; we also understand that customers might need it for auditability and any kind of verification. At launch, it won't be immediately available to customers. So the data will be logged and captured, but not accessible. And I think we're gauging the interest in seeing all those answers, because that's a lot to scan through. So we have the data. Are we gonna create an interface to show it to you guys? That's up for grabs, and that will be decided in the next couple of quarters based on how many people submit a request for it. On the prompts, we're not yet opening up the prompts to any customers; they remain within our data science domain. And we have an item on the roadmap for platform extensibility, to allow you to change the prompt and test the prompt, like you might have seen in other tools. We will get there. But we know it's a bit like a loaded gun that we're handing off to our customers, because the moment you change the prompt, you take on the responsibility for the results: you've injected something different that we can't control, whereas the prompts we are using have been tested and evaluated. Every time we change even a single word in the prompt, we have a batch of evaluation tests that run to tell us if the performance has increased or decreased on a set of private and public data that has been annotated. So we have a pretty rigorous process behind it.
We just wanna be careful when we open it up. That's why it's not in there for GA. Okay. And we'll just take a couple more questions here. Can Coveo integrate with third-party applications? Yes. It's a broad question, but yes; you should talk to either your architect or professional services, depending on which integration you are referring to. We have connectors and integrations that are baked in, and you can do custom ones. So it's case by case, and you might wanna discuss the specific integration you are looking for. Okay. And there is a question about whether, with Coveo generative answering, you can leverage your own company's Azure OpenAI instance and subscription. So right now, we're using the Coveo Azure instance. We are discussing how we can make it so you can bring your own model and your own subscription. We're also assessing the cost, because we are getting discounts from Azure, obviously, since we bring a lot of traffic. Some of you might be in very large corporations, so you might have even better rates, but not everybody has those rates. We also negotiated the thresholds and limitations of queries per second. So we think we have an advantageous package to start with, but we'll definitely be looking into that in twenty twenty-four, because more and more customers have their own subscriptions, and it makes it easier for legal and compliance purposes to work with your own. So we're aware, and it's probably gonna be a twenty twenty-four item. Great. And is generative answering also available to support service, specifically to provide replies to users over email? I would see an email as being similar to a case.
So someone submits a question through email and it falls into your inbox, or an agent's inbox, as a case, and we're investigating how we infuse generative answering into some of those use cases. One of my colleagues is working on exactly that discovery right now. So we're expecting to be ready, or to have at least a prototype, early next year for the cases and the email support as well. Okay. Great. We'll take one last question: how does this perform with users still searching with terms and keywords versus asking questions? It's a very interesting question. It's the change of behaviors that we were discussing. We definitely see a trend of longer queries going up. Keyword search remains the type of query with the biggest share of searches, but we think the trend is just gonna continue to go up as more people see that when they input long queries, they get more granular, more precise answers. People are going to progressively change their behavior. So we're not gonna replace keyword search; power users in some contexts know exactly what they're looking for, so they'll just go with keywords. But we're seeing longer natural language questions starting to rise, so we'll definitely keep an eye on that. Yeah. So we'll be able to provide great results either way. But, yeah, I do see this being a behavior change for people everywhere, especially as this continues to be something that gets embedded in all of our experiences. So, Oscar, you shared a lot of great information, some really exciting stuff around our generative answering capabilities, as well as our builders.
Really, as I said earlier, our goal is to help you create those effortless customer service experiences for both your customers and your agents, and to do this through autonomous admins and no-code GenAI. Now, as Oscar said, generative answering is something that's very new. We're excited to launch this GA in December, and we're continuing to evolve and grow; this will continue to be a top priority for us. So the more feedback and the more questions we get from you, the better solution we can build. So thank you so much. We appreciate you spending time with us today, and have a great rest of your day. Thank you.
New in Coveo: Fall 2023: Service & Support

Bonnie Chase
Senior Manager, Marketing at Coveo
