Hello, everyone. Good afternoon, good morning, and good evening to those of you joining us from Europe. Really excited that you're here with us today. My name is Devin Poole, senior product marketing manager at Coveo. Happy to be joined by Eric Immerman from Perficient. Eric, so glad to have you. Nice to be here. Thanks for having me. It's a great session we've got lined up. We're talking, no, not about the Masters and who we expect to finish at the top of the leaderboard today, but rather about the hottest topic in customer service, and particularly self-service, right now: how can we leverage generative capabilities to improve resolution for our customers? We've all spent the past eighteen or so months watching GenAI, researching it, exploring it. What we're really excited to talk about today is how you can bring this capability to bear within your organization. Both Eric and I have been working with a number of organizations and enterprises that are now live, at least on the Coveo side, and, Eric, in your world as well. It holds a lot of power, and we're seeing these early successes. Our goal today is to help you understand how you can build a generative answering capability, what the key components are going to be, and to give you some sense of what other organizations have done. Now let's start by taking a quick look at why we're focused on self-service in particular today. When you look at recent research, you see that today's customers, in many ways, demand self-service. Right? On the left, seventy percent of customers in a Gartner poll say they prefer self-service over contacting a CSR. That doesn't mean it's the only thing they do, but it is where the vast majority of customers today begin their resolution journey.
In the middle, from the folks at CMP Research, seventy-two percent of executives say they've seen increased demand for self-service over the past three years. For a lot of us, that started with the pandemic, when we were forced into it and our customers' digital dexterity increased significantly. People who were resistant to self-serving before got used to it and learned that it works. Right? Then on the right-hand side, from a Gartner poll, almost forty percent of Gen Z customers say they'll give up on an issue entirely if there's no way to self-serve. That means, if it's a feature or functionality issue, they'll just let it go, it will fester, and those folks will leave you. And if you think this is just the kids complaining, that Gen Z will learn eventually, well, about thirty-two percent of millennials and high-twenties percentages of Gen X said the same thing. So folks today are used to doing things on their own. Eric, I'll pull you in in just a second because I want to get your thoughts on what you're seeing and hearing. But to me, this doesn't mean self-service only. One of the big mistakes organizations often make is thinking, we will take this block of customers, our Gen Z customers, and make them self-service only. That's not the way to think about it, and certainly not the way to interpret this slide. What it means is that there are certain issues, certain experiences, that should be self-service only and that you should drive hard, and others where you want to draw customers in. So we know this is the starting point. It's going to become a bigger piece of our channel landscape, but not the only one. Eric, what are you seeing in your world? I think I'm going to hit on something you just highlighted, which is the channel landscape. Right?
Realistically, there's not a one-size-fits-all need for customers here. This isn't: I'm going to make my website great, everybody has to go to my website, and that's the only way I'm going to help them. They're going to want to interact over your chat experience. They might want to interact in your app. They may want to interact within an experience you provide in product, depending on how your organization runs. The other piece we're seeing is that customers don't just demand self-service, they demand coherent self-service. If I give you self-service answers that are different on five different channels, because I have different knowledge bases or different GenAI models in each of them, all I'm doing is creating a confused customer, not an assisted or helped customer. That's ultimately going to either alienate them and push them away, so they say, you know what, it's not worth it, I'm going to find somebody who can help me better, it shouldn't be this hard. Or they're going to call your contact center or find some other way to solve the problem. And any time that's happening, you're out of control of the experience in some way. If you can't help me, I'm going to go to Google, and I hope Google gives me the right answer or even brings me back to you as a company. You're out of control of that experience, and that's a little bit of a scary place to be. Yeah. So much of it comes down to a trust issue, doesn't it? Am I able to trust the information I'm getting and look at the sources of that information? And to your point, if a company is providing different information on its own website because it's powered by different knowledge bases, that kills trust each and every time. And that's where we've seen this rush toward self-service.
The success metrics haven't been there for the decade-plus that companies have been investing in it. We've always said, we want self-service, we know customers want it, but in the background of this demand we keep hearing, we need customers to adopt it. Those two things are in many ways in conflict with each other. Customers have adopted it. They may just not be adopting it with your organization, and that says more about you than about them, at least the way I see it. Of course, as self-service has come up, organizations are looking to harness the powerful new capabilities of generative AI. That's where a lot of companies are turning when it comes to improving self-service. You can see it's nearly ubiquitous on the left: ninety-five percent of service leaders expect customers to be served by an AI bot at some point in the service journey by 2025. That's now eight months away for most of us. What really struck me about this is the phrase "at some point in the journey." It doesn't mean it's only going to be that way, or that this is going to take over everything, but it will become a critical piece of the functionality we offer. In the middle, eighty-two percent of companies expect to offer GenAI in their customer-facing self-service by 2025. And on the right, eighty-six percent of companies plan to use GenAI in their knowledge bases by 2025. So that's what we want to look at here: what is the opportunity? And by the way, thank you to the folks who are jumping into the Q&A already. I forgot to mention at the top: please, we want your questions. It can't just be Eric and me talking here. Get your questions into the Q&A box. We've also got a number of polls that we're going to start running.
The first one, which I'm bringing up now, asks: where is your organization on the AI journey? What have you done so far? Are you still formulating your strategy or identifying potential use cases? Do you have defined use cases and plan to be implementing in the next six or so months? Do you have defined use cases with an implementation time frame further out, six to twelve months? Or are you delaying investment, or not actively looking to use this in your service capabilities at all? You can see the responses coming in here, and it's really interesting to see where things are going so far. A couple of folks are jumping in saying, yes, we've got something, we're going to be implementing in the next zero to six months. Almost twenty-five to thirty percent are saying, we're not actively looking to use this. That is really interesting. Do you want to jump into the Q&A or the chat and tell us why you're thinking about it that way? I'd love to hear from you. For those of you saying you're not actively using it: what's stopping you? Are you concerned about security, regulations, risk, something else? With that, we can close the poll. Thank you to everyone who voted. We see a wide range of responses here, and that's great. The largest group, forty-one percent, say, we're still formulating our strategy and identifying use cases, and that's fantastic. That's why we're here today, to start talking about those things. So, closing the poll. What we've learned, Eric, is that the path to success isn't about building GenAI as a standalone capability. In fact, GenAI is built on the backbone of tried-and-true information retrieval methods.
And so, with what you've been working on in your practice, I found it really interesting when we had a couple of initial discussions about what's going to lead to success and what that framework looks like. Yep. The big picture, when we take a step back, is that you've invested as an organization in knowledge for a long time. You've got knowledge articles, cases, documentation on your website, help FAQs, wikis, any number of different things your organization has likely put a lot of effort and content into over time. What we don't want to do is immediately say: great, you've done all this curation, all this answering, we're going to throw it out the door, stick ChatGPT in front of it, and it's going to be magic and give all the answers for you. What we've found is that all of the knowledge you have worked on and curated becomes a very essential part of making an accurate, trustable GenAI solution, either for external customers to self-serve, or for internal agents and customer success folks to use to help your customers when they come through a more human-driven channel. Essentially, what's happening is that for most trusted deployments of LLMs, search is becoming the basis of the LLM opportunity. A lot of people have looked at generative AI and said, well, what's this going to do to Google? Does this get rid of Google? I'll just go to a chatbot, type in what I want, and get an answer. We're finding it actually evolves on top of the traditional search experience that knows about all of those different knowledge bases, as an additional layer you put on top that builds off of the work you've done before rather than replacing it.
So I want to take a second here and give an idea of what that spectrum of search looks like: how does this all build on top of each other, and why does it matter that we pay attention to these pieces that are not generative LLMs? It's work you've probably been doing for a long time to make your content accessible. How can all that work help refine and optimize what a large language model, a generative AI tool, will actually give to your customers and your agents? Step one, at the lowest level of search, if we look at this as a capabilities ladder, is keyword search. Everyone here, I can almost guarantee, has used keyword search before. This is the level of search you were looking at back in the days of Excite or AltaVista; I might be dating myself a little bit. Essentially, it's a very simple algorithm, the BM25 algorithm over on the right. How many times is a word found in each document? What total percentage of a document is made up of a word? And if there are multiple words I'm searching for, how close are they together on average? Everything here is about matching the words a user types with the content your organization has. That could be articles, products, PDFs, Word documents, any number of things you might want to give back. Beyond that, everything we do to improve keyword search is simply trying to make words match. I can put synonyms in place, or an ontology. I can understand that cardiac arrest and heart attack are the same thing simply by telling the system they're the same: if you see a search for heart attack, search for cardiac arrest as well. I'm doing spell-checking if somebody makes a typo or fat-fingers something.
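As a rough illustration of the BM25 scoring just described (term frequency with saturation, rarity weighting, and document-length normalization), here is a minimal sketch in Python. The documents, query, and parameter values are invented for the example; production search engines layer many refinements on top of this core formula.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with the classic BM25 formula.

    Repeated term matches help, but with diminishing returns (k1), and
    long documents are penalized relative to the average length (b)."""
    tokenized = [doc.lower().split() for doc in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    n = len(tokenized)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for d in tokenized for term in set(d))
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            # Rare terms get a higher inverse-document-frequency weight.
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

docs = [
    "how to reset your password in the mobile app",
    "shipping times and return windows for orders",
    "password requirements and account security",
]
print(bm25_scores("reset password", docs))
```

The document matching both query words scores highest; the one sharing no words scores zero, which is exactly the "make words match" limitation that synonyms, spell-checking, and the semantic layer below try to work around.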
I'm doing boosting and burying to say, hey, my FAQ document should come back above the blog post I wrote five years ago. We're essentially working around the edges of basic keyword capabilities. The next piece is semantic search. Semantic search is the idea that we take a machine learning model, have it read through your content, and break that content down into concepts. Just to give an example of how this works, because I often find that people struggle, myself for a long time included, to really get a mental model of it: humans work in words, but computers work in numbers. So computers need a way to represent a concept as numbers. The way we do that is to break down a specific word, term, or document into a series of concepts and how strongly each of those concepts is represented. At the top here, "a royal" is maybe ninety-nine percent royalty, one percent male, one percent female. There's not necessarily a gender connotation to royalty as a term. A king, however, is ninety-nine percent royalty, ninety-nine percent male, one percent female. There's a male connotation to "king," so I can give a numeric representation of how strongly it expresses the concept of male or female. A queen is ninety-nine percent royalty, maybe two percent male. There's a band called Queen made up of men, and some men refer to themselves as queens, so maybe we have a little bit more male in there, but we're still about ninety-nine percent female. This gives us some really nice behaviors from a mathematical perspective that come in handy as we look at semantic search. First things first: I can do math on concepts now. Right?
I can do king minus royal and see that 0.99 minus 0.99 is zero for royalty, 0.99 minus 0.01 is 0.98 for male, and so on. The result is roughly equivalent to "a man." The idea that a king who isn't a royal is a man is something humans know intrinsically because we think and talk in English every day, but now I can build a mathematical model that understands it. The other thing I can do is represent these numbers as lines. I'm going to bring everybody back to high school geometry: think of each of these as a line pointing off somewhere in space, maybe x, y, z for this three-dimensional example. I can use cosines to measure how close those lines are to each other, and the closer the lines, the more similar the concepts. What this lets us do, if I go back to the last slide, is build a knowledge graph, essentially a vector-space view of all of your content, and ask how close every piece of content is to every other. Is this FAQ on shipping times similar to one on returns or not? Is it more or less similar than the details of working with FedEx? When someone types plain English into the search bar, I can run that same model, understand the concept of what they searched for, and ask whether it's similar to what somebody was looking at. So now I can understand knowledge at a conceptual level to begin with. What you're looking at on the right is an actual graph, a vector space, from a top-twenty-five US ecommerce company, showing all the products in their catalog. We're able to see how those products relate to each other conceptually based on how they're interacted with.
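To make the king-minus-royal arithmetic and the cosine idea concrete, here is a toy sketch. The three-dimensional (royalty, male, female) vectors are invented to mirror the example above; real embedding models learn hundreds of dimensions automatically from text rather than using hand-picked concepts.

```python
import math

def cosine(a, b):
    """Cosine similarity: how closely two concept vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy concept space with dimensions (royalty, male, female),
# following the percentages used in the talk.
vectors = {
    "royal": (0.99, 0.01, 0.01),
    "king":  (0.99, 0.99, 0.01),
    "queen": (0.99, 0.02, 0.99),
    "man":   (0.01, 0.99, 0.01),
}

# Arithmetic on concepts: a king who isn't royal is roughly a man.
derived = tuple(k - r for k, r in zip(vectors["king"], vectors["royal"]))
print(cosine(derived, vectors["man"]))            # very close to 1.0
print(cosine(vectors["king"], vectors["queen"]))  # share royalty, differ on gender
```

The same cosine comparison is what lets a search system ask "is this query conceptually close to this document?" instead of only "do the words match?"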
We don't have it in how this is displayed, but this is actually an animated image. It moves over time based on how those products are bought together, so I can see similarity in purchasing behaviors across all of these users in order to understand how I should group products together, recommend them, and so on. All of this is very powerful in itself, but it's even more powerful when added on top of keyword search. What I can do is run a semantic search and a keyword search at the same time, and if the two overlap, I can say this result is even more important: not only are you using the same words as my content, but the concept of what you're asking for matches too. Let me boost those up together. We can use this not only to improve search capabilities but also contextual relevancy. If I understand you're looking at a document, maybe that little purple dot in the top right corner of the graph, then when you search, let's return documents that are conceptually similar to what you're looking at, so I can give you things that are contextually relevant to the journey you're on. If I'm on a hospital website looking up information about diabetes and I search for a doctor, let's show endocrinologists who can help me with my diabetes, not an orthopedist who's going to talk about my broken elbow. Some of those are related concepts, some aren't, and we can use this to contextualize for the user on the fly. Moving on, the next piece we get into is called behavioral search. You run into this all the time on the web. You experience it every day, though you probably don't think about it much.
Essentially, we can keep track of the behavior of everyone who is using our experiences, be it our chat experience, our web experience, or our app experience, and find signals in it. What did you search for? What did you click on? What did you view? What did you buy? Did you open a case? We can look at all these signals and use them, first off, to identify what I'll refer to here as digital twins. I know that Larry and Sally are searching similarly to one another, so when Larry does his next search, let's look at what Sally has done in the past and what she found successful, and use that to adjust what Larry is going to find, to adjust the answers Larry is going to get. I'm using the history of every interaction with my company to drive better outcomes. We build this on top of semantic and keyword search: I'm taking those results, which are native to the content, and bringing past user behavior into them as well. You constantly have this source of learning, signals, and information coming at you just from how your users interact with your organization, and we can harness it to drive better outcomes. This happens all the time. You run into it with Amazon, with Google, with any search experience from the big companies you use every day, where your results are constantly being modified by the success and behavior of everyone around you, both in a real-time, trending way and in a longer-term, what-is-most-popular, what-is-most-useful way. Now you might be saying: great, you've taught me all about search, but I thought we were here to talk about GenAI. Where does all of that come in? The layer on top of this is the newest, obviously, and that's the generative AI layer.
There is an approach we use with generative AI called retrieval-augmented generation. You might have heard of it as RAG, which is kind of a harsh name, but I'll refer to it as RAG from here on out. The idea with a RAG implementation is this: when I get a question from my user, say, in a chat context, I take that question, and before I send it to my large language model, my Google Gemini model or my Llama model or my GPT model, I first send it to a system, in this case a search system, and say, give me useful information about what this user is asking. You're asking what the return window is on this product? Go get all of our internally written documents about return windows on products. Then I call the generative AI model, the ChatGPTs of the world, with a prompt similar to the following: answer this user's question (insert the question); you must answer it using these results that I'm sending you; and you must cite your sources. Effectively, in real time, I'm grabbing useful information from my own canonical knowledge bases inside my company, passing it to a generic, off-the-shelf commercial large language model, and saying, you have to answer based on this data. What I'm doing is turning that large language model from a knowledge engine, where I'm just hoping the Internet has the knowledge this person needs, into a reasoning engine. Its job is to read my documents and my content very quickly and give back results accordingly. If you think about this a little, what ends up mattering is that the quality of the search underneath this RAG approach becomes really, really important.
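A minimal sketch of the retrieve-then-prompt flow just described. The `search` function here is a naive word-overlap stand-in for the keyword/semantic/behavioral stack discussed earlier, the knowledge-base snippets are invented, and the actual model call is left out; the point is only the shape of the grounding prompt.

```python
def search(question, knowledge_base, top_k=2):
    # Placeholder retrieval: rank documents by word overlap with the
    # question. A real system would use the full search stack.
    words = set(question.lower().split())
    return sorted(knowledge_base,
                  key=lambda doc: len(words & set(doc.lower().split())),
                  reverse=True)[:top_k]

def build_rag_prompt(question, passages):
    """Ground the model: answer only from the passages, and cite them."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return ("Answer the user's question using ONLY the sources below. "
            "Cite sources like [1]. If the sources do not contain the "
            f"answer, say you don't know.\n\nSources:\n{sources}\n\n"
            f"Question: {question}")

kb = [
    "The return window is 30 days from delivery for most products.",
    "Standard shipping takes 3 to 5 business days.",
    "Gift cards cannot be returned or refunded.",
]
question = "how long is the return window"
prompt = build_rag_prompt(question, search(question, kb))
print(prompt)  # this string is what would be sent to the LLM
```

Because the retrieval step and the model call are separate, swapping GPT for Gemini or Llama only changes where the prompt is sent, which is exactly the future-proofing benefit discussed below.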
Because if I get better, more contextual results, things related to the journey I'm on or the product I just bought or the question I just had, a lot better information gets fed into the generative AI model, and I get a much better answer out of it accordingly. Beyond that, there are a lot of other benefits to this approach. We can use content security, because search systems can inherit the access controls and permission lists for who's allowed to see what within my organization. I can make sure that in, say, an HR context, when someone asks how many weeks of vacation they get, they don't receive the six-weeks answer meant for somebody who's been here twenty-five years when they themselves started yesterday and get two weeks. Because I know who you are, I can enforce what you're allowed to see. I can bring in context, as I've highlighted: what journeys have we been on? An example we're seeing with some of our customers right now, a little more in the tech space, is a generic question like, I received a "can't log in" error. There are very different answers to that depending on the product you're using. So if we know from context what products you own or which product you're currently in, I can pass that context to the search, get back the answer for that specific product, and have a large language model generate an answer for the product you're actually in, rather than one you've never heard of or used. I can also minimize hallucinations. One of the big fears organizations have about large language models is that they could give the wrong answer. Right?
You've heard the horror stories out there: the dealership that sold a car for a dollar because somebody figured out how to work around the LLM, or the poor lawyer who used a large language model to cite cases and got brought down by the judge for citing fake cases the model had essentially made up. In this case, we're not letting it make anything up. We're saying it has to use information that we are providing, and it has to cite its sources so the user can go verify and look at that document if they need to. It also lets us use up-to-date information. I don't need to retrain a large language model and push new data every day. My search systems can connect every five, ten, or fifteen minutes to my various systems and pull in the latest updates. So if you have a late-breaking issue, a new problem, a recall, or something else going on, that information can immediately be answered via generative channels rather than waiting for the next load or retraining of the model a week from now. It's cheaper, because I can use simpler large language models. They don't need to know everything in the world; they just need to know how to reason through data, and we're providing them a limited set of data to read through. So it can be cheaper and faster, because you can use smaller, less sophisticated models to reason over this and still get high-quality answers. And it's future-proof, because when Meta comes out with Llama 3 in the coming weeks, as they're saying will happen, or when Google has the next version of Gemini, we don't need to retrain all of our content on those models. We just need to take our RAG approach and say, hey:
Instead of sending that prompt to GPT-3, send it to Llama 3 because it's the newest, best model out there. Or if GPT-5 comes out, just send it to GPT-5 instead of GPT-4, and I don't need to rebuild or rearchitect everything internally. A lot of really good benefits here. At the end of the day, it means we can start powering conversational and generative experiences where people don't get a list of ten blue links; they get the answer. And we can do that off of known good, verified information, rather than putting this out, trying to train it with lots of examples of good information, and hoping it will pick our information over what the Internet says about the general topic more broadly. Any thoughts on that, Devin, in terms of the approach? Yeah. As you know, we practice RAG here at Coveo, and it's exactly for the reasons you lay out. One of the biggest misconceptions when ChatGPT hit the world and people started using it was that we all thought it was made to be a knowledge base that gives us correct answers. But it wasn't. It was never designed for that. It was designed to converse in a human-like manner, to create new and unique text that looks like a human had written it. That in and of itself was the biggest innovation: it can now speak and write the way a human would. But to your point about the security, the contextuality, and the freshness of the content: that is something you as a company want to own. You do not want to cede that control. You want to build it into a single, unified source. Right?
We have our source of truth, and that is what we can control and what we will use for that transforming capability: reading it really quickly, synthesizing it, understanding the meaning and the context behind it. The best humans can do that too, but not nearly at the speed a large language model can. It's not going to replace all of your humans, because there are new questions, new things, unique situations coming up. But it's going to help formulate answers for the situations where you do have an answer. It exists in your company's documentation, but it would take a customer quite a while, several hours, to go through it: I've got to read four different forty-page PDFs and then synthesize that answer. It would take a good agent a lot less time, but it would still take them time that a model can collapse to seconds. So, we have a couple of good comments coming in on the Q&A, and thank you to everyone who's putting things in. If you've got a question, put it in the Q&A; comments can go in the chat. But either way, wherever you put it, we will find it. One question says a number of senior business leaders are looking at AI as purely an IT thing: how have you helped business leaders see AI possibilities as strategic opportunities organization-wide, not just something for IT to worry about? Which I think is a great question. Eric, do you have thoughts on that one? Yeah. I'll be really straightforward here. The way we've done this most effectively is by helping people understand the money behind it. At the end of the day, you have users who want to be self-served, who want to solve their own problems.
You probably have another set of users who may never want to be self-served. They just want to call and talk to somebody and have them solve it, or open a case and talk to somebody. The short story is that whenever someone calls your contact center, you as an organization are likely paying for someone to answer that phone and answer that question. And if that person internally has a hard time finding all the information, they're going to take longer to answer it. If that question could have been answered by an external channel, you didn't need that person internally in the first place. So what we've seen for many business leaders is that, from a monetary perspective, this is fundamentally about making the contact center more efficient: having better-informed agents who haven't needed as much training. I don't need to keep them six or eight weeks learning all of our knowledge bases and the ins and outs of all of our products and our company if I have something that will generate an answer they can read, understand, and dig into, because the details come right along with the answer. It also means we can deflect up front and reduce the need for some of those service agents to begin with. One of the use cases I see, and I didn't expect it at first, to be quite honest, because I work in computers, I'm more of a digital millennial of sorts, very much part of that statistic that never wants to talk to a human if I don't have to: there are people who never want to talk to a computer. They want to talk to a human. And a lot of times, what you'll see is that they go and submit a case through your systems.
Maybe I have to say what the problem is today, provide the information needed to help, say how high a priority this is, what have you. And many of them would do that without ever trying to look for the answer themselves. Right? They would just come on, submit the case, and say, good, I've offloaded this to my IT or to your folks to go do the work for me and give me an answer. I can understand that mindset, to be quite honest, but that's a very typical approach. What we found is that during that case-creation flow, be it in a chat-type experience, in an app, or on a website, right before they hit submit, we read their question, do a search, generate an answer, and put it in front of them: hey, before you open the case, we actually did some research and found this answer. Could this help you? We can see really big deflection rates, twenty to forty percent, from people who just didn't happen to look. If I had looked, I could have found this. I didn't. You still did the work for me; you just didn't use a human to do it. You used a system that could do it much more scalably, quickly, and efficiently to give me that answer. And so, to the original question about the business: yes, IT is often going to be involved in implementing it, but they're implementing it around business-generated content and business-controlled logic about who should see what, where the context applies, and what is important for you. And ultimately, it's the business that is going to get the benefits, particularly on the service side of an organization, because you do not need to staff as many people. Or if you're growing, you don't need to grow your service organization linearly. Right?
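The pre-submission deflection flow Eric describes, read the draft case, search the knowledge base, and offer a suggested answer before the case is created, could be sketched roughly like this. The knowledge base, the keyword-overlap scoring, and all field names here are toy stand-ins for a real search and generative service, not an actual implementation:

```python
# Hypothetical sketch of the case-deflection flow: before a user submits a
# case, search a knowledge base with their draft text and, if something
# matches, show a suggested answer with its source instead.

KNOWLEDGE_BASE = [
    {"title": "Reset your password",
     "body": "Use the 'Forgot password' link on the sign-in page."},
    {"title": "Update billing details",
     "body": "Billing can be changed under Account > Billing."},
]

def suggest_answer_before_submit(draft_case_text: str):
    """Return a suggested answer dict for the draft case, or None if no match."""
    words = set(draft_case_text.lower().split())
    best, best_score = None, 0
    for doc in KNOWLEDGE_BASE:
        # Toy relevance: count overlapping words with title + body.
        score = len(words & set((doc["title"] + " " + doc["body"]).lower().split()))
        if score > best_score:
            best, best_score = doc, score
    if best is None:
        return None  # nothing relevant: let the case go through to a human
    return {"suggestion": best["body"], "source": best["title"]}

print(suggest_answer_before_submit("I forgot my password and cannot sign in"))
```

A real system would replace the keyword overlap with semantic retrieval and a generated answer, but the shape of the flow, intercept, search, suggest, then submit only if the suggestion did not help, is the same.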
You can grow sublinearly because you're using the technology provided by IT to essentially reduce that scaling factor, as it were. Ultimately, every company I've ever worked with has budgets, goals, targets. This can be a way to essentially hack some of that by having a one-time or fixed-cost system replace the variable, scaling cost of labor in certain scenarios. Yeah, and Eric, I totally agree. It's about connecting it to those strategic value drivers of the business. What is it that we need to do? And Bob, to your question: is it an IT thing? No, not necessarily. But is it a business thing? Not exclusively either. It's both. This is where we need to form that partnership where, to your point, IT is going to be doing a lot of the implementation and a lot of the maintenance, but they shouldn't be the only ones doing the maintenance. It needs to be cohesive, and the way we've designed our admin consoles, you want business leaders getting in and working with this together with IT. So the goal, in your example, which is great, is to increase that deflection at the point of case submission for those people who may not want to go looking for the answer on their own, or didn't even know it existed, who are just so used to your people handling this for them, and who go, oh wow, there's an answer right there. At the end of the day, I remember back to some research I did in the early days of creating The Effortless Experience. We asked about customer preferences when it came to channels and things like that. And if you ask customers their preference, they'll tell you all day long, and it'll sound like they really want this: oh yeah, I want a human, just give me a human all day, humans can solve this problem.
But, you know, I was running this user group, actually, in the early days of chat, and this person was saying chat this, chat that, I always look for a chat. And we asked him, well, what if chat wasn't available? Would you, like, not do business with that company? And he said, no, I'd just look for the next best thing and use that. We were like, wait a minute. You just spent ten minutes raving about chat being the greatest thing in the world. What gives? And he said, look, you guys don't understand me. I don't really care how it gets done. I care that it gets done quickly and easily. My goal at the end of the day is to get this problem solved so I can continue using your product. And so there are two types of preferences: there are little-p preferences, which often manifest as "I want a human," and then there's the big-P preference. And the best news for us is that it's the same thing it's always been, which is resolution. Fix my problem. Make it go away, and I will continue to be happy. Make it go away as quickly and easily as possible, and I'm going to be even happier with you. And so looking at this, you start to see a lot of these things connect, and you build those partnerships. Think of it as a tool in the toolbox. But if IT doesn't know that the service function is building a deck on the back of the house, and IT thinks, oh, we're putting in a sidewalk in the front of the house, that's never going to go well, because IT says, we're providing you with tools to pour a sidewalk, and service says, what do you mean? I need a cement mixer. Why is that not here? So that type of good partnership and communication says, we can bring AI to bear, but it's got to be for the right things. And I'm starting to see some interesting comments come in as well. Shraddar, I'll answer you; we're going to show that in just a second. You're asking, can it have memory?
Can it use earlier iterations? Absolutely, and that's where this world is going: how do we make sure that we are having the right types of conversations with our data? And can an organization implement controls over the information? Exactly right. That's what you want to do. You are controlling your information, and in fact, I'll show you exactly where this happens. Eric, you so perfectly laid out the case for why a RAG approach is so important, and why good generative answering matters, not just "we want to implement AI," which I hear from some people. I'm also hearing what I think is another common conflation, which is that all AI is generative, and that's of course not true at all. We'll talk about that in just a minute. But the idea is that good generative answering is built on a foundation of knowledge retrieval, relevant knowledge retrieval, because that becomes critical. We saw this with ChatGPT in the early days. It can generate an answer, but whether that answer is relevant to my situation, whether it's factually correct, and whether it is contextually correct for what I'm doing, that makes the difference between a shiny new object and a good business tool that we can start to leverage. So let's take a look at it. I'm sorry that this page is a little bit of an eye chart. Start at the bottom left-hand corner; that's where we'll go, and we'll work our way around clockwise. So we're starting at about eight o'clock with our content sources. That's where all of this relevance, and all successful generated answers, start. Where is your content being kept? What type of content do you need to bring to bear within your organization? For most companies that we work with, and most companies today, that content lives all over the place. Right?
It can be product content in Jira or Confluence. It can be marketing content or product videos that live on YouTube. It can be on your website. It can be things that you pull in from your CRM or systems of record that tell you who this person asking is. So you need to securely connect those content sources into that single version of truth, and that's number one here. Start to understand not just what a document is, but what's contained within it and who has permission to access it. Because, Eric, to your point about whether it's two weeks or six weeks of PTO, well, it depends on who's asking, and you will understand who that person is on the other end by building that secure index, by doing document chunking, creating those embeddings from chunks of documents, and then generating vectors based on that. That's another security layer to put in: you are vectorizing that content, which allows you to find the relationships between things. So it improves relevance, and, as we'll come to down the line, it protects your company from something like a GPT or another large language model getting its hands on your content and using it to train. Then you build your indexing pipeline. What are the right questions to be answered here? What do we want to put in that pipeline for our customers? And if you move to number two, on the left-hand side, that's where this unified hybrid index comes into play. That's where the embeddings and vectors live, a vector database contained within that pipeline. So you could say this is where answers come from. From there, we're able to start to practice not just RAG, but relevance-augmented generation. So we are building that buffer.
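The chunk, embed, and index step described above can be sketched in miniature. This is a toy illustration only: the bag-of-words "embedding" and the in-memory list stand in for a real embedding model and vector database, and the document fields are assumptions, not a real schema:

```python
# Minimal sketch of building a permission-aware chunk index:
# split each document into chunks, "embed" each chunk, and store it
# alongside its source document and the permissions on that document.

from collections import Counter

def chunk(text: str, size: int = 12) -> list[str]:
    """Split a document body into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector (a real system uses a model)."""
    return Counter(text.lower().split())

def build_index(docs: list[dict]) -> list[dict]:
    """Index every chunk with its vector, source title, and allowed roles."""
    index = []
    for doc in docs:
        for c in chunk(doc["body"]):
            index.append({"vector": embed(c), "chunk": c,
                          "source": doc["title"],
                          "allowed": set(doc["allowed"])})
    return index

docs = [{"title": "PTO policy", "allowed": {"employee"},
         "body": "Full-time staff accrue PTO monthly. " * 5}]
index = build_index(docs)
print(len(index))
```

The key design point the speakers make survives even in this sketch: permissions travel with every chunk into the index, so who-may-see-what can be enforced at retrieval time rather than bolted on afterwards.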
We are putting everything in our index that is controlled by our company, and finding the right documentation is where relevance AI comes into play. So if two people ask the same question about something as innocuous as PTO, well, that answer is going to be different based on who is asking. We're getting the most relevant results to come up, and we're grounding that in a fact base. That's how you control for hallucinations. Essentially, you build your own little box that has all of your company's information. And then, as Eric so clearly laid out, you answer using the information that we have found, and only that information. That sense of grounding and citation lets the customer know this is the right thing, and here's where it came from; that's how you build trust with your customers. And from there, it moves into that unified, relevant experience. I'll dive into that a little further on the next slide. But the idea here is that your customers have a unified source of information no matter where they're interacting with your company: whether you're a SaaS company and it's in-product, or it's on a help and support site, an FAQ, a user community, anything like that, or even in a bot. It's all coming from that same source, using search, semantic search, recommendations, and models that sit below a generative answering layer that puts it all together in one nice, cohesive, easy answer. And then in the bottom right-hand corner, box number five: you are performing analytics on all of this. You close the loop, and every single click, search, and input, what worked and what didn't, feeds right back into that relevance model.
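The grounded, permission-aware retrieve-then-generate step just described can be sketched as follows. The chunks, the word-overlap scoring, and the templated "answer" are illustrative stand-ins; a real system would pass the retrieved chunk to an LLM with an instruction to answer only from it:

```python
# Hedged sketch of RAG with grounding and citation: retrieve only chunks
# the asker is allowed to see, answer solely from what was retrieved,
# and cite the source. Refuse when nothing relevant is visible.

CHUNKS = [
    {"text": "Contractors receive two weeks of PTO per year.",
     "source": "Contractor handbook", "allowed": {"contractor"}},
    {"text": "Full-time employees receive six weeks of PTO per year.",
     "source": "Employee handbook", "allowed": {"employee"}},
]

def retrieve(question: str, role: str):
    """Return the best visible chunk for this role, or None."""
    words = set(question.lower().split())
    visible = [c for c in CHUNKS if role in c["allowed"]]
    return max(visible,
               key=lambda c: len(words & set(c["text"].lower().split())),
               default=None)

def answer(question: str, role: str) -> str:
    hit = retrieve(question, role)
    if hit is None:
        return "No grounded answer found."  # refuse rather than hallucinate
    # A real system would hand `hit` to an LLM with "answer only from this".
    return f"{hit['text']} (source: {hit['source']})"

print(answer("how much pto do i get", "employee"))
```

Note how the same question yields different answers for an employee and a contractor, which is exactly the two-weeks-versus-six-weeks PTO point from the discussion, and how every answer carries its citation.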
So if two people search for something and they're both successful, then when a third person in a similar situation comes in to search for a similar topic, we know what has worked successfully before, and we can start to generate that answer. Now, how does that all come together? It's what we view as the unified relevant experience here at Coveo: one intent box. And that's different for a lot of organizations, because we've been so used to a single search box, or a chat box, or all of these different boxes. But when it comes down to it, it's: tell us what you're trying to do, and we will use some combination of all of the callouts you see going down the page to get you to the right place. It could be a generated answer, abstracts and steps, with the source content cited. Okay, I've got the answer here; now I need to do a little more reading, so I'm going to click in and read the content that way. It's also helping the customer know what they don't know. Here's the question you didn't think to ask, not because you're not a smart customer, but because you just didn't know yet. And thinking about that, it's using a different application of a large language model, called smart snippets: here's what other people have been asking in that same situation. The example I always think of is someone at a financial services institution. How do I start a college savings account? What do I need to be saving for college for my child? Yeah, here's all of our information on a 529, but you also ought to ask: how do I start a home budget? How do I do budgeting at home to fund this account? Most people don't think about that yet.
And if you can prompt them, then you're drawing that customer in even further, engaging them digitally even further. To the comment that came into the Q&A: can it be contextual? Yes. Where we're going with this is, if you ask a question like, what's the difference between a personal loan and a commercial loan, you're going to get a generated answer telling you the difference. Then if you ask, what are the rates on those things, it's going to know you mean the rates on a personal loan versus a commercial loan. It's going to keep the context and thread of that conversation going, while also providing relevance-ranked search results and recommendations for that customer. And Eric, I know you've been working on very similar projects. As you look at the world, this intent box and this unified experience, when do you start to see it coming to fruition, and how do you see companies utilizing it today? So, generally, what we start seeing is this intent box as a core part of some core experiences. But one thing I'll say, because it could be a misinterpretation here: we don't want one single box across an organization; we want one functionality behind that intent box. I go and I search, it answers these questions, etcetera. But that same idea of the intent box we can bring across channels. So I might have a version of the intent box on my community, versus my home page, versus my help site, versus my chatbot or my app. And in any of those channels, we start giving back this information. But we also occasionally need the power to change relevancy within those different channels. An example: if I'm on mobile... I'm going to use a commerce example here. Right?
We had a customer that sold very large items, think boats and trailers. Nobody's going to buy a boat or a trailer from their desktop. They're going to buy it in the store, on their mobile app, when they're actually there with their truck, ready to pick up the boat or trailer. And so they can change the relevancy based on contextual signals, based on the channel someone is coming from, to make sure we're using that as further clarification and contextualization of the intent that's given. Yep. One other piece you mentioned that I want to highlight, because it speaks to a question from the chat: a very important thing to note here is the analytics capture. I'm getting data, and my business is getting data that it can use and build off of from these intent boxes, from these capabilities. That data is very valuable for machine learning and GenAI and everything we're talking about here, but it's also very valuable for humans, specifically those humans who are responsible for making sure that the content is up to date, accurate, and useful. I can have analytics showing that a user asked a question and went and opened a case right after I gave them the answer. It seems like our answer wasn't very good. So let's look at the documents that went into making that answer and highlight those for our content teams. Is this up to date? Is this accurate with the latest version of our products, our capabilities, our processes for how we work? And use that to make updates. What we found with generative AI, as it builds on top of search, is that it almost becomes a torture test for search, in a way. Right?
Because, effectively, we used to be limited by the capacity of the users who would go and ask questions and search for things, and by their expectation that they already knew how your business works, so those questions tended to be posed in a way that made sense to your business. A generative AI system is going to ask these questions in the dumbest way possible, or look for information in the dumbest way possible, and challenge a lot of the assumptions your business, which knows how things work, has made about how your content is written, and give you different angles and perspectives on it. So what we typically advise our customers as they're rolling out large language models is that this is about a fifty percent technical, fifty percent business project. There's lots of tech we're talking about here in the capabilities, but you have a lot of business folks who are going to be involved in looking at the content, validating that the answers are correct, and making sure it all looks good to them. Typically they're not changing anything on the tech side; a lot of the time they're going and improving content, or making sure their content is accurate or can easily be applied in context to a specific scenario. And so, to the bigger questions about how this works with the business, how the business gets involved, or whether this should be an IT-led scenario: my personal experience is that if it's only IT working on one of these solutions, it's going to fail, because IT doesn't understand the context of the answers that are given and why they're needed, whereas the business does.
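The analytics feedback loop Eric describes, flagging the documents behind a generated answer when the user opens a case anyway, could be sketched like this. The event shapes, field names, and the ten-minute window are all assumptions for illustration, not a real analytics schema:

```python
# Illustrative sketch of closing the content loop: if a user opens a case
# shortly after being shown a generated answer, the answer failed to
# deflect, so flag its source documents for a content-team review.

FLAG_WINDOW_SECONDS = 600  # assumed "shortly after" window

def flag_stale_content(events: list[dict]) -> set[str]:
    """events: time-ordered dicts with 'user', 'type', 'time' (epoch seconds);
    'answer_shown' events also carry a 'sources' list. Returns titles to review."""
    last_answer = {}   # user -> (time answer was shown, its source docs)
    flagged = set()
    for e in events:
        if e["type"] == "answer_shown":
            last_answer[e["user"]] = (e["time"], e["sources"])
        elif e["type"] == "case_opened" and e["user"] in last_answer:
            t, sources = last_answer[e["user"]]
            if e["time"] - t <= FLAG_WINDOW_SECONDS:
                flagged.update(sources)  # the answer did not deflect the case
    return flagged

events = [
    {"user": "u1", "type": "answer_shown", "time": 100,
     "sources": ["Setup guide v1"]},
    {"user": "u1", "type": "case_opened", "time": 400},
]
print(flag_stale_content(events))
```

The output of a loop like this is a worklist for the business side, which is one concrete way the fifty-percent-business half of the project shows up in practice.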
And so, to the point you mentioned before, Devin, it needs to be a partnership between both sides, to make this valid both in hitting the KPIs the business needs and in making sure that the answers given are applicable to what an end user expects, in a way your tech teams may not understand. Yeah, and building on that: how do we prepare our organization successfully if we don't have one hundred percent of our content mapped out? You don't use a hundred percent of your content. You start small. A generated answer is not going to be the only solution we're using; it's not going to be the only way to return results for a customer. What we typically see right now are organizations that are using between ten and twenty percent of their content to generate answers, and answering somewhere between thirty and forty percent of the questions that come in with a generated answer. So it doesn't replace everything. It's not a new thing that's going to replace everything else we're seeing on the page here. It is a tool we can use, and one that will be very powerful in deflecting cases and questions from rising to the level of a human-assisted interaction. And even then, every incremental gain counts. We were talking with a large financial services firm that said that, at a minimum, just in the early tests and POCs they're doing, they're seeing a five percent increase in call deflection, and that's saving millions of dollars. And you use that to fund the next project, the next iteration. So you can generate ROI all along the path as you go; it doesn't need to be one big-bang project. You start there, to your point, with the analytics. You use that to build. Hey, this wasn't right. What do we need to do to fix this?
Hey, these are the questions coming in for which we weren't able to generate an answer. Do we have content that can help answer them? Now let's continue to build that pipeline, build that index for generative answering, while returning search results for other questions and helping to guide people along the path. So it doesn't feel like, Bob, what you called in the chat the idiot bots that cause stress, the ones that are just rules-based: "I can help you with these four things." "I've got something else." "Okay, I can help you with these four things. Let's start over." Just hoping that your issue suddenly changes. That's not the world we're going to live in. Though something we're talking about here is a potential risk that is of our own making. You mitigate hallucination risks and security risks by practicing a RAG approach, but there's a risk of our own making, and that is a business strategy risk: GenAI cannot become a separate and siloed interaction channel. We're going to have a couple of minutes for questions, so if you have additional ones, pop them in the Q&A. But what I'm looking at here, this siloed interaction, is a story we've all seen before: the fractured digital experience that happens when we make investments thinking one new technology is going to replace everything else, or when we make investments in silos. For most companies and their digital experiences, we've got one, two, three, maybe four search engines powering different parts of the digital experience and returning inconsistent results. And then, totally separate from that, we've got a Q&A widget, or a chat, or a chatbot, each with its own UX and UI design. Adding the GenAI bot on top of that is going to deliver the same result: it's doomed to fail. And that is where all of these things come together into this unified experience.
This unified intent box says, tell us what you're trying to do, and we will get you along that path. And so I'll end here by looking at one company that has rolled this out, and they are fully in production with us here at Coveo: a company called Xero. Xero is a firm that makes accounting software. What they found was that in just six weeks of having this generative answering capability on their website where people would submit cases, they increased self-service resolution by twenty percent. And what they attribute that to is faster answers. Customers were thinking, I might have to look through all this documentation. Oh wait, here's an answer. Not just a document or a search result: an answer to my question. And you can see in the purple diamond on the bottom right, they also observed a forty percent reduction in time to resolution. People are getting things done faster. Using a RAG approach, they knew it was secure and accurate, so they were giving the right results to customers. And by citing where their answers came from, they boosted customer trust. Here's the answer, here's where it came from, here's the citation, and that allowed further discovery. Okay, so this seems legit to me, not just some bot that generated an answer from who knows where. And if you want to explore more about Xero, we've got additional information and videos you can watch on what they've been doing and how. But suffice it to say, just six weeks from starting, they returned far more than their ROI, because they found customers loving this, and they're continuing to build on it. We've got about six organizations in production with Coveo right now, and another dozen or so who are going to be live very, very soon.
And so we'll continue to tell these stories. If you want to learn more, please ask us about it. We'll check the Q&A and chat here for any additional questions. But Eric, any final thoughts from you as we start to wrap up? No major thoughts, just a summary: in my mind, your customers aren't getting any more patient. Your customers, your users, your members, everyone is becoming more and more of the digital generation, where we expect what I'll call app-level experiences. Customer experience has become paramount, and it's really a competitive differentiator. Generative AI, everyone has seen it now, or heard about it, or tried it, and in a lot of the organizations I'm working with it's becoming one of those features where people ask, why don't you have this? I had a kid recently, and even the app to track the baby drinking or eating has a GenAI bot in it. It becomes a feature that everyone has to have. And in many cases, to be quite honest, these apps aren't actually using it for anything that efficient or effective. But when it comes to customer service, time and time again we see a huge amount of effectiveness driven by this approach, because you're doing work that has to be done by someone, and oftentimes that someone is your customer. You're essentially lifting that work off your customer's shoulders and doing it for them, which is generally appreciated, in the form of loyalty to your brand, repeat business, reduced time spent actually talking with you, lower cost for your internal organization, every metric you can see along that chain. Yeah, that's it.
The rapid mainstream adoption of this technology by consumers is showing us that we've all got to make moves on incorporating generative AI into our service businesses in 2024. And with that, we've got about thirty seconds left, so fantastic timing. Thank you all for joining us today. Thank you, Eric, for the conversation. Look forward to talking to you all soon. Have a great day.

Making Generative AI a Reality in 2024

Generative AI holds enormous potential to improve business outcomes in cost, customer experience, and agent experience, and many companies plan to deploy these capabilities in 2024. But not all GenAI is the same, and service organizations must align generative solutions with critical business problems to create value.

In this on-demand webinar, Coveo and Perficient dive into the following topics:

  • Hear lessons learned from companies that have successfully deployed GenAI.
  • Learn a framework for aligning different GenAI solutions with service KPIs.
  • See best practices from Xero's generative answering.
Devin Poole
Senior Product Marketing Manager, Coveo
Eric Immermann
Practice Director, Search and Retrieval, Perficient