Hi, everyone. Thank you so much for joining our webinar, Scaling Service in the Era of ChatGPT. My name is Bonnie Chase, and I'll be your moderator today. We have a great panel today featuring Tom Sweeney, the founder and CEO of ServiceXRG, and Coveo's own VP of Product, Neil Kostecki. Now before we get started, I have a couple of housekeeping items to cover. First, everyone is in listen-only mode. However, we do want to hear from you during today's presentation, so we'll be answering questions at the end of this session. Please feel free to send those questions along using the Q&A section at the bottom of your screen. Today's webinar is being recorded, and you will receive the presentation within twenty-four hours of the conclusion of the event. With that, let's get started. So today, we're going to be talking about scaling service in the era of ChatGPT. And really, we're gonna break this up into three parts of the conversation. First is the promise of generative AI and large language models, then we'll talk about the implications of next-gen intelligence, and then cover some short- and long-term actions that you can take as you're considering including this type of technology in your business. Now by now, I'm sure we've all heard and read many articles about the promise of technologies like Bard and ChatGPT, but it's important to take a step back and level set here. What exactly are we talking about? So we've outlined what generative AI is for you on the page here. But really, when we think about ChatGPT, this is NLP gone mainstream. It's large language models designed to generate text and create fluent conversations. There are a couple of definitions on the right side of the slide here, because we'll be talking about LLMs and generative AI throughout the session. Now there are some concerns that we have with this type of technology.
You know, there are results that look good, but it ends up with hallucinations, and those hallucinations are just things that are simply not true. There are multiple biases and no reasoning abilities that you need to take into consideration. And, of course, finite knowledge and training costs; we need to ensure data freshness. So these are some of the high-level takeaways and concerns with ChatGPT and this type of technology. But I think what's more interesting than the technology is really how people are hoping to use this. So, Tom, I'm going to start with you here. You've been speaking with many service leaders on this topic over the past few months. What are leaders most excited about, and what are some of the use cases that we can leverage this type of technology for? Yeah. Great. You know, the beautiful thing about ChatGPT is that it gives us a chance to try it, to actually experience what a large language model and generative capabilities can do. It's not just academic write-ups. It's: let's try it. Let's see. So being able to type a query and get a structured response that is generated almost like a human created it, you know, I think the first use case is, hey, can we use this for our knowledge base? The ability to actually engage in knowledge exchange, to ask questions, and to get the answer, and not just a list of answers, is probably the most intriguing initial use case for service leaders. And what's giving them pause on this technology? Well, you know, it's access. How to implement it. It's not as if teams have data analysts and engineers and machine learning experts that can actually define and integrate their content into these language models. This is cutting-edge technology.
It's been around for a long time, but it's not just indexing documents and throwing them into this model. It requires a lot of training to build these models so that they're tuned to your specific domain. So it's a matter of effort, at least initially, and accessibility of tools. Yeah. That makes sense. And, Neil, on your side, as you're working with the technology, what are some of the use cases that you're coming across? And then, after we talk about use cases, I'd be curious to know what you're seeing and how LLMs are being used today. Yeah. So, I mean, there are a lot of different uses. What we're really talking about, and you hit it right on the head, Tom, is that it's exciting and it's accessible. Right? These models have existed for quite some time, and this is kind of the first opportunity where I think everybody in my friend circle, my family circle, just about everybody I know, knows something about ChatGPT and is now talking about LLMs. They don't even understand the technology behind it, but they've been able to touch it and get hands-on with it. And so that makes it real. There are lots of use cases, obviously. Like you said, generating answers that are well put together across multiple contexts and pieces of information. There's the ability to help people get started with an initial draft of content that they can then iterate on, or even ask for updates to, in different ways of writing it. It's pretty interesting. And from a search perspective, obviously, it's very interesting for us as well. As for how models are being used today, like large language models, we've been using models in production since 2020.
One is what we call smart snippets, which is really about extracting answers out of a document and returning those as a snippet. You're very familiar with seeing that kind of experience in a Google-like experience. And then classification, where we're actually using large language models to classify, based on previous cases, certain field values to help remove some of the effort that's required of a customer as they're submitting a case to support. So these are just a couple of examples, but large language models have, like you said, been around for a long time. There are lots of applications, and I think the important part is that there are specific applications for specific types of models. Yeah. That makes sense. So what I'm hearing with this type of technology is there's a use case for creating knowledge, there's a use case for providing answers, and you're using it from a classification perspective. And so, really, when we think about this technology, it's: what are those large sets of data, and what are you trying to do with that data? Are you trying to summarize a bunch of content into something simple? Are you trying to make sense of information and provide an answer? So I think this really leads to, you know, we're talking about what ChatGPT is and how we can use it. Before you think of implementing it, think about the use cases that you wanna use it for, and then you can really start planning on how to meet them. So, Tom, when we think about timelines for productivity, when do service leaders hope to see returns? What are those returns? Yep. Indulge me for a moment, but it occurs to me that when we think about ChatGPT, this isn't one application.
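To make the two features Neil mentions concrete, here's a toy sketch: smart snippets pull the best-matching passage out of a document, and case classification suggests a field value from similar past cases. This is illustrative only, not Coveo's implementation; real systems use large language models, and plain term overlap stands in here, with all names and data invented for the example.

```python
import re

def toks(text):
    """Lowercase word tokens (toy stand-in for real text understanding)."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def best_snippet(question, document):
    """Smart-snippet sketch: return the paragraph that best matches the question."""
    paragraphs = [p.strip() for p in document.split("\n") if p.strip()]
    return max(paragraphs, key=lambda p: len(toks(question) & toks(p)))

def classify_case(new_case, past_cases):
    """Classification sketch: copy the label of the most similar past case."""
    nearest = max(past_cases, key=lambda c: len(toks(new_case) & toks(c["text"])))
    return nearest["label"]

doc = ("Install the agent from the admin page.\n"
       "To reset your password, open Settings and choose Reset.")
print(best_snippet("how do I reset my password", doc))

past = [{"text": "charged twice on my invoice", "label": "Billing"},
        {"text": "app crashes on startup", "label": "Bug"}]
print(classify_case("question about an invoice charge", past))
```

The point of the sketch is the shape of the two tasks: one extracts an existing passage rather than generating text, and the other pre-fills a case field so the customer doesn't have to.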
Large language models, generative AI, these are the foundational elements that will enable what comes next. And whether we're gonna see actual commercialization of ChatGPT, or the underlying GPT large language model applied to create customer-specific, domain-specific applications, is likely longer term. But there's something that is important about these technologies. This is the first time that we have an analytical tool that can work with and understand the unstructured world. We're so used to using databases to pull data and run reports. And all of a sudden, Neil, the classification, the extraction of passages, the formulation of the answer in response to a customer question, I mean, these are big, profound, and sweeping changes to how the support paradigm is even gonna work, and to the self-help tools of today. So when you ask about a timeline, I'd like to optimistically say that already there are applications, there are experiments, there are things that people are doing today that are incorporating the GPT API. Right. So we may see, and we will see, applications come out soon. But we're also talking about decades of producing content, sharing it with customers, sharing it with agents, and those behaviors and all the things that actually change the way that we use this really impressive technology are gonna take time. Not just months, not just quarters. I mean, we might be talking quite a long time, and we need to talk about what that transformation might look like. So the timeline is variable. The more sophisticated applications will take a little longer. Yeah. And we're not at the start of the timeline. Right? We're in the midst of this timeline, because it's not new, but it is becoming more accessible, like you said, Neil.
So let's take a look at one of the biggest reasons generative AI has become such a hyped technology, and it gets to the heart of one of service's most persistent problems. Customers want answers. Organizations have been investing in self-help for decades, ever since the first company web pages went up. Customers have longed to find answers to problems or execute changes on their own, and this kind of presented a best-of-both-worlds scenario for support leaders. The thing customers wanted happens to be what's most cost-effective for the company. So while self-help has certainly had a huge impact on our cost base, the nirvana that many CFOs have been dreaming of, where customers never have to call, has not been realized. Right? So, Tom, we've got some data here that you've been collecting on what you call the deflection gap. Can you walk us through what we're looking at on the page? Yeah. So we've been monitoring practices and performance of self-help initiatives for quite a while. And the good news here is, as an industry, we're doing a really good job of getting people to at least initially attempt to self-resolve. Over seventy percent of support demand is initially serviced through some sort of a self-help channel. But on average, only about twenty-two percent is getting fully resolved. So there's a forty-nine percent gap. Why are people trying it, and why isn't it solving more issues? There are probably three reasons. Certainly one: if somebody asks a question of a knowledge base and there's no content, then they're not gonna get an answer. But it's also that we're creating and authoring content, and we're trying to serve multiple audiences. Some audiences may be very technical. Some users may be mixed. So it could be a matter of comprehension: I found something.
It looks like it's right, but I don't really know if it applies to my use case and the way I need to apply it, or I just simply don't understand the terminology. And then the third piece is: well, jeez, I found the right answer. I'm pretty sure that's the right one, but I wanna be sure. I want the confidence. So I want validation from a human. So, frankly, when I looked at sort of working with ChatGPT, I was thinking, it can address these issues. It can help us close this gap. Mhmm. And, Neil, we've seen some pretty big gains when it comes to self-service success. So what are some of the most helpful things that support leaders have been utilizing to encourage this in a way that doesn't require human intervention? Yeah. Great question. I mean, first of all, you really need to understand your customers and look at what the problem is that you're trying to solve. The very first thing is, technology, as exciting as it is, isn't always the answer on its own. So make sure you understand what you are trying to solve for your customers. And then I would say there are a few key things. First of all, hire smart people that are knowledge experts, that are obsessed with really understanding the problem and understanding how to solve it. They don't need to necessarily have all the knowledge, but they're able to break it down and think it through. And this is gonna help you to then establish a knowledge culture, which is really about sharing expertise, sharing knowledge, and creating the content on which a technology like this can then help you capitalize.
Like, if you don't have that basis, if you don't have the people that understand the problem, that are writing it down and giving you the context that a model can then leverage to help provide those answers, that's the first thing. And then I think also having a great digital self-service strategy. And in order to do that, you need to have support all in one place. You need to be proactive. It needs to be contextual. You need to think about how you're actually gonna serve your customers and where you're gonna serve them. When I look at digital experience leaders that are providing a service solution that is top-notch, they're really firing on all cylinders on these things. Great. If we go back to... I'm sorry. Go ahead, Tom. I was gonna say, if we go back to your timeline, the digital experience is something that may be indicative of where we're gonna see people progress and advance. If you think about a digital experience today, there are examples where you can go to a site that has a beautifully federated search that spans all the content repositories that exist, and also go to digital properties that have four search boxes: one for the learning curriculum, one for the community, one for the support knowledge base, and then one for the corporate site. I mean, these are things that structurally we need to fix. And I'm not saying that large language models and generative AI are the solution, but they're gonna be a catalyst to pushing us that way. And there's one more thing that's kind of interesting about this comprehension gap. A language model can actually understand, wow, it can infer, perhaps, from the words that are being asked: this person is not an expert.
So I'm gonna structure a response that actually is going to be more novice-friendly than one for somebody who clearly knows what they're talking about. So there are really interesting things, not only in the capturing and collection of information, but in the way that it gets created and then presented back to the user. Those are the exciting developments that I think are possible with this technology. Mhmm. Absolutely. One last thing I'd just add on the end of that: what's really cool is, even if it hasn't inferred that, you can just go back to it and say, you know, give that to me a little simpler, or add a little more detail. These are the kinds of things that tend to come from this kind of experience. Mhmm. And you don't have to program the chatbot to be able to converse with the customer. The language model actually enables that dialogue. Yeah. Exactly. Now it's not just self-service that's being impacted in this way, but the way that we seek information as a whole. But let's talk about this paradigm shift underway for support organizations. So one thing that's become abundantly clear as humans have interacted with the good old search box is that people often don't know what to ask or how to ask it. So let's turn our conversation over to the fundamental shift at play here. Customers have gone from seeking information to seeking answers. It's a subtle difference, but there is a sweet science to giving someone an answer, one that satisfies them and doesn't leave them picking up the phone to make sure that they have the right answer. Right? So looking at this slide, at the top, there was some research conducted by Gartner a few years back that found that customers who fully resolve their questions using a self-service experience shared three common factors. First is clarity of information. So it's making sure that they can understand it.
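The follow-up loop Neil describes ("give that to me a little simpler") works because the conversation history, including the prior answer, is sent back to the model on each turn. Here's a minimal sketch of that mechanic; `call_llm` is a placeholder for a real hosted model call, and its behavior here is invented for illustration.

```python
def call_llm(messages):
    # Placeholder for a real model call: echo a "simplified" form of the
    # last assistant answer so the turn-taking mechanic is visible.
    last_answer = next(m["content"] for m in reversed(messages)
                       if m["role"] == "assistant")
    return "Simpler: " + last_answer

def follow_up(history, request):
    """Append a user follow-up, get a rewrite, and extend the history."""
    history.append({"role": "user", "content": request})
    reply = call_llm(history)  # the model sees the whole conversation
    history.append({"role": "assistant", "content": reply})
    return reply

history = [
    {"role": "user", "content": "How do I rotate my API key?"},
    {"role": "assistant", "content": "Open Settings > Keys and click Rotate."},
]
print(follow_up(history, "Give me that a little simpler."))
```

The key design point is that no chatbot flow had to be programmed: the refinement is just another message appended to the running context.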
Second is credibility of the source of information, and third is confirmation that it was the best thing to solve their problem. And together, these three factors bubble up to give customers a feeling of confidence. And so when companies want to give answers to customers, ones that they'll believe, we need to focus on building customer confidence. Now, Neil, I know that you've been focusing on understanding how support leaders are hoping to use generative AI and what customers need. So what are going to be the core tenets of getting customers to be confident in the answers they're getting with this technology? Well, I mean, you've got it here, really. Right? Number one, you need to show your work. And what do we mean by that? It's easy to go to ChatGPT, ask a question, and get an answer that looks extremely accurate and confident, and then you could just go and share it with the masses, post it on your Facebook page or your LinkedIn or wherever. But where did that answer come from, and are all of the points validated? And I've seen some crazy things. I've seen articles being written just to basically take a stance. I forget the publication, but there are references in other articles that are pointing to pieces on these online news sources that just don't exist. The person is actually there. They're a journalist. They're writing articles for the publication, but what has actually been referenced is a piece of work that has never existed. And so this is the kind of thing where you need to show your work. You need to provide the attribution. You need to point the user back to where this came from so that they can go look. And if they want to, like myself, they wanna understand the inner workings behind it.
Like, it's great to get an answer, but I wanna know how it was assembled. Can I go and find a little more context behind it? That's number one. Number two is providing guidance. How do you use this thing? Like, again, I mentioned earlier, being able to ask for more detail or less detail. There is a little bit of a learning curve: how do I interact with this thing? It's something entirely new. So, giving guidance so that someone understands how they interact with it. And the example I gave earlier around providing guidance so that someone can fill in a form, giving them visual cues, those sorts of things. And then, I can't say it enough, but content is the most important thing. It's what all of this sits on top of. Imagine a large language model with no information about what the question is about. And this is the challenge that we're trying to tackle: our clients have their enterprise content that knows about their particular products and their challenges and solutions related to the services that they offer. And there's even security and things around that. But content is the basis. So it's really important to, again, show your work, provide guidance on how to interact, and have the content to be able to provide these experiences on. Okay. And, Tom, how is this going to change the overall approach that companies are taking towards search? Well, if we're moving to a paradigm where we're allowing a customer to ask a question and giving them the answer, the best answer, then that might initially boost the confidence that, hey, they knew what I asked them. They gave me an answer instead of a list of documents that might contain the answer.
But, you know, I really love this "show your work" point in particular, because: where did it come from? I mean, we're talking about potentially creating language models that now include a whole lot of information beyond just the curated knowledge base. What if we start putting in user-generated content? I mean, we need to not only train a language model to give the best possible answer from the content we have. We have to have confidence in the quality of the content, the source of the content, in order to instill confidence in customers that what they get is the answer. No hallucination. No made-up stuff. We need credibility. Yeah. Okay. Well, when we get excited about the promise of these generative AI capabilities, one of the big questions for support becomes: will we need our support agents anymore? And, you know, a lot has been made about the death of the support agent, with generative AI being kind of the latest technology that's holding the knife here. But believe it or not, we've been here before. Many years ago, websites were seen as agent killers. Turns out, they only increased contact volumes as customers called in for password resets and for help using the site. And then we had the great chatbot. They were supposed to be able to answer any question possible and make agents irrelevant. And, you know, obviously, I think we all know how that worked out. So what does the future of the support agent look like, and how will this technology support them? I'll start with you, Tom. How do you see the role of the support agent changing as a result of generative AI? Yeah. Well, this point about augmentation, not automation, not replacement of the actual role, is important, because the same way that customers wanna access the accrued knowledge of the organization, so do agents. Somebody always knows something that you don't.
So if we can get that digitized into our language model, then we have this amazing tool where we can spread the knowledge. But no language model, no matter how good it is, is gonna be fresh enough or up-to-date enough to know all the answers. So the unknown, the unfamiliar issues are gonna have to be solved by somebody. Somebody's gonna have to do that, and it's going to be the agents. And it may not only be agents, but we have to have domain experts that can train language models, to actually be able to teach them what the language means in our domain. So sure, we're gonna have efficiencies, but it doesn't necessarily mean that we're gonna lose forty percent of our staff. It just means they're gonna have to do different, smarter things. Mhmm. Yeah. And it sounds like the hope or desire with this technology is that it's a magic bullet that we can kind of plug in and be hands-off, but really, it's not. There's a lot of actual human work involved in making this type of technology successful. Neil, what do you think the biggest benefits to the support agents will be? Yeah. I mean, it's really, like you mentioned, Tom, it's allowing them to do different, better work. Who wants to spend the time on a password reset over and over and over again? So, giving them the ability to focus on more challenging problems, more complex problems, and using the AI not as a crutch, but actually as an enabler. So it's really giving them insights and summarizing things and helping them to get to a better first draft or a better understanding quicker.
And I think you're always gonna need... I don't think everyone is really ready to just talk to an AI model, or to a chatbot, or even, god forbid, a voice version where you're actually speaking to the model through a voice channel. I mean, speaking to a real person who you can build trust with, a relationship with, and who represents the company, I think is still something that's gonna be around. You have to have a digital channel. You need to have channels in all areas, and you need to be able to offer a great experience through all of those. And there's always gonna be a channel to call in and speak to someone: urgent issues, critical issues, issues where there's lots of nuance that just simply can't be addressed by existing knowledge. And so I think, yeah, agents aren't going anywhere. Their role is gonna become even more important. I think it's gonna give them the time back, also, to actually focus on creating content, and not worrying so much about it being written absolutely perfectly, because the model is actually the thing that's gonna help assemble that into more well-written content. It's really just the knowledge, the actual expertise about the problem, the services, and things like that, that'll make it a lot easier. Right? I've heard a lot of agents talk about not having time to create content because they're just too busy being on the phone and talking with the client. So now, really giving them some ability to leverage this to get to that better knowledge solution. Yeah. Give them a jump start. Yeah. And as you're talking, there are a couple of use cases that I can think of where it would be really cool to be able to use this technology.
So, if they're practicing knowledge-centered service, for example, that's creating content in the flow of their work. And so you can start an article that's already prepopulated with some helpful information, and you're just cleaning it up. And as you're troubleshooting, you already have something to work with that can make that process more efficient. And even from a swarming perspective, if you're doing intelligent swarming or case swarming and you're working with groups of people, and maybe you all have different pieces of content, can all that content and the discussions that happen in that forum be summarized into an article? So these are just some things that come to mind for me. Anything else on this point, Tom? Well, one point: we assume that there are efficiency gains to be made from the automation, but for the actual task of resolving issues, the core metrics that we measure, time to resolve and first-contact resolution, things like that, are gonna go up. It's going to take longer and require more effort by smart people. So the cost of assisted support per case might actually go up. Maybe not overall, but there are some interesting things that are gonna happen that are not just "we're gonna save money." We should be looking at this as: we're gonna deliver a higher-value experience for our customers, and we're gonna have deeper, longer-lasting relationships, so we can retain and grow those relationships. So this is a long game. Yeah. And I think that's a really great point, Tom, because a lot of times, because support has traditionally been seen as a cost center, it's been all about what we can do to cut costs. But sometimes adding that overall value can actually cut costs in the long run, because those customers are sticking around longer.
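The "jump start" idea above, pre-populating a draft article from a resolved case or swarm thread so the agent only edits rather than writes, can be sketched like this. Everything here is hypothetical: `summarize` stands in for an LLM summarization call, and the case fields and data are invented for the example.

```python
def summarize(texts):
    # Placeholder for an LLM summarization call: keep the first sentence
    # of each thread message as the "resolution" summary.
    return " ".join(t.split(".")[0].strip() + "." for t in texts)

def draft_article(case):
    """Assemble a knowledge-article draft from a resolved support case."""
    return {
        "title": case["subject"],
        "problem": case["description"],
        "resolution": summarize(case["thread"]),
        "status": "draft",  # an agent still reviews before publishing
    }

case = {
    "subject": "App fails to sync",
    "description": "Customer reports sync errors after the 4.2 update.",
    "thread": ["Cleared the cache. That fixed the first error.",
               "Re-authenticated the account. Sync now completes."],
}
article = draft_article(case)
print(article["resolution"])
```

Note the `"draft"` status: the flow keeps a human in the loop, which matches the augmentation-not-replacement point made above.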
They're happier. They're referring more clients. They're coming back and buying more. So really think about not just the cost perspective, but the value perspective as well. Hear, hear. Alright. So generative AI isn't necessarily going to take over the world, but it's very clear that it will become a critical part of the support organization in the near future. And the faster we can make our customers' pain go away, the better the experience will be for them and for us. Now, just to take a look at the way that we've been addressing the path toward a generative future here at Coveo: Neil, I know you're working on some things with this technology now, so I'll go ahead and turn it over to you so you can talk about what this means for us. Absolutely. So, like I mentioned earlier, we've been using large language models in production since 2020. Case classification, smart snippets: these features are available, and our customers are already seeing a lot of value and great success with them. And so how do we integrate search and question answering together? How do we bring this generative capability into our experience? Well, again, we're not using this live in prod just yet. We've communicated our plans, and there is a beta program available. We've had a number of customers signing up for that. But it's really bringing both of these things together into a single search box. Providing an experience that is built on top of, what you see here is, secured connectivity, which is able to bring content from all of the different sources into a secured unified index, and then actually pulling out the parts, the paragraphs, from that content, and putting those into the index in a way that can then be leveraged by a large language model to assemble an answer based on your secured enterprise content.
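The pipeline Neil describes, index the content, pull out the relevant paragraphs, and have a language model assemble an answer, can be sketched roughly as follows. This is a toy retrieve-then-generate outline, not Coveo's actual product: `generate` stands in for a real hosted LLM call, relevance is faked with term overlap, and the index entries are invented. The `sources` field reflects the "show your work" attribution discussed earlier.

```python
import re

def toks(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, index, k=2):
    """Rank indexed paragraphs by term overlap (toy relevance scoring)."""
    ranked = sorted(index, key=lambda d: len(toks(query) & toks(d["text"])),
                    reverse=True)
    return ranked[:k]

def generate(query, passages):
    # Placeholder for an LLM call that writes an answer grounded in the
    # retrieved passages; here we just stitch the passages together.
    return " ".join(p["text"] for p in passages)

def answer(query, index):
    passages = retrieve(query, index)
    return {"answer": generate(query, passages),
            "sources": [p["source"] for p in passages]}  # attribution

index = [
    {"source": "kb/reset.html", "text": "Reset your password from Settings."},
    {"source": "docs/install.html", "text": "Install the client from the portal."},
]
result = answer("how do I reset my password", index)
print(result["answer"])
print(result["sources"])
```

Because the model is only handed paragraphs from the secured index, the answer stays grounded in enterprise content, and the returned sources let the UI point the user back to where it came from.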
So the point is, we're not gonna throw away everything that we've already built and that we know about providing this kind of experience, this unified search experience. And I wanted to mention this earlier, but I didn't wanna jump in: we were talking about showing your work, and things that applied in the previous space still apply. So if you think of when you go to the search page in Google right now and you perform a search, your confidence in what you're going to click on is influenced by the showing of work. Where is this coming from? What site? What domain? What's the source? So, again, this all applies in a generative world as well. So, again, building on top, bringing this together, and also considering the point that generative isn't going to solve every single problem. When you have a hammer, everything looks like a nail. And in this case, if someone comes in and is asking for a user guide, are you going to generate an answer to explain to them what a user guide is and where to find it, or are you just gonna give them the link to the user guide? If they ask a question that's answered by a specific phrase inside one document, do you need to go and leverage a generative model to create that and write it back up? Or can you just, sub-second, show them that answer? So I think there's a lot of opportunity here. We're working on this, and we will be very excited to get this into our customers' hands this year, and we'll be phasing that beta program into a production release. So, super excited. More to come on this.
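The user-guide example above amounts to a routing decision: not every query deserves a generated answer. A minimal sketch of that idea follows; the heuristics and keyword list are purely illustrative, not any vendor's actual logic.

```python
def route(query, has_exact_snippet):
    """Pick the cheapest experience that fully answers the query."""
    q = query.lower()
    if any(word in q for word in ("guide", "manual", "documentation")):
        return "link"      # navigational intent: just return the document link
    if has_exact_snippet:
        return "snippet"   # one passage answers it: show it sub-second
    return "generate"      # otherwise assemble a generative answer

print(route("where is the user guide", False))                  # "link"
print(route("how do I reset my password", True))                # "snippet"
print(route("compare deployment options for my setup", False))  # "generate"
```

The design choice is the same one Neil makes: generative answering sits alongside search and snippets as one option among several, used only when the simpler responses don't suffice.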
But yeah, I think it's really about bringing this into our current suite of capabilities in a way that isn't intrusive and that really amplifies the experience. So we see generative answering not necessarily as a replacement for search, but as something that can be used in conjunction with search to make things easier for the user. Now, we talked about some of those initial concerns at the beginning of the session, around hallucinations and around security. We've seen some crazy stories about private information and data being leaked through the use of ChatGPT-type technology. How does this approach protect the organization from some of those risks? Yeah. We see it every day. I think just yesterday I saw an article about an engineer, I think it was at Samsung. If you're interacting directly with the ChatGPT OpenAI model, they've been very clear that what you put in there goes into an open system and is used to retrain. So there's, I think, a lot of education and awareness needed. This is new technology, and it's no different from any other technology that's come before: there is risk involved, there are concerns you need to be aware of, and you need to educate yourself and know what you're getting into. It's like if you had unleashed the Internet back in the day and just put your entire life onto it, all your private documents, because, hey, I can put this stuff on here. You need to understand the technology, what it's for, and what risks exist. So I think this is something we'll continue to hear about. And as Tom mentioned, it's going to take time. It's going to take time to land on the really good use cases and applications that are going to persist and provide a lot of value.
So yeah, this is something we'll continue to hear about. As far as our approach, again, we're using large language models to assemble the answer, and we're relying on the corporate enterprise content that you're producing in your knowledge base, your product documentation, your help content. So we're reducing the risk by leveraging all of your existing content. And, Tom, what jumps out to you about the approach here, and how do you see it benefiting support and IT leaders? Well, it's a wrapper. It's a wrapper around the technology. Large language models, ChatGPT, generative AI: wow, that's really cool, let's deploy tomorrow. Well, no. We need maturity. We need an administrative wrapper. We need tools to be able to train and define a large language model. These are not things where you just go to the web, pull down some code, and bang, you've got it. So this provides that wrapper. And when we're talking about deploying an enterprise knowledge system that may contain very secure, very proprietary information, I don't think there are a lot of IT departments that are going to let you just download some code, throw it up on the corporate site, and say go at it. So this provides that; it makes it enterprise ready. Mhmm. And I think that is going to help immensely. It's probably also going to help with the timeline. There will be people who roll their own, and good for them. But this makes it accessible. Sounds great. Now, as we get close to wrapping up here, I do want to highlight these four actions that organizations can begin taking when it comes to working with generative AI. We'll take them one at a time.
And really, this first one, developing a business case, is about the fact that you shouldn't be using ChatGPT and this type of technology just to use it. You need reasons for why you're using it and how you're going to use it. So what should leaders be putting into their business case for investing in generative AI, and what outcomes should we be looking for? I'll start with you, Tom. Well, that's really the key question: what problem are you trying to solve? It's great technology, and I'm always impressed with support being on the bleeding edge of adopting new technologies, but we've got to focus on solving problems. There's so much going on. Are you trying to improve your self-service experience, your overall digital experience, or are you just trying to share content internally across the organization? And if you do that, and you apply these capabilities and practices and tools, what's the net benefit to the organization? How do we quantify that? So building the use case and then justifying it is absolutely step one. Mhmm. Now for the second tip, Neil, I'll kick it over to you. Amplifying the knowledge strategy: what does that mean practically for support leaders today? I mean, knowledge, again, I probably can't say it enough times, is the foundation of this. Right? Before my time at Coveo, I was actually involved in a knowledge program and managed it, so I'm a big believer in KCS. For the KCS folks out there, it's a great time to lean into your knowledge program, or to start one. It's really the basis of your answers. Without that knowledge culture and the content, where are your answers coming from? Understanding language isn't enough. You need the actual content to draw on.
So again, large language models are there to help assemble that content. But it's about having a knowledge practice, having a culture around sharing, and actually bringing everyone to the same level, where the knowledge base isn't just a place with an FAQ. It's the place where you evolve what you know about the problems and questions your customers are having with your products or services. And I think a lot of KCS adoption really is in technical types of organizations, but there are applications for this going beyond that, and it's still very immature in those other industries. Yes, there are regulated areas where it's a bit more challenging, and I think there's going to be some risk around assembling answers in those types of areas. But for sure, there's an opportunity to create a knowledge culture and leverage it to actually take advantage of this. That's something organizations need to think about: if you don't have content, if you don't have a knowledge base, if you don't have a practice around creating new knowledge, you're going to be stuck in time with what you have. And that's actually one of the things people have talked a lot about: the current knowledge of ChatGPT is limited in time. So having a knowledge practice where you're actively evolving, creating new content, merging content, and basically keeping it fresh, that's going to be huge. Now, Tom, the third tip here is really about listening to your customers. Where should companies be listening, and what should they be listening for? Yeah. So there are two ways to parse this in terms of listening to customers. First, what is it that they're willing to use?
Do they want a chatbot? Do they want to interact with humans? So understanding their tolerances and their acceptance of any kind of new technology or new way of engaging them is one thing to understand. But I'm also intrigued by this thought: large language models do something really well. They're able to make sense of lots of text. So we have a way of listening to customers like we never did before. When they're chatting on the community, when they're submitting feedback through our voice-of-the-customer channels, we have the ability to actually derive meaning from that in ways we probably never could before. So, listening to customers: one, if you're developing something new based on a new technology or a new way of doing business, make sure customers will accept it. Two, listen to them, because they're talking to you now, and we finally have a tool that can allow you to understand what they're saying. Great. And then finally, when we think about search, as we said, it's kind of that wrapper that can make this type of technology more enterprise ready. So, Tom, let's start with you. What exactly should people be investing in when it comes to search, and which capabilities are most important to get right first? Yeah. So I think we all need to take a step back and define the use case, so that we know what problem we're trying to solve. Then, making investments in search means we're investing in content creation. We're investing in developing a language model that we're training to understand our domain. So investing in search isn't just going out and throwing down some money for a technology. It means you're committing to creating a new way of interacting with customers through new enabling technologies, and you're putting more investment and effort into that.
Wouldn't it be nice if we could shift the entire cost structure of a support organization from lots of bodies delivering answers to lots of bodies creating, curating, and developing large language models to do most of the heavy lifting? Mhmm. And, Neil, you've been working tirelessly on Coveo's search product. What must companies get right in their search product, foundationally, in order to future-proof their digital engagement strategy? I mean, first of all, it's having that content. I know I've said that enough, but it's getting that content unified and secured, and getting the experience integrated. Tom, you mentioned there are probably lots of organizations that are going to try to just send their entire knowledge base at a model and build their own. But there are a lot of things that need to be thought through to make this enterprise ready. You want to bring this into your support portal, onto your website, into your SaaS product. How do you go about doing that and scaling at a level that's secure enough, scalable enough, and repeatable enough? These are areas you really need to think about. And you need to think about the user experience. If you get all of that sorted and integrated into these systems, do you then need to manage each of those experiences individually? How do they work together? Do they work together? Does an individual get a similar experience when they go to each of these different touchpoints? It's super important to think about how your customers want to interact with you, to make sure you're evolving it, and to keep some consistency and build trust as you start to dig into using this kind of technology. Mhmm.
So I'm hearing there's no silver bullet here; a lot of things to take into consideration. We've got about ten minutes left to answer questions, so I do want to pull this slide up. I won't walk through it all, but there are a lot of things to consider when you're looking at LLM technology: not reinventing the wheel, performance and operational efficiency, product integration, ease of use, scientific evaluation. I'll leave this as a leave-behind. But now let's get to some questions. I'll pull up the first one: is there a way to have a threshold confidence score that would have to be met before a response is given, so the model would ask clarifying questions until it reaches that threshold? What I would say today is that we're working on that experience right now, and there are definitely a lot of parameters and options around how to use this. There are ways to prevent or reduce the amount of hallucination. The thing that's most important to us as we think about how to productize this, or anything, is always: how do we make it approachable for the person who's actually going to manage it? We're not necessarily talking about data scientists deploying this; we're not even talking about engineers. We want to make this successful for business users. So, how much ability to influence the output is enough, without putting too much effort and administration onto the person who's trying to get this thing into their support experience? I think, yes, there are definitely some options around confidence and hallucination and things like that, and as part of our beta program we'll definitely be working with customers to gather that kind of feedback and find the sweet spot for those types of things. Anything to add, Tom? Sure. Always.
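The questioner's idea, holding back an answer until a confidence threshold is met and asking a clarifying question otherwise, could be sketched like this. The threshold value, field names, and clarifying-question wording are all assumptions for illustration, not described product behavior:

```python
# Sketch of a confidence gate in front of a generated answer.
# The 0.7 threshold and the candidate fields are illustrative assumptions.

def gated_answer(candidate, threshold=0.7):
    """candidate: dict with 'answer', 'confidence', and 'ambiguous_terms'.

    Returns the answer when confidence clears the bar; otherwise returns
    a clarifying question built from the most ambiguous term.
    """
    if candidate["confidence"] >= threshold:
        return {"type": "answer", "text": candidate["answer"]}
    terms = candidate.get("ambiguous_terms") or ["your request"]
    return {"type": "clarify", "text": f"Could you say more about {terms[0]}?"}
```

In a real loop, each clarifying exchange would feed back into retrieval and re-scoring until the threshold is reached or a turn limit is hit, which matches the conversational behavior the question envisions.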
This is the essence of large language models: the training and the tuning are what differentiate them from being good to being wow. If we don't train and tune the language model correctly, it's garbage in, garbage out. And that's not just the source content; it's the way the model is designed to interpret what it has access to. So, to Neil's point, we need tools so that somebody other than PhD-level scientists can actually tune and optimize our knowledge repositories. This is a huge challenge. It's not insurmountable, but it's big. No pressure, then; just give us a shot. Yeah. The thing that gives me confidence is that we're doing it today. Right? With a minimal number of clicks, we're able to get someone to create a model that can do answering, like I mentioned with smart snippets. We're already doing this. It's not a generated answer; it's not assembled from multiple phrases across multiple documents. But we're empowering business users to create these kinds of experiences. That doesn't mean it's the exact same set of steps; it's a different model, a different application of that model. But the mindset around how to do this is definitely there. We've been through this before, and I think we're well positioned to do it again. Great. Alright, the second question: how do we know when this technology is enterprise ready? A salesman will call to tell you. Sorry, I didn't mean to jump in. But, Bonnie, this goes to your timeline. Enterprise ready is when a company says, yeah, we can deploy this; we think it's good enough.
But then we need to take a step back and say, well, we need to administer it; we need all these capabilities to tune and optimize. So there's a lot to it. And I guess we already see an example of an enterprise-ready solution that incorporates large language models today. There will be more, but I don't know the full answer to that. That sounds like a Neil question, though. Yeah. I mean, it's the foundation that's already enterprise ready. Right? Again, we're not reinventing the wheel and creating an entirely new and separate solution here. We're applying this as another tool in our belt, on top of an already enterprise-ready platform, and I think that's going to help you make that kind of decision, whether it's Coveo or any other solution. You need to be confident in the enterprise readiness of the entire solution: the connectivity, the integration. There's more to it than just the generative piece. Mhmm. Yep. And I guess my two cents would be: there's enterprise-ready technology, and there's enterprise ready within the organization. Is your organization ready to deal with this technology? So, Tom, like you were mentioning, do we have the right content, the right people, the right processes to make sure we're leveraging this technology appropriately? And then, from the technology side, there are those concerns around security and accuracy and things like that. When we have the combination of the two, I think that's the goal, right, for enterprise ready. We have another question. Tom, you're getting a shout-out from Jerry Selick. Hey, Jerry. The question is for Neil.
So, Neil, are you seeing companies deploy some kind of model internally before rolling it out to their customers, or do they just go live? I would say I've seen a bit of both. We have a production-ready answering model called smart snippets, and we have it live with a number of customers. Some customers take the approach of deploying it to their agents first. They'll start with one use case, the contact center, and basically use that as the test bed, getting feedback from the agents on how it's performing. Then they'll go live on their customer portal or website or other external properties. But it's very easy to create a model and try it out, even on a test page. And the really cool thing here is that it's not actually dependent on usage analytics. A lot of our models, say for automatic relevance tuning or query suggestions in the past, are focused on learning from what people type and how they interact with content. So you need usage before you can even get to an outcome: a predicted query or a tuned result. But with the smart snippets approach, we can build a model on your content and have those answers, and you can see them instantaneously. So you can very quickly validate the type of answers you're going to get, and that can be enough to go straight to production on your self-service site. So, a combination of both. But again, is the organization ready? Are you confident in it? It's completely within your control how you go about deploying it. Okay. Internal makes more sense, though. It's safe. Yeah, exactly. And we even have customers, speaking about deploying the solution as a whole, who take the same approach.
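Neil's distinction, a click-trained model needs usage logs before it can predict anything, while an extractive snippet model can be validated on content alone, can be made concrete with a toy extractor. Everything here is illustrative; it is not the smart snippets implementation, just a minimal content-only stand-in:

```python
# Toy illustration: an extractive snippet needs only the documents themselves,
# no query logs or click data. Purely illustrative, not a product algorithm.

def best_snippet(question, documents):
    """Return the sentence across all documents that best matches the question,
    scored by naive word overlap. Returns None if nothing overlaps at all."""
    q_words = set(question.lower().split())
    best, best_score = None, 0
    for doc in documents:
        for sentence in doc.split("."):
            score = len(q_words & set(sentence.lower().split()))
            if score > best_score:
                best, best_score = sentence.strip(), score
    return best
```

Because no behavioral data is involved, an administrator can point this at the knowledge base and inspect candidate answers immediately, which is exactly why Neil says customers can validate answer quality on a test page before any production traffic exists.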
They'll deploy to the contact center, see the relevance, do some tuning, get it to a point where they feel confident, and then take the self-service approach. Because, again, you've built trust with your customers and you want to maintain it, so it's super important to take that approach. Now, that is all the time we have today, so I want to go ahead and start wrapping up. If you have additional questions, we'll be responding to you via email. And again, you will receive the recording within twenty-four hours after the session wraps. Thank you so much for joining us today. Neil, you as well; appreciate your time. Everyone, have a good rest of your day. Thank you. Bye. Thank you. Thanks, Bonnie. Thanks, Tom.

Scaling Customer Service in the Era of ChatGPT

an On-Demand Webinars video
Neil Kostecki
Director, Product Management, Coveo
Bonnie Chase
Senior Manager, Marketing, Coveo