Hello, everyone, and welcome to today's CRM magazine web event, brought to you by Coveo. I'm Bob Fernekees, publisher of CRM magazine, and I'll be the moderator for today's broadcast. Our presentation is titled The Brutal Truth about GPT in the Enterprise, which is a fantastic topic for these times. But before we start, I just want to explain how you can participate in this live broadcast. At the end of the event, we will have a question-and-answer session, so if you have any questions during the presentation, just type them into the question box, hit submit, and we'll get to them at the end. If for some reason we can't get to yours, don't worry, we'll follow up within a couple of days. Plus, if you'd like a copy of the presentation, you can download a PDF from the handouts tab on the console once the event is archived. And just for participating in today's event, you could win a one-hundred-dollar Amazon gift card. So, now to introduce our speakers for today: we've got Neil Kostecki, VP of Product, Service at Coveo. Welcome, Neil. And Kabil Savonathan, marketing manager, Service at Coveo. Welcome, Kabil. So now I'm going to pass the event over to Neil. Take it away, Neil.

Thanks very much for having us. A quick intro: my name is Neil Kostecki. Like you mentioned, I am VP of Product at Coveo for our service line of business, and I've been here for just over six years. We're super excited about the promise and the excitement around generative AI, so we're going to talk you through some of our concepts around this, and some thoughts and concerns, in a bit. Kabil?

Yeah, awesome. Thanks for having me, guys. Like Bob mentioned, my name is Kabil Savonathan. I am a marketing manager here at Coveo focused on the service line of business, and I'm really excited to talk to you all today about the brutal truth about generative AI in the enterprise. What we'll actually do is talk you through five hot takes on what we are seeing in the media today. I know some of you in the audience might have played with ChatGPT or similar programs like Bing or Bard. Some of you, like myself, might have used it on the personal side to take care of mundane tasks, make your life more efficient, or make processes faster. There might also be a couple of you in the audience who have used it at work as well, hopefully without entering too much sensitive data, to make your life easier on the job. There's a lot of excitement around it, and I feel like every day I'm learning something new about it. It's all over social media channels, popping up on main pages and my explore feeds, and there are all these tools specializing in different areas. That's what we wanted to cover today: we want to talk about some of the main ideas being discussed in the media, go over some of the risks with Neil, and look at what writers and others are saying about the impact it actually has on the future of business. For those who aren't really familiar with Coveo, we are an AI-powered relevance platform focused on search, recommendations, and personalization. But I just want to level set here and give everybody a basic understanding of what exactly we're going to be talking about and what ChatGPT is.
I won't go through all the text on the screen, but for those who aren't familiar, ChatGPT is essentially a chatbot that leverages large language models to produce generative text. These large language models, which we'll call LLMs for short, and generative AI are the tools that power ChatGPT, Bing, and all these other AI programs. So am I missing anything there, Neil, before we jump into our first hot take?

No, that's a great summary. The best way to think about it is to break it down into those two simple things. When you're talking about ChatGPT, you're talking about an interface, a chat interface, which has given everybody the ability to interact with a large language model and really understand the power of what's possible with them. It's really that user experience that created this giant wave of excitement and engagement.

Awesome. So let's jump into our first hot take: the era of assisted support is over. There's a lot of fear in the media around the idea of AI replacing contact center and other support workers. On the next slide you'll see a few screen captures of articles that talk specifically about the idea that AI could replace these workers. So, Neil, I know people are probably tired of hearing these pieces go on and on about how it could replace call center agents. I'd love to hear your hot take on this.

Yeah. It seems like an obvious one, right? AI is here to replace us; it's taking over. I was just chatting with my wife yesterday and we were joking about that. It's always this doomsday scenario where the AI takes over and suddenly it's controlling everything. Well, that's not the case here. We see a lot of articles, a lot of writing, a lot of analysis around that, but of course agents are here to stay. In fact, what we think, and what our research is telling us, is that it's really going to augment the agent's role. To be honest, that's what AI has already been doing in self-service: deflecting the smaller, easier cases that repeat for various simple issues. We think generative AI will help solve some of the more complex issues, help engage with a user on your self-service site, and maybe resolve even more issues, but you're still going to need a support center, because there are always going to be new issues. When you're talking about technical support or any kind of customer care, it's an endlessly evolving service you're providing, which means you're going to have new information, which means you always need that work of creating the content the large language model is going to learn from. And that's important to note: large language models are trained on text, so you need to feed the model with information, and accurate information. And that's actually my background: at my previous company I managed KCS, the knowledge management process.
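To make that "chat interface over a large language model" idea concrete, a minimal sketch might look like the Python below: the chat part is simply a loop that keeps the conversation history and hands the whole thing to the model on every turn. The generate_reply function is a stand-in for illustration, not any vendor's real API.

from typing import Dict, List

def generate_reply(history: List[Dict[str, str]]) -> str:
    # Placeholder: a real implementation would send `history` to an LLM
    # endpoint of your choice and return the generated text.
    last_user_turn = history[-1]["content"]
    return f"(model reply to: {last_user_turn!r})"

def chat() -> None:
    # The "chat" experience is just accumulating turns and resending them,
    # so the model sees the whole conversation each time it answers.
    history: List[Dict[str, str]] = [
        {"role": "system", "content": "You are a helpful support assistant."}
    ]
    while True:
        user_input = input("you> ").strip()
        if user_input.lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_input})
        reply = generate_reply(history)
        history.append({"role": "assistant", "content": reply})
        print("bot>", reply)

if __name__ == "__main__":
    chat()

Swapping the stand-in for an actual model call is what turns a loop like this into a ChatGPT-style experience.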
And so capturing knowledge is what your agents are there to do: to capture, in context, while they're helping someone, how the customer describes the issue, how it occurred, what the symptoms are, what they ran into along the way, and what they've tried. It's their job to capture this information. The large language model is then able to leverage that information, combine it across multiple sources, and provide a generated answer. So I think this really just changes the way we think about how agents work. Knowledge creation has always been this aim for perfection: create the perfect knowledge article that has all of the information and is super well structured. One thing this does is allow agents to think about it in a different way. So yes, new issues are always going to occur, and there's always going to need to be documentation. The other thing to think about is that in a SaaS business, the supplier owns the infrastructure, so the customer has no way to resolve those back-end issues themselves; you're always going to need a way to contact the support center. And what's interesting, and we actually presented on this at a previous Salesforce Dreamforce, is that ninety-four percent of service organizations still have a voice channel, and both customers and agents often prefer the voice channel for complex cases. You need to be able to communicate with a human who has the ability to reason, more so than a large language model that is essentially summarizing pre-existing information. The other thing is tiered support and being able to do what we'd call swarming, or intelligent swarming. You call into a call center or submit a case and deal with one agent who maybe doesn't have all the expertise across every product or service, but they're able to engage other experts within the company, swarm, discuss, and find a resolution or information that helps them address the customer's problem. Then they get back to the customer and create knowledge so it can be reused. Basically, the customer has one interaction with one agent who can get all the information they need, which removes the need for escalation and that consult-type experience where you consult with a higher tier and maybe transfer the customer over. So it's really about improving the customer experience by rethinking how agents work together and collaborate.

Yeah, awesome. I definitely agree with everything you said. You touched on a lot of great keywords in your explanation. One of the ideas we use a lot in the service line of business is augmentation, not automation, when it comes to AI: we know AI has the capability to augment a lot of these roles, but it won't necessarily be able to fully, a hundred percent, automate them.
And like you mentioned, LLMs understand text, but an LLM isn't an actual individual with real reasoning who can take a human-to-human approach; it's something that spits out text as an output to the user's problem. Which leads me to my next hot take: will knowledge workers be replaced? It's very similar, just going beyond the contact center to other knowledge workers. I know, Neil, you touched on this a little in your previous explanation, and you come from a background where you've worked with knowledge workers a lot, but I wanted to dig into it a bit more and get your thoughts, because it's still something we see in the media every day: how many U.S. workers will be replaced by ChatGPT, or which knowledge-worker jobs ChatGPT might be able to replace. It's popping up a lot in the media, so we'd love your input.

Yeah, absolutely. I touched on it a bit: you need your agents to be able to capture that information. If you don't have agents, you don't have knowledge creation, and you also don't have people to review and confirm it. When you create a knowledge article, the idea is to create it as soon as you learn the new information, but there's still some validation required. Just because one person noted a particular problem or symptom doesn't mean it should be instantly published to the knowledge base. There's a period of validating and reusing the knowledge, and refreshing it as you learn new information. If you're leveraging large language models to summarize accurate, validated information, that's great. But if you remove your knowledge workers, how are you creating new knowledge? Are you creating it from how customers interact with the chat on your support site? Well, the information they enter might not be validated. It might be biased, it might be inaccurate, they might have incorrectly summarized the problem they're having. You need to learn over time, and that's really important. The other thing to realize is that generative AI can actually make it easier to start a draft. I think that's one of the greatest things we've seen about generative AI: content creation from existing information. You might have a bunch of information in front of you and a daunting task as an agent to take a number of different sources and compile a new, summarized document. Generative AI is there to augment your ability to get started quickly with a first draft, and then you go back, validate, rewrite some pieces, and add additional context that was maybe left out. So it's really an AI assistant that helps you start from a draft, reformat it a certain way, and add detail. I think it's going to become more and more a side-by-side assistant that helps you do your work more efficiently, but it doesn't replace the need for an agent, a real person, to be in the loop.
And of course there's identifying content gaps, being able to see where there's a gap in the knowledge base. You still need the ability to work with the generative AI to see where there are opportunities to improve, to merge, to create more content. You're still going to need that analysis and oversight to work hand in hand, create streamlined content, and look at the journey people take through it. One thing I always saw is that you end up creating a lot of knowledge articles as you solve issues, and then you end up with a large swath of them all around a particular topic that might be a really big call driver. It's important to step back, look at that, and ask: how can we improve the number of documents we have, and the flow of the documents, where you're going to land and what you should read first? That's still important. People are still going to use the source content; I don't think you're going to interact with knowledge solely through a conversational experience. There's going to be a need to actually visit the source and look through documents. So you still need a very strong knowledge management practice.

Yeah, awesome. I completely agree. The main thing you mentioned there was being AI-assisted: it's a great first step, a great research base to help summarize information as much as possible and potentially resolve issues. But someone needs to be there to make sure the content is accurate and up to date and to put it all together, because that's exactly what the end user is looking for: the most up-to-date, accurate piece of information. Which leads me to hot take number three. I've heard this multiple times from friends and from other people: ChatGPT is the end of search, meaning, hey, I don't need to use search anymore, I'm just going to let ChatGPT do all the work for me. It's been mentioned a lot in the media too: does this mean we won't need Google Search or Bing Search or whatever search platform we actually use? Because with ChatGPT, or whatever AI system I'm using, I can just input my question and get my answer right away. So what are your thoughts, Neil, on ChatGPT being the end of search?

For sure. This is probably the most interesting one, and I like the statement; it's obviously quite a hot take. We think that search is the beginning of the journey, and it will continue to be. It is the place people go: when you have a question, what's the first thing you do? You open Chrome and type something into the search box, right? Sure, that interface might become more conversational, but it is really embedded into the way we think about finding information. And, like you mentioned, we're a search, recommendations, and personalization platform.
Putting unified search across all your content, across all of your digital experiences, and adding generative capability to that means you can now have a conversational experience no matter where you are, personalized to you, with access to the content across all the different repositories that might be relevant for your business and for the problem you're trying to solve. So everything we've done around capturing intent, understanding the context of the user, and bringing all the content together in a secure and unified way needs to connect with the search experience. And if you look at what Google is doing and what Bing is doing, the very fact that we're talking about those brands tells you it's really an evolution of the search experience. They've brought generative capabilities into their search experience, and, as we touched on with content creation, they've done something there as well. But it's definitely an evolution of search. Search doesn't disappear. In fact, when you think about what these solutions are doing, they're searching for content; they're automating the search for you. It might look like you're typing into a page and getting a generated answer, but what's happening in the background is a search on your behalf: reaching out, finding the most relevant information, and bringing it back to you in a summarized way. So it's a change in the interface, a change in the way people interact with search. People still want to know where the information is coming from, to be able to go back and, like I mentioned, look at the source content. It's great to get a summarized answer in one flow, but you might also want to go back and see additional context around some of the information provided. Where did it come from? When's the last time you read a Wikipedia article and saw a reference you wanted to dig into? That's why they include that information. If you just wrote it and didn't indicate where it came from, or that there's any name or valid source behind it, well, it's just text. You really need to build trust and security, and you need to understand where the information is coming from. I think that's always going to be at the heart of these kinds of experiences.

Yeah, absolutely. And one other thing that helps the case for search is that search is probably the predominant source when it comes to data.
A lot of the decisions we make in business today are around data, and data is everywhere. When people use the search bar, yes, they're looking for an answer, but it also tells you what the user is looking for. If you're not asking the right prompts in GPT, you might not get the answer you're actually after, but that search box gives you a lot of powerful information and data that helps drive business decisions. If we look back at everything being searched in the search bar and dissect it for the bigger picture, it tells us what our users are searching for most often. Should we be creating an FAQ, a blog post, a white paper, or some other piece of content about what our users are mostly searching for on our website? So it's definitely not the end of search. It tells us more about what our users are going to do, and in turn helps us tailor and create a more personalized experience for them.

I'd say the other thing, too, is that search is extremely low friction. That's why I said when you have a question, you just open Chrome and either type it into the search bar or hit the voice button and speak it. It's that low friction; it's how people go looking for information. Why am I going to navigate around? Why am I going to open a chatbot? Compare an empty search box in front of you, when you know what you're looking for, with starting up a chatbot experience where you typically have to type in some information, it asks you some questions, and there are additional workflows going on. When people are looking for information, it's extremely important to make it easy; it's why Google puts a giant search box in the middle of the page. You go there and that's it. They make it as easy as possible. We do a lot of research around the actual design of search experiences: why you might want only one search box in the experience, how to contrast it against the background so it's very clear it's an open place to ask your question. And generative is a whole different set of questions: how do you take that experience, provide a generated answer along with search results, and then start allowing follow-up questions or suggested questions that lead and give guidance to the user about what else to ask? Because that's the one thing I notice when I go to a ChatGPT-type experience: it's an empty box with no guidance at all. That's why merging in the search experience you're used to, with type-ahead giving you information in the search box to help you formulate your question, is a much better experience than an empty box.
You type something, and you don't really know the right way to ask the question. We've been doing that with the search box through query suggestions, and I think the same applies in generative: you need to lead people down the right path so they know how to ask the right types of questions, to make the right prompts, if you will.

That leads me to our next hot take: can we live with hallucinations? For those wondering what hallucinations are, it's definitely not something that happens to an individual at a music festival. A hallucination, in layman's terms, is when the large language model produces a false answer or false information. It might just be outdated, or it could be flat-out wrong, but the model is very confident in it. You put your prompt in and it spits out an answer, but that answer is incorrect. So, Neil, what's your hot take? We see it a lot in the media, and there are a couple of screen grabs here showing what happens with hallucinations in ChatGPT and other AI platforms. What are your thoughts?

Yeah, it's definitely a concern. When you're talking about these really massive models trained on all sorts of text data, they know language really, really well, so well that they can actually start making things up. We've asked questions like who founded Coveo and gotten back names we've never heard of. There's the information the model is trained on and the information it isn't trained on, and its job is simply to return text, to generate something that looks like a well-written answer, but not necessarily the most accurate, correct, and well-informed one. And I think that's where we see a great opportunity: our customers are actually indexing their own content, their own knowledge, their own documentation, their own website. Having the ability to put guardrails around what information you're generating an answer from is really key. Self-service is extremely important; if you give a bad self-service experience, you might as well not have one. And we have a relevance report on this: if you have a bad experience with a brand, it's very likely you're going to walk away from it, and you'll probably tell other people about that bad experience. So it's really important that you're getting the right information, in a personalized way, and that you're able to summarize that right information. Hallucination really means you need a purpose-built solution that takes this into consideration: the right information and the right relevance, so that you're grounding the answer in the content we know is most relevant to what you're asking, and not just any content. That's incredibly important, and it has a big impact on your customer satisfaction and your customer effort score.
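One common way to put the kind of guardrails Neil describes around an answer, so the model works only from retrieved, trusted content rather than anything it absorbed in training, is to build the prompt so it is explicitly limited to the supplied passages and given an "I don't know" escape hatch. Here is a rough sketch under those assumptions, with invented passages and no claim about how any particular product words its prompts.

def build_grounded_prompt(question: str, passages: list) -> str:
    # Number the retrieved, access-checked passages and instruct the model
    # to answer only from them, which narrows (but does not fully eliminate)
    # the room it has to hallucinate.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the passages below. "
        "If they do not contain the answer, reply exactly: "
        '"I don\'t know based on the available documentation."\n\n'
        f"Passages:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical support passages that a search layer might have retrieved:
passages = [
    "To reset the device, hold the power button for ten seconds.",
    "Firmware 2.4 added support for scheduled reboots.",
]
print(build_grounded_prompt("How do I reset the device?", passages))
# The resulting string is what would be sent to whichever LLM you use.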
It's all connected, and it's extremely important not only for customers but also for agents. If you're hallucinating answers to your agents, who are giving answers to your customers, there's nothing worse than a customer who's been advised to do something completely inaccurate and ill-informed and who walks away with a really bad experience from speaking to your support team. It also matters because accurate answers avoid callbacks and reopened cases, avoid escalations across your different tiers, and generally decrease your cost to serve. You don't want to introduce any kind of negative impact on the service, the experience, or your customer satisfaction. It's fun to get some wrong answers in the context of, like you said, personal use; it's not the end of the world, and we're not talking mission-critical answers there. If you're asking where to take your family on a summer vacation that has a beach, it's okay to be a little loose on the answer. But if you're asking how to reformat a device, or upgrade a key service in your SaaS software, or anything mission-critical that affects an end customer's device or experience, you really can't have any hallucination or inaccuracy, because it can have a massive negative impact on multiple people. So, very, very important.

Yeah, I think you hit the nail on the head: it's making sure the right information is feeding the model. For everybody listening, one thing I'd suggest is, when you have a prompt, try putting the same prompt into two different AIs, say ChatGPT and Bing, and compare the outputs you get from each. I can give you a perfect example: I was invited to give a speech at one of my buddies' weddings, and I'm no math expert, so I needed to work out a statistical probability. I put the exact same prompt into both ChatGPT and Bing AI, asking them to show all their work and explain why they took each step, and they gave me two completely different answers that were way off from each other. That's something you can test on the personal side if there's something you're not quite sure of: see which one is right, which one is more accurate, which one isn't hallucinating as much. But you'll see it all the time. Like Neil mentioned, we asked who the actual founders of Coveo are, and I think three times out of four it was spitting out the wrong answer. So make sure that when you're looking for something mission-critical, it's actually pulling from the right sources of information.

That moves us on to our last hot take: can we build it on our own? I'll pass it to you, Neil. Is this something we could build on our own?
We have a couple of points on the next slide that touch on some of the reasoning behind this.

Yeah. I've heard it, I've seen it, I've seen people trying to build it, and I've heard people talking it through. We've been doing this for years; we've been building a very advanced, unified, AI-powered search platform, and we take privacy very seriously: we're HIPAA compliant, SOC 2, ISO. We're dealing with our customers' data, our customers' content, and their end users' interactions; this is all baked into what we do, and we build it for scale. So the thing worth thinking about, if you decide to build this on your own as a point solution, is: are you going to commit the resources really needed not only to build it the first time, which is still going to be quite significant, but to keep it running, keep it up to date, and ensure content freshness as new content comes out? It's not okay to say the model only knows about what existed three months ago; information changes daily, and there's always new information you might want to add. There's also controlling for hallucinations, like we mentioned, and providing the right access to the right people: internal content, external content, maybe even more granular permissions based on support tiers or departments within the organization. It might seem simple; I've seen a lot of blog posts along the lines of, here's a quick way to put together a generative experience, point it at a sitemap and there you go, you've got generative question answering. But there's much, much more to think about to build that at scale and evolve it over time. Today there are the current large language models and the current capabilities, but literally tomorrow, and day by day, it changes. As an organization building for scale, building for global clients, we're able to serve thousands of models, retrain them daily, and serve at high query-per-second volumes. That's what we're doing, so we're well prepared to build a solution that you can implement and integrate into any content source and any site without treating each one as a point solution. Great, you've built generative search into one page of your experience; what now if you want to put it into your SaaS product? What if you want to put it into your website? All of these become additional pieces you need to build and maintain, and they become forks in the road where you have different experiences managed by different teams, costing the company a lot of time and resources just to keep them up and running. Depending on a platform that's built for scale and integrated into all these systems, with low-code and also code options, means you can take those resources and focus them on things that are critical to your business. So if the question is, can you build it? You can certainly try. But like a lot of things, you very quickly learn that it requires significant investment to do it and to keep it going.
And is that really what you want to focus your most critical resources on?

Yeah, awesome. So those were our five hot takes. Now, when we talk about getting started with generative AI, I think everybody attending or watching this webinar is at least wondering: how can I get started with generative AI? But we need to look at it from an enterprise perspective. There are so many different use cases and programs out there that tackle very niche areas; you can have AI involved in pretty much all aspects of your working life now. Our recommendation is to look at the use cases you need it for and why, and develop that business case. For example, the use case of a support agent in a contact center might be completely different from someone in, say, a commerce or customer experience function. So find out exactly where you're going to apply it, develop the business case for it, and use that as your guiding point in figuring out what sort of generative AI platform you need and how it is going to impact and, like we mentioned, augment your overall business. The second thing I want to touch on is your knowledge strategy. You've heard Neil and me talk about that often, and we mentioned it in the hallucination point: the source of information can be inaccurate or just out of date, like when I pulled two different answers from two different chat experiences. Or ask, for example, who won the Home Run Derby: it might say Juan Soto won it, because he did win it previously, and the information hasn't been updated to say that Vladimir Guerrero Jr. won it, shout out to my Blue Jays. The information just isn't up to date, and that's why a knowledge strategy is so important. Again, your workers aren't being replaced, but their roles will be augmented, and that can reduce the time it takes to support the end user; having the right knowledge strategy in place will definitely help. The next point is to actually listen to your customers. Are your customers ready for this? Is this something they'll be excited about, that they'll adopt and start using right away in your platform? And the second part of that is: do your customers trust the answer? Who is your target audience? Are they people who will keep using this experience and keep trusting the output it produces? Sometimes your target audience might not be ready for this, but maybe your internal team is. So figure out who your ideal audience is and whether they will actually trust the output these systems produce. And the last thing is investing in search, the tool that goes hand in hand with all of this, because the data and the results from search, which we touched on earlier, are ultimately going to help you create the knowledge you need to be successful. It starts with investing in your search and then layering on large language models to augment the overall experience. Anything to add there, Neil? I know I touched on a lot of those points. Yeah.
One thing I realize I didn't speak to much: by investing in search, one other thing you get from combining these experiences is insight into what people are actually looking for and maybe not finding, or what they keep asking questions about. Having analytics on top of that experience to understand what questions people have lets you create better content for them. That's definitely key, and it's another reason why, if you create a secondary point solution, you now have two different places where the information isn't unified, which makes it harder to understand how to provide answers and plug the holes in your self-service knowledge base.

Yeah, awesome. I'll kick it back over to you, Neil. There's a bunch of text on this slide, but what should people consider with these LLMs, from not reinventing the wheel, to performance, scientific evaluation, product integration, and ease of use? There are a ton of questions here, but what would you call out as the top things to consider?

Well, leveraging pre-trained models, there's obviously value there; using best of breed to do the actual generation is key. But as mentioned, there's also fine-tuning for specific use cases and domain adaptation, and there are advantages to that. The approach we're taking here, which we call Relevance Generative Answering, is this kind of retrieval-augmented generation, where we take all the benefit of how we've ranked content, send that out, and return an answer summarized from that relevant and accurate content. Like I said, we have a huge R&D team, a massive amount of R&D resources in natural language processing, who are very familiar with all the latest and greatest, constantly doing not only research but also publishing; we have research papers out from our team. So we're extremely experienced in building this for scale. We're aware of the latest technology, always looking at what's coming out day to day and evaluating it, making sure that what's said to be the greatest actually holds up against our particular use cases. One thing that's really important, as mentioned, is to listen to your customers, and I think it's a combination of listening to your customers and knowing the problem you're trying to solve for. Understand the constraints around you: how much content you have, where people are actually asking the question, and whether the experience is designed in a way that will work well for that type of use. Instead of taking a technology and slapping it on everything, really think about how to leverage it in the best way to solve a specific problem. There's lots to consider, and of course we're always looking at different avenues for doing this the right way for our customers.

Yeah, awesome. Just moving toward the last section here before we open up to the question-and-answer period with the time we have left.
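Neil's point about analytics on top of the search experience, spotting what people keep asking and not finding, can be illustrated with a small sketch: given a log of queries and whether each one returned results and earned a click, the frequent misses are candidate content gaps. The log format and threshold here are invented for illustration only.

from collections import Counter

# Hypothetical search-analytics log: (query, had_results, was_clicked)
query_log = [
    ("reset api key", True, True),
    ("rotate api key", True, False),
    ("rotate api key", False, False),
    ("sso error 403", False, False),
    ("sso error 403", False, False),
    ("sso error 403", True, False),
]

def content_gaps(log, min_count=2):
    # A query is a "gap" signal when it returns nothing or nothing gets
    # clicked; frequent gaps suggest an article worth writing or merging.
    misses = Counter(
        query for query, had_results, was_clicked in log
        if not had_results or not was_clicked
    )
    return [(q, n) for q, n in misses.most_common() if n >= min_count]

print(content_gaps(query_log))
# e.g. [('sso error 403', 3), ('rotate api key', 2)]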
So, pretty much, this is our integrated search and question answering. Neil, do you want to walk us through this slide and how it actually works?

Yep. So we connect to any of your content, no matter where it exists, using our secure connectivity, and bring it into a unified index. That's what we already do today. We bake all the permissions and security into the index, which means that when you perform a search, I see what I have access to and Kabil sees what he has access to, and that gives us very relevant and accurate search to start with. Adding question answering on top of this, we extract the relevant pieces of information from those documents and store them as vectors inside the index, so we can capture the relationship between the query being performed, the intent, the keywords being entered, and the actual content in the index. Then, when we generate the answer, we take the most relevant snippets of information from the content and send them to the large language model along with the prompt, which is the question. We're basically saying: here's the question a user is asking, and here's the most relevant and accurate information from various documents and phrases; what we'd like the LLM to do is summarize this information concisely for the user. We then return that directly into the search experience. Like I said, you get all the benefit of the analytics, the easy administration and implementation, and the integrations into all of those systems and front ends, so you can integrate it into platforms like Salesforce, ServiceNow, Zendesk, or other web applications. That's the advantage of baking it into the system instead of creating something entirely separate.

So that wraps up our presentation. I'll now pass it back to Bob to help us moderate the question-and-answer session.

Great job, guys. Really interesting, and I love the way you worked right through the typical questions people are thinking about, which is why I'm going to start with Eddie's question, because we're right at the beginning of this huge hype cycle and people want to separate the truth from the hype. Eddie's question is: do you feel that ChatGPT will create more disparity among organizations, making some highly efficient and others non-performers? It seems to me that we're going through a major transformation right now with generative AI, and it could be on the order of the move to the cloud in the early two thousands, or smartphones and mobile after the iPhone launched. Where are we at right now? Is this really going to be a separator among organizations and enterprises?

Well, Louis Têtu, our CEO, likes to say AI or die.

Okay. You keep it really blunt.

You have to adopt AI. It is the only way forward; it is the only way you can actually scale and provide relevance and personalization at the level you need to. And I completely agree: it's going to create disparity between organizations. Anyone who thinks they don't need to consider this, that they don't need to adapt and listen to their users, is going to fall behind.
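The indexing-and-answering flow Neil walks through above, a permission-aware unified index, retrieval of the most relevant snippets, and a prompt that asks the model to summarize only those snippets, can be sketched roughly as follows. The toy embedding, sample documents, and permission model here are simplified placeholders for illustration, not Coveo's actual implementation.

import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a learned model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical unified index: each snippet carries a vector and permissions.
index = [
    {"text": "Agents can merge duplicate cases from the case list view.",
     "groups": {"agents"}},
    {"text": "Customers can reset their password from the sign-in page.",
     "groups": {"agents", "customers"}},
    {"text": "Internal runbook: restart the indexing pipeline via the admin console.",
     "groups": {"admins"}},
]
for item in index:
    item["vector"] = embed(item["text"])

def retrieve(question, user_groups, k=2):
    # Permission filtering happens before ranking, so a user never has
    # content they are not allowed to see summarized back to them.
    allowed = [d for d in index if d["groups"] & user_groups]
    ranked = sorted(allowed,
                    key=lambda d: cosine(embed(question), d["vector"]),
                    reverse=True)
    return [d["text"] for d in ranked[:k]]

def answer_prompt(question, user_groups):
    # The prompt that would go to the LLM: question plus only the snippets
    # this user is permitted to see.
    snippets = retrieve(question, user_groups)
    context = "\n".join(f"- {s}" for s in snippets)
    return (f"Summarize an answer to the question using only these snippets:\n"
            f"{context}\nQuestion: {question}")

print(answer_prompt("How do I reset my password?", {"customers"}))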
When my mom is talking to me about ChatGPT, you know it's hit the masses. So it's definitely something that's going to create disparity, and you have to be thinking about AI; you need to understand what generative AI is. People have been asking me, how do I convince my executives that we need a strategy around this? And I say, you need to convince not only them but everybody in your organization. Everybody should be thinking about it and trying to understand it better, because this is fundamentally changing a lot of things; it's changing the way technology really operates. So yeah, completely agree. AI or die.

Okay. I can tell you that we're in the process of completing our own annual reader survey, which we do every year, and it seems like generative AI is at about the point on the timeline where a similar question we asked about digital transformation sat five or six years ago: twenty-five percent of respondents so far have said their organization doesn't currently have a coherent strategy. So there's a lot to learn between now and the future. Here's a fantastic question from Dan. I did not realize the hallucination problem was as compelling as it is; it seems that AI can tell a pretty good story without delivering the proper answer. Dan's question is: when it comes to whether an answer is accurate or not, who do people trust more, humans or the machine? And I'm going to add to that a little and ask: how are you going to convince people to be confident in the answers they're receiving? What are the core tenets of trust you have to build with customers so they're confident in the output?

For sure. I'd take it in two pieces. One is building trust with our customers, the customers using our technology to serve their customers. To do that, it's really about showing them all of the security features we're building into the system: the ability to have security permissions in the index, and ensuring that when we send a prompt, the prompt isn't retained, zero retention. There are a number of ways we build security into the system so they feel comfortable with the security and privacy built into the actual solution. Then there's the end user who's actually engaging with an answer, like you said. How do we build trust with them? I think it's the ability to see where the answers are coming from. You essentially have to trust the information; I don't even know if it's about trusting the human or trusting the machine, it's trusting the information the human or the machine is basing the answer on. So when we return an answer, we show the actual sources the information came from. That's how you build trust: giving people the ability to go back, see the additional information, and correlate it together. But in all reality, when you ask a question of ChatGPT, you get a pretty convincing answer, and for sure there are a lot of people who just run with it and are okay with that.
And again, if it's mission-critical, the question and the context around what you're doing also influence how much trust you need to have in it. This is complicated; it's a complex answer. It's a matter of listening to our customers, understanding their concerns, and finding creative ways to solve for those, so they feel we've built something that meets their needs and that they can place the right amount of trust in. At some point, you do have to trust, right? I got a question at one of our last conferences: if you're leveraging one of these larger models, then you have to trust that they're doing the right thing with your data. Well, at some point you have to trust someone along the way.

Right. Well, we have some more questions about large language models, which hopefully we'll get to in the next five minutes. But here's another great question from Eddie, and note we may be using ChatGPT and generative AI interchangeably here. His question is: will ChatGPT force or compel some search engines to improve or streamline their search capabilities?

Interesting question. Yeah, I think we're seeing it; it's happening on a daily basis, and it's not only ChatGPT. I've seen others as well; You.com is another example, providing a conversational experience for the open, public web. And, case in point, our customers' expectations have changed, and their customers' expectations have changed. We've been on this journey for the last two or three years; we have a feature called Smart Snippets that does question answering, not generative answering, but using a large language model to extract a paragraph or section out of an article or piece of content. So yes, it's definitely a forcing function, and the fact that they made it so easy to sign up and try is what's got it on everybody's mind and changing expectations.

Great. Okay, I'm going to ask this next question, but I'm going to try to squeeze in two more, so just bear that in mind over the next four minutes. The next question is: what do you think the biggest benefits to support agents will be? I know you covered this pretty well in your presentation, but if you could concisely summarize, that'd be great.

Yeah. I think it's allowing them to concentrate on more complex work and maybe focus more on the customer. It takes a lot of the load off of having to think, geez, I need to summarize everything going on in this case and create a knowledge article about it, so they can focus on having empathy for the customer and listening. And finding the right articles isn't the chore it used to be; they can ask a question and actually get something that summarizes a bunch of sources, which means they don't have to put the customer on hold to do a lot of digging. So I think it will take a lot of pressure off.
I think they're going to feel like they have a sidekick, an assistant, that can take some of the load, and if anything it will lead to better support experiences and better agent satisfaction overall.

Fantastic. Okay, so, last question, and Sean, I'm going to fold your question in as well. How is Coveo using concepts like large language models today, and what is your team building right now? And Sean asks, how do you filter out the bad information from the large language model? If you could wrap that up, that'd be great.

For sure. So we are building, like I said, Relevance Generative Answering. You saw the flow we showed a little earlier; we are building generative capability directly into the search experience. You will see it in a self-service search page, where you ask a question in the search box, and if we have a good amount of content to answer it, we'll generate an answer directly there, on top of all the security. We're working with a number of customers in our beta program right now, customers that are in the midst of implementation; we're getting feedback from them and working with them as design partners to understand how they'd like this to evolve over time. We're very excited, and we'll actually be going live with this on our own documentation site very soon. As for how we filter out the bad information: again, it's that unified index. Our customers are indexing only their own content, so when we send the prompt out to the large language model, we're really saying, ignore everything you know about the outside web; all I want you to do is take this relevant information and this question and summarize it in that context. There's a combination of that and parameters in the models that let you tune how much nuance or flexibility the model is given, and we're able to fine-tune that for our customers. That's basically how we filter out the bad information.

Fantastic. I know you covered that, but it certainly helped me hearing it a second time, summarized as well.

Absolutely. There was a lot going on today, and we had a huge group; thanks, everyone, for coming. We're actually at the top of the hour, so we do have to wrap things up; that's all the time we have for questions. There may have been some other questions we didn't get to, but don't worry, we'll follow up via email. I'd like to thank everyone who joined us today, everybody who asked questions, and especially our speakers and sponsors: Neil Kostecki, again, VP of Product, Service at Coveo, and Kabil Savonathan, hopefully I said that correctly, Kabil, marketing manager, Service at Coveo as well. If you'd like a copy of the presentation, you can download it once the event is archived. And if you'd like to review the event or send it to a colleague, you can use the same web address you used for today's event; it will be archived for ninety days, and we'll follow up with an email once the archive is posted. And just for participating in today's event, you could win a one-hundred-dollar Amazon gift card. The winner will be announced on July thirty-first, and we'll reach out via email if you're selected as this month's winner.
So that concludes our broadcast for today. Thanks everybody for joining us.

5 Hot Takes - The Brutal Truth about GPT in Enterprise

With so much buzz in the market around ChatGPT and generative AI technology, service leaders are excited to jump in quickly. But where do you start, and how do you go from hype to reality? In this session, Coveo breaks down 5 hot takes on the topic so that you can avoid common pitfalls that may hinder enterprise adoption of Generative AI.

What you will learn:

  • We will take a deep dive into Generative AI concerning customer service automation, security, and jobs.
  • Search is an amazing tool, but we will take a deep dive into automation vs augmentation.
  • Are you capable of building Generative AI Tools in house, or should you leverage existing tools already in the market?