Here we go. Thanks, everyone. Okay. We're getting ready. So I'll provide some context for sure. So we are here today to talk about Gen AI and its usage across your websites. Right? We're talking to both Coveo customers and also some prospects. So we want everyone to be able to kinda just share where they're at with Gen AI, what they're thinking, or just AI in general. A little bit of context. So we know that Gen AI is two years old now. Right? We know that companies all over the world are really racing to understand how and where it can be applied not only as it, sorry, one second. Companies all over the world are really racing to understand how and where it can be applied as well as the impact that it has on your business and also on the digital customer experience. Right? And that's because Gen AI is truly changing the way that customers are interacting with your site. It's raising the bar for personalized, contextual, and seamless interactions across sites. It's even changing the way that they're interacting with your sites. We're seeing that they're using more natural language, and we expect to have that in return as well. So there's truly so much potential with Gen AI and its ability to transform the digital customer experience, but you need to be able to harness it in the right way. And that's why we really wanna talk a little bit about that today. So before we get into the conversation, I'll show you a really quick example of how one of our customers is using generative answering, just to kinda ground the conversation so we all know what it is that we're talking about, and then I'll open up the floor. Thank you so much everyone for joining. Really appreciate that. And please be ready to answer some questions and just share your thoughts on this. So let's show a quick demo, shall we? Okay. Perfect. So this is one of our customers, United. You may have seen this website before. You may have even tried to interact with the chat. 
There's actually generative answering happening on the back end here. So one example here is, can I fly with my pet? You'll see that it's phrased in more natural language. It's talking more how a human would speak. And then if you go to the next slide, the answer that's going to be generated is also going to sound like a human. Right? And it's also going to be based solely on their content, which is really important, because you can get some pretty crazy answers if you're gonna be, like, scraping everything that's on the Internet. So it's definitely grounded in their own content. You can also summarize the most relevant sections of different content across sources, which is really great. So it's really gonna scale your own content output. You also see, on the slide right before this actually, that there are also links to where the content came from. So if they wanna go and they wanna learn more, that's right there as well. You can also decide the format that you wanna see it in best. So maybe it's a summary. Maybe you want it to be in a listicle. So then if we go on to the next slide, we also have, you know, the search results that are still there. So if it's something where you actually want a verbatim answer, you don't want generated answers to come up, you have this choice as well. And then on the next slide, we also have search results. So you're still gonna see the most relevant results come up. So here, it's showing international travel requirements or things about pets because they're asking if they can fly with their pets. And there are also going to be recommendations, so you can show, you know, other questions that people have asked that they think would be useful in this section. But with generative answering, there are also times that you don't want to generate a response. Right? So we have this kind of silly example of, can I fly with my kids in a checked bag? 
So, hopefully, none of you have put this into a chat before. They probably have content on flying with kids, probably have content on checking bags. But in this case, you do not want Gen AI to get creative and put those things together. So in this case, they are not going to generate an answer, because they have control over that. Right? They didn't find an answer that matched it specifically, so they're able to control that in that way and not try to, you know, come up with something. Also, if it was scraping the Internet, maybe there is an answer that could've come up there that would be a little bit wild. Paul, did you wanna add something there? Yeah. Happy to do so. So, as you were saying, Carrie Anne, the key here is control over both the content that's being used to generate an answer and also the sort of thresholds of confidence for which we'll actually generate an answer. So the goal here: United's got a relatively small amount of information, a few hundred customer service pages that are well crafted, that are approved, and so on, and they're using that as the grounding for generating an answer no matter, up to a point, how the person phrases their question. The goal here is to be able to answer a question accurately, and not answer a question if it's not appropriate, if there's not a certain level of confidence in that answer. That's exactly what you're seeing here. You are still seeing results, you know, down here around baggage and flying with children and so on. But, you know, because there's not a level of confidence in the semantic embeddings from within these documents related to this particular question, there's not an answer being generated. So that's certainly key for somebody like United who, you know, has to provide accurate information. Yes. Thanks, Paul. So that's the United example. There's also another one, if we go to the next slide, with Dell. 
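The confidence gating Paul describes, only generating an answer when retrieval is confident enough, can be sketched roughly like this. The function name, the score scale, and the 0.75 threshold are illustrative assumptions, not Coveo's actual implementation:

```python
# Rough sketch of confidence-gated answer generation, as described above.
# Everything here (function name, score scale, threshold) is a hypothetical
# illustration of the idea, not Coveo's real API.

def should_generate_answer(passage_scores, threshold=0.75):
    """Generate an answer only if the best-matching retrieved passage
    clears the semantic-similarity threshold; otherwise show plain
    search results with no generated answer."""
    return bool(passage_scores) and max(passage_scores) >= threshold

# "Can I fly with my pet?" retrieves strong matches: answer generated.
print(should_generate_answer([0.91, 0.84, 0.62]))  # True
# "Can I fly with my kids in a checked bag?" retrieves only weak,
# unrelated matches: no answer, just regular results.
print(should_generate_answer([0.41, 0.38]))        # False
```

The point of a threshold like this is exactly what Paul notes: the system can decline to answer while still returning the relevant results below.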
And this is a similar example, but here we have a more complex use case. Right? So they have so many different products, so much documentation, for a variety of things. So you can see a few other things are added here. We have filtering and facets to, you know, narrow down the scope of what it is that you're talking about. We have recommendations that are built in. We're still seeing this generated answer that's coming in, you know, natural language. You have it here as kind of a listicle. It's able to answer these more complex types of questions. This one, you know, is a little bit more in terms of self-service, in the hopes of case deflection, so that they can find all the answers that they need right here without having to submit a case. But it could really be applied across websites even if it doesn't relate back to service, in that case. Yeah. Certainly. You know, an informational kind of question, it's not necessarily, you know, explicitly self-service. You may have a question about a particular Dell product, but you wanna ask that question in a natural kind of way, and you'd like to get a natural kind of answer, clearly, again, grounded on officially approved information. You might notice if you look closely at the screen here, there's a lot of different kinds of content that Dell needs to have searchable on their site, including, for example, discussion forums. But that's not necessarily an appropriate source for a generated answer. My feeling, and it'd be interesting to hear your feelings as well, is that content needs to be searchable, but, you know, the way a generated answer looks and appears to us is something that we tend to trust more, for better or worse. So it's important that those generated answers be accurate, be approved, be interesting as well, and useful. I guess useful is another important word in there. 
So, again, you know, no matter what kind of technology you're using on the back end, and obviously, Dell and United are both using Coveo's technology, but no matter what kind of generative technology you're using, you need to make sure that the content that you're sending to it is trustworthy, accurate, interesting, useful. Yeah. Exactly. I think Gen AI has really showcased the importance of the quality of your content. Right? So you might have heard kind of garbage in, garbage out. It kind of has everyone rethinking their content. Also, you know, its ability to be read by AI, how digestible it is. Do you have content that's specifically answering the questions that you think are gonna be coming up? There are all these components that you have to think through when you're implementing Gen AI, which are some things I'd like to get into. So that is all for, you know, kind of a short demo, just to ground everyone on what it is that we're talking about. So let's get into the discussion. I would love to hear from everyone, if we can do a little bit of a roundtable here, of just, you know, introducing yourself, your role at your organization, and your area of ownership. So which part of your website do you own? How are you thinking about it? If you want to, share also some initiatives that you have with your website coming into the next year. We have already done the Coveo introduction, so I am going to go by who I see on my screen. Andrew, you were the brave first person to come on to the screen. Would you care to introduce yourself? I guess that's me. I don't know if you guys can see it or not. Maybe I'm the phantom voice right now. So at my organization, I handle the nuts and bolts implementation of AI. At least as far as web search, we are very much in exploration phase right now. We did the same thing that we're doing right now a year ago, and we opted to purchase some AI solutions for internal agents. 
We were very hesitant to make something customer facing. Right? It was all very new. Hallucinations were just as buzzwordy as AI itself, or Gen AI. So, you know, a year ago, we opted to do some internal purchases to solve those use cases. But we're here again today, and we're looking to make it customer facing. We've got some comfort with how it works. Hallucinations are less of a concern for us. You know, we're familiar with it internally. So yeah. I mean, we don't have anything right now. We currently have customers, at least for traditional, I guess you would call it, like, machine learning search, or however you're gonna classify it. But we're just exploring. We're seeing what's out there and what the capabilities are and how much the landscape has changed in the past year. That's great. That's definitely not an uncommon approach. Right? We see a lot of people testing it internally first. It just kinda feels a little bit safer. You can get comfortable with it, as you mentioned. So thank you so much for sharing. I'm sure there's a lot of people who are in a very similar position to that, and we can dive a little bit more after. I see Chelsea is here to listen primarily. Cynthia, would you be open to introducing yourself and a little bit about your role? No. I think Cynthia has dropped off. How about Fred? I don't believe they're here. How about, oh, there we go. Sorry. I didn't realize I still had her on mute. No worries. We're a consultative group, and a large percentage of our customers are asking us to help them with AI. And we find it a significant challenge to determine which AI, quote, tools or organizations have valid solutions for end-use applications. And as probably everybody stated, it's more of an internal tool in our experience so far rather than customer facing or external-organization facing. 
One of our customers is a major aerospace and defense client, and they're trying to determine if it can help with their supply chain. And if it's a supply chain, it's relatively easier to implement something like this. So that's our focus right now, supply chain and how we can help various customers use AI. In my perspective, it's all ROI. If we get an ROI out of it, great. If we don't, why bother? Indeed. It's not just the latest shiny object. Right. Yeah. Okay. So it sounds like your customers are looking at AI writ large, if you will, not necessarily generative solutions per se, but AI, you know, in the broad sense. Is that accurate to say? Yeah. We've interviewed, I think, about ten of the supply chain targets. And by and large, most of them are looking at very simple automation. Mhmm. Very simple. And, you know, you really don't need AI for that, but I think it's sort of like a safe stepping point, and then we'll enlarge the capability. A real simple example is how many x parts are going to which plant for this customer. And they've always known that, but somebody would have to take a phone call. Mhmm. K. And so now we're teaching the supply chain provider how to do that with AI. And so far, mixed results, you know. Mhmm. So we have to go back to the prime and say, look, your people are still calling on this, and we've agreed on AI. So we need an internal training program, Mhmm, to tell them to use the AI solution. There's a lot of trust issues. Sure. Right? But I think, ultimately, probably in twenty twenty five, we should see a stronger adoption, Mhmm, simply from the spreadsheet aspect, in other words, reducing cost. Yeah. Okay. So that's what we anticipate. Adoption and measurement of any new technology is always a challenge. Yeah. It is. K. Thank you. Yeah. Thank you so much for sharing, Fred. We really appreciate that. 
Gretchen, would you want to introduce yourself and share a little bit about your role? Or, you know, what you're looking to get out of this as well; maybe it can kinda spark some discussions. Can you hear me? We can. Okay. So I work at a membership association. So it's a nonprofit. It's a professional society. My role is knowledge management, and I kind of roll up into the larger web applications group. So my colleagues are, you know, developers, front end and back end, and things like that. We are longtime Coveo customers, and our instance of Coveo is Coveo for Sitecore, which I think is now Coveo for Websites. And so I sort of function as, like, the business owner of the Coveo platform and sort of the search experience across our website. And I don't know, I think we've been with Coveo since twenty sixteen or something. Sorry. My dogs, of course, are choosing not to participate. One is real interested in smelling a corner of a room. I don't really know what's going on there, but I'm a little worried. Oh my gosh. If you need to go deal with that, we're just... No. Like, is there... I don't know. So, yeah, of course, I have lots to say. So in terms of AI, we are, I guess, sort of working on, like, I don't know, early proofs of concept, kind of getting our toes in the water. I guess the vision would be having, like, you know, a Gen AI sort of smart chatbot for website users, but focused less on, like, an agent or self-help kind of thing, and more on sort of answering questions related to our content. Our content is pretty extensive. It goes back for decades and decades and decades as part of a membership association. You know, we've published our own books, and we've published our own journals and magazines and conference proceedings over the years. So we have decades and decades and decades of content. So a lot of it is on the website. It sort of exists in kinda different formats. 
You know, a lot of it is text based. Some of it is video content. But it's all kinda there. So... but the quality of the data kind of varies wildly. Mhmm. The older stuff, the data isn't that great. The newer stuff is in, you know, much better form. But we've had, you know, dozens of different staff people over decades who have been the ones sort of adding the metadata behind the scenes and publishing to the website. So there's been a lot of inconsistency. So, you know, looking for... I guess if anybody on the call is maybe, like, a Coveo user already, and specifically a Coveo for Websites user, I'd be interested to hear what their experience is and if they've kind of released some AI tools in their environments and how that's gone, as we kind of noodle through some of that. Yeah. That's great. Thank you for sharing. You mentioned Gen AI not really for, like, the self-service part. It gets talked about a lot in the self-service context just because there's such a clear ROI, right, of, like, case deflection and being able to tie it back in that sense. But there's still absolutely a case just for websites and for content findability and discoverability. I always kinda say, like, with self-service, there's kinda two things you're trying to do. You're trying to help customers or users self-serve and find info easily, and then there's kind of agent deflection. And that self-service part is definitely a website use case, and generative answering can be really great there. Again, like I said, one of the cool parts is being able to summarize different pieces, like, take different parts of content across sources and put it into another answer. And that's just kind of a really great way to maximize the output of the content that you have, because you can't possibly think of content for every single search that could ever be created. Mhmm. 
So, yeah, it's definitely a good use case, especially when you have so much content. And then, obviously, it kinda goes back to, you know, trying to put some parameters around what you feed generative answering. Because to your point, you have a lot of content. It goes back a long time. Probably a good opportunity to kind of look back and do some, like, data cleansing types of things, which is something I think we're continuously looking at: how to help users and customers keep their data and their content clean and relevant. Yeah. So, like, one thing we have done is, earlier this year, we kind of did a sweep and evaluated or audited a lot of the content against criteria we had established. You know, and there was a variety of different areas that we were kind of doing evaluations on. And we did kind of put some stuff into an archive state, so it exists, but it's just not out for public consumption anymore. And then, certainly, I think we've started, and we need to continue to have, more discussions about, you know, what is fed to, you know, whatever kind of AI tool we end up moving forward with, to ensure that what it's getting is the best, most relevant stuff. You know, some of our content is very evergreen, where, you know, like, statistics might not change much over time. But if you're talking about best practices in an industry, related to, you know, like, someone else was speaking about supply chain and things like that, you know, that might change over time. Right? So are we giving the best, most current practices, things like that? So I think for us, maybe one of our challenges is ensuring that the content used to kind of generate answers, generate information, is the best that we can offer up in those, Mhmm, in those particular instances. 
But we've also been engaging throughout the course of this year, because, like, a lot of it, you know, like, cleaning up your data and improving data quality and dealing with metadata, is not the most exciting or glamorous work. Right? Like, no one gets really excited, like, oh, I'm so thrilled that this metadata project was completed. But it is also the foundation of so much. It's like, I don't know, you can't just go remodel your kitchen without making sure that your basement has a strong foundation. Right? And, like, that's sort of, you know... Because everyone always kind of wants, like, oh, well, let's, you know, whatever, install a new sink and marble countertop or whatever. But it's like, well, no. You have to, like, get all the garbage out of your basement and make sure it's not leaky first. So that's kind of where we've been: in, like, this ongoing data cleanup process, which, like I said, is not super exciting or glamorous or, you know, doesn't really get people's hearts racing necessarily. But if that is in a good state, right, it enables you to do a lot more that is kind of more exciting and more impactful for customers later on down the road. For sure. I think that's so important, getting your foundations right, on anything, but especially here. The content's definitely the starting point. We actually say that about search, kind of. Right? Like, because we're going into this conversation of Gen AI and all these exciting things, and if your foundation, just like classic search, is not where it needs to be, everything else is gonna be built off of that initial experience. And if customers can't even find what it is that they're looking for, which is the first thing they're coming to you for, you're not gonna be able to move past it. So I'm sure that's something that everyone else here is looking into as well. I see there's some conversations happening in the chat. I do see one from Matt. I don't mean to single you out. 
Would you care to introduce yourself, and then maybe we can get into your question that I see Paul's responding to? Or in the meantime, Paul, would you want to kinda just dive into this question here that we have in the chat about the AI model? Yeah. I see, Matt, you're on a PC with no mic or camera. Sorry to hear it. But, yeah, you're asking an interesting question about, essentially, you know, what we've often talked about as bring your own large language model. And we are in early release mode right now with a new feature that I called out in the chat, the passage retrieval API, where, effectively, if you think of one approach to generative AI as being RAG, or retrieval-augmented generation, this approach of the passage retrieval API effectively uses Coveo as the R in RAG. So we're retrieving from your index. Well, we're also creating in your index vectors representing key passages within documents, which allows us to do a combination of lexical and semantic search and return not just a list of results, but the key vectors, embeddings, snippets, whatever you'd like to call them, from those documents. We can then pass those off via this API to a large language model of your choice or, frankly, anything else that you would like. You would then be responsible for managing, well, managing the large language model, of course, managing any compliance aspects of it, managing the way in which it's prompted. And, by the way, this is very much supposed to be a roundtable. I don't want to go too much into selling Coveo. I just wanna explain a little bit about this kind of approach. The idea here being that, you know, we would return to you those most relevant snippets. You could do with them what you wish. Pass them to a large language model along with a prompt controlling how to use that and what the tone of a response would be. That would be stuff that, in the Coveo RGA model, we take care of. 
So in this approach, you would be responsible for taking care of that and then returning the answer generated by your LLM. Happy to reconnect another time to talk further about this, Matt. There are some interesting demonstrations. We're in, as I said, early access mode with a number of customers on this too. And it's, to my understanding, proving pretty effective as well. But it's always an interesting decision. You know? Do I want to use a very particularly trained large language model to generate answers, you know, if it's been trained on very specialized information, or am I just trying to answer more general questions? That's an interesting aspect of the whole conversation. You can also apply a system prompt. And, yes, you would be responsible for applying that system prompt. Hope that helps. Yes. So sorry. For those who didn't see in the chat, this was the question this is a response to: with Coveo's AI, can the underlying LLM be changed, or is it static? So, for example, can you swap between ChatGPT and other models? Yeah. Very helpful. Yay. Okay. Great. We got a thumbs up, Mhmm, from Matt. Kathleen, would you be able to introduce yourself, and a little bit about your role and responsibilities, how you're looking at Gen AI? I realize I think there's, like, a little bit of time it takes to also come off mute, and all of these things. Or the next person I have on my list is Nathan. And I believe there's only one Nathan. Mhmm. Okay. No worries. So I think this is everyone. We're all introduced. Oh, Jen, were you gonna mention something? I was just going to go back to another question that I think Andrew had in the chat, that I thought maybe we could talk about. He was asking, just curious if we or anyone had done any surveys on customers' preferences for wanting a traditional list of links and search results, or the newer Gen AI type of answers. 
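The bring-your-own-LLM flow Paul sketches, retrieve grounded snippets plus their sources, then assemble your own prompt (including your system prompt) for whatever model you manage, might look conceptually like this. All names, the toy keyword-overlap retrieval, and the prompt shape are assumptions for illustration; the real passage retrieval API returns lexically and semantically ranked passages:

```python
# Conceptual sketch of the retrieval-augmented flow described above.
# The retrieval step is faked here with naive keyword overlap; a real
# system would combine lexical and vector (semantic) search.

def retrieve_passages(query, index):
    """Return (snippet, source_url) pairs ranked by word overlap with
    the query; stands in for the passage retrieval step (the R in RAG)."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc["text"].lower().split())), doc) for doc in index]
    scored.sort(key=lambda pair: -pair[0])
    return [(doc["text"], doc["url"]) for score, doc in scored if score > 0]

def build_prompt(system_prompt, query, passages):
    """Assemble the grounded prompt you'd send to the LLM you manage;
    prompting, tone, and compliance are your responsibility in this model."""
    context = "\n".join(f"- {text} (source: {url})" for text, url in passages)
    return f"{system_prompt}\n\nContext:\n{context}\n\nQuestion: {query}"

# Tiny invented index standing in for your grounded content.
index = [
    {"text": "you can fly with your pet in the cabin for a fee", "url": "https://example.com/pets"},
    {"text": "checked bag allowances vary by fare class", "url": "https://example.com/bags"},
]
passages = retrieve_passages("can i fly with my pet", index)
print(build_prompt("Answer only from the context below.", "can i fly with my pet", passages))
```

Note that only the relevant pet snippet, with its citation, makes it into the prompt; the bag document scores zero overlap and is excluded, which mirrors the idea of grounding the answer on the most relevant approved content.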
So I don't know that we have done any surveys, Carrie, unless you know of others that we have. I know I can speak just from my experience, and with many customers at Coveo. I have a large book of customers, and what we're wanting in our day-to-day life, we're wanting also when we're in business. And so I know myself, when I go to Google, it's not because I'm lazy. It's not because I don't wanna read, but I'm very busy, and I'm in a rush. And if I don't get a generated answer on Google, sometimes I'll rephrase it so that I do get a generated answer. And it doesn't mean that I still don't, in my own personal life, go to those links below, but I sometimes wanna see that generated answer, and then I wanna dig deeper into a link. And so I know I've had that conversation with many of my customers that are also having that debate, especially when it comes to maybe their marketing websites and the need for it. And, you know, what we always kind of talk about is how much we expect in our day-to-day lives. And I spend more time at work than I do in my day-to-day life, so what I expect in my day-to-day, I expect in my work. So that's just my perspective, and what I talk about with customers sometimes when we have those chats, when they are trying to decide on the direction for their site. But, yeah, I'm interested to hear what others think and have heard from these types of conversations with colleagues or in the industry, if anybody has any other thoughts on that. I'd be happy to share one as well, but I'd be most interested to hear from our friends on the attendee side. I find, and I'm coming from a slightly, not a lot, but a slightly more technical aspect, I think, than Jen. When I'm looking for something in, let's say, Coveo's product documentation, I generally am looking for the actual document. You know, and actually, we have the option to generate an answer when you go and search our documentation. 
I actually turn that off, because I'm not looking for a generated answer. I'm looking for the actual documentation on a particular feature, let's say. So my feeling is, and I don't know if this is more broadly true, that when you're in kind of a discovery mode, you're going to a website, you're looking for information about a company in a broader sense, this is your early stage interaction with them, a generated answer is a great first step. And, Jen, I thought your comment was really interesting about it. Like, I'll use that as a first step and then dig into the actual... If citations... yeah. Because the citations are showing me kinda where to go to dig deeper into those links. So it's a nice way to highlight those citations to me. I don't know if anybody else has similar or different experiences. It'd be really interesting to hear. But I think that's also, you know, an interesting part of the generative answers, especially the way that we do it. It is that way on Google as well. You know, with those citations linked of what made up that answer, it's a quick way to get to those documents and to get to that right document. So it's a combination of the generated answer and the links, to me, in one. That's it. I think it's really the experience of all of it together. Right? And when you start to layer in all of these components, that's really kind of the richest experience that you're able to give back, because everyone wants to search and receive answers in different ways, and I think that's kind of more and more where we're moving. Like, Jen, you mentioned Google. Like, without knowing it, our last interaction on any site is kind of what forms our new bar. And I think Gen AI is almost teaching us the way that we should be searching, and it's, like, teaching us to search in more natural language. So it's interesting. I see we also have Pat, who just joined. Pat, would you care to introduce yourself? 
We're talking, well, all things Gen AI, really, but if you wanna share a little bit about where you're coming from, what part of the website you own, and how you're looking at Gen AI. Yeah. I'm with Ping Identity. We went live in June with Coveo's generative AI. Mhmm. So far, pretty good. I'm in charge of our enterprise search, and I help on our tools team with publishing tools for the doc team as well. But my main focus is the enterprise search, integrations with Salesforce. And so I think it's been successful so far for us. Like any place, you have anti-AI people. And, Mhmm, so we have some support agents who just try to find where it's wrong. I got it. It's wrong. But other than that, it's been very good. We've had good success. Good. Hardly anybody turns it off. If anyone turns it off, it's usually one of us working, trying to troubleshoot results, and we just want it out of our way temporarily. You know? Yeah. So, yeah, it's been... Curious to understand what you measure to determine, you know, quote, unquote, success in this sort of context. Mostly word-of-mouth and complaints and non-complaints. Yeah. Yeah. As far as internal. External, it's a little bit harder. Some metrics seem to go down, some go up. Mhmm. The biggest thing I've noticed since we've gone live with it is session lengths of searchers are much shorter. Interesting. A lot shorter, like half of what they were pre Gen AI. So Yeah. What does that mean? Who knows? It seems like it might mean they're being more successful. But... And this is an internal audience then? This is not directed to your customers? This is for your support agents? Both. Both. Oh, okay. No. Both. Yeah. Yeah. So that's external mostly, those session lengths. Yeah. Yeah. Internally, with the support agents, we have some struggles in that we have a lot of different products. Mhmm. 
And right now, I'm trying to tackle where our Salesforce product families don't match our metadata in the real world. Yeah. And so Yeah. So sometimes when the agents get a generated answer on an empty query, before they do a query, they get the wrong product, and they get frustrated. Yeah. Yeah. That's an interesting one. I'm actually working with another customer who has a very similar situation. Their product names all have a basic product name, and then there's, added on to that, another aspect of the product name. And sometimes the customer won't give the full product name. And so there's an ambiguity in the context, I guess, really, and, hence, answers can be somewhat ambiguous too. It's... wow. Yeah. Interesting challenges. Ours is a little... we have PingOne in front of half of our products. Right. Yeah. Okay. Somewhat. At least in Salesforce, they've already selected a product family. So we're just creating a third metadata value where we'll map them together and then pass that as the context. So I had a question from someone today, and I'm wondering what your experience is. Do you see a lot of explicit feedback, you know, thumbs up, thumbs down, on generated answers? Do you have a feeling for what percentage of users, internal or external, actually provide that kind of feedback? It's, like, under ten percent. It's really low. Really? Oh, interesting. Hopefully, we... oh, sorry. Go ahead. We just started using the new implementation because we're on JS UI. So we're Right. A little bit behind implementing new stuff. And maybe that'll get better now that they can give textual feedback on any reply. Yeah. That might help. The other customer I was talking to was saying that they see something along the lines of sixty percent of users give some sort of feedback. Yeah. I was shocked. 
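The mapping Pat describes, reconciling Salesforce product-family names with content metadata through a third shared value passed as context, could look something like this. The family and product labels below are invented examples; only the join-table idea comes from the discussion:

```python
# Hypothetical join table: both the Salesforce family label and the
# content metadata label resolve to one shared context value, which can
# then be passed with the query so the generated answer is scoped to the
# right product. All names here are made up for illustration.

SHARED_CONTEXT = {
    # Salesforce product family -> shared value
    "PingOne Example SSO": "sso",
    "PingOne Example MFA": "mfa",
    # content metadata label -> same shared value
    "Single Sign-On Docs": "sso",
    "Multi-Factor Auth Docs": "mfa",
}

def context_for(label):
    """Resolve whichever label a case or document carries to the shared
    context value; None means no product scoping can be applied."""
    return SHARED_CONTEXT.get(label)

print(context_for("PingOne Example SSO"))  # sso
print(context_for("Single Sign-On Docs"))  # sso
print(context_for("Unknown Product"))      # None
```

The design point is that neither side has to rename anything: both vocabularies map into one neutral value, and that value is what travels with the query as context.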
But even ten percent is far higher than people would give on traditional search results, I think, really. But, yeah, interesting. The variability... I expect the other customer I was dealing with was more of a consumer-facing, less technical audience, I guess. It's still such an evolving space, I think. Well, actually, we did an industry report, and one of the findings was that people, at least customers, are not very likely to share when they have negative experiences. They'll just kind of ghost you, prospects and users alike. Oh, really? Which is why it's important to be looking at the data and kind of deciphering what it means. Like, Pat, you're mentioning the session time. I think it really would depend on what your ultimate goal is. Right? So if you're doing it for case deflection, you want them to find an answer quickly, and it's probably a good thing if it's shorter, because they found it and they went on their way. But then maybe, as I think Gretchen was saying, you want people to be interacting more with your content and discovering your content, so then maybe for you, longer sessions would be a better measurement. So I think it's interesting, and it's difficult to nail down what it is that we're trying to measure and what shows success. There's a lot of conversations we've had about that. Pat, if you can answer one more question for me. Thank you so much for sharing all that. What was, or what is, the use case that you have implemented Gen AI for, and did you have, like, initiatives around it from your organization? Like, what pushed you to implement generative answering? So we have it on our general search page. We have it for agents inside Salesforce. We do not have it on case deflection yet because we are in another project kinda redoing that.
So that's on the road map to add it there. Probably the biggest reason was that we were already a Coveo customer, had everything in Coveo, Coveo had what we thought was a really good solution, and we were gonna be out first with it. So those are some of the driving factors of why we did it. Everyone was clamoring for it, we wanted it, y'all had a good solution, and it was gonna be available. So... Fair enough. Very good reason. And kinda to see what happens. We're still kind of in the, you know... Oh, yeah. Learning, you know, facets, how facets interact, and people, you know? So... Mhmm. It's still evolving. And I think you're getting rid of... did it happen yet, or is it about to happen? Getting rid of the "sorry, I couldn't answer your question" is gonna be wonderful. Yes. I believe that is gone now. You basically either get an answer or you don't get an answer. Yeah. I think it happened while I was on PTO. So... Okay. Yeah. I think it has been implemented. For those of you that don't know, that's when an answer isn't generated: it says, "Sorry. An answer could not be generated," and we made the decision, also from feedback from our customers in a beta program, to eliminate that message, so that you just get the relevant search results but no sorry message. Yeah. It's wonderful. The support people especially really appreciate it. Yeah. Oh, I see. Oh, absolutely. In that UI. Yeah. Arguably, though, I think the sorry message was interesting in that it indicates to the user that maybe if you change your query around a little bit, you might get a generated answer, whereas simply not providing that message maybe doesn't guide the user to reformulate their query.
But, yeah, I think it's... And then they were confused, because there were three options: no message, a sorry message, or an answer. You're absolutely correct on that. Yeah. It's an evolving space, which is exciting. Right? We're all just kind of in this together and learning in real time. And that's also why we wanna have these types of round tables, so everyone can kind of ask questions and see how other people are looking at it, because for sure, most things are not super unique to everyone. There's a lot of commonality that can be shared, so there's definitely strength in that. I actually had a question, Andrew. You talked a little bit at the beginning about implementing Gen AI internally first because of some of the, I guess, reservations with going public facing. And I think reservations around Gen AI is also a really important topic. Right? Because as we mentioned before, it's definitely not something you just go into haphazardly. It's not set and forget. You're definitely testing and iterating. Could you maybe share a little bit of those reservations that you have, or your organization has, around Gen AI in general? And maybe someone like Pat, who has already implemented, or anyone else here, has potential answers for that. Yeah. So it really goes back to the hallucinations. As simple as it is, you know, we all saw the news articles of the car company that agreed to sell a car for a penny or whatever. Like, you know, all sorts of things that just came out of it. But I think what we have come to realize is that, properly configured, the only hallucinations that would exist are from the sources that are being fed to the LLM. Right? So if you clean up your knowledge resources, your repos, and whatnot, that's the only place the hallucinations could come from. So it all starts with the data, you know, at the base layer.
That's kinda what we're coming to realize. And the hallucinations that a lot of people were experiencing early on were no fault of anyone, you know, either due to just misconfiguration or not really understanding, because everything was so new for everyone, including the company selling it. Really great point. One thing that came up that was interesting with one of my customers: they have a lot of medical content, even though they're a public organization. They have, for example, how to do CPR, and how it's been done in different ways over the years. And so what we were talking about was, you know, they certainly wouldn't want the wrong, or I wouldn't say wrong, but a previous method, right, to do CPR, let's say, as an example, to come up in the generated answer. And what we talked about was that what's nice about the uniqueness of generative answering is that you can actually select what content you want to push to Gen AI. It doesn't have to be your entire index of content. So you can actually be quite selective with that content. For example, maybe you don't wanna include community posts, because you're not authoring them. You can't necessarily control what somebody says in a comment or a post on a community page. And so maybe that's an area that you don't wanna use in generative answering, because you don't want it pulling from content you don't control. So that was something that came up with one of my customers: they could really silo down to the specific content to target certain questions, but for other questions they get asked, it just goes to the regular search results, because that's what they felt comfortable with. So it just plays into that point that you made, Andrew. That was really interesting. Yeah. And so, Andrew, it kinda sounds like you were able to come to these conclusions by testing it out internally.
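The source-scoping idea described above, keeping the full index searchable while feeding only vetted, authored content to the answer generator, can be sketched as a simple filter step. This is an illustrative sketch under invented assumptions, not Coveo's actual configuration model; the source names, the `GENERATIVE_ALLOWLIST` set, and the `split_for_answering` helper are all hypothetical.

```python
# Illustrative sketch: everything stays in the search results, but only
# curated sources are passed to the generative-answering step, so the
# answer can't be grounded in uncontrolled content like community posts.
# Source names and structure are invented for illustration.

GENERATIVE_ALLOWLIST = {"product-docs", "knowledge-base"}

def split_for_answering(results: list[dict]) -> tuple[list[dict], list[dict]]:
    """Partition search results into (grounding_docs, search_only_docs).

    grounding_docs feed the LLM; search_only_docs (e.g. community posts)
    still appear in the result list but never ground a generated answer.
    """
    grounding = [r for r in results if r["source"] in GENERATIVE_ALLOWLIST]
    search_only = [r for r in results if r["source"] not in GENERATIVE_ALLOWLIST]
    return grounding, search_only

results = [
    {"title": "Current CPR guidelines", "source": "knowledge-base"},
    {"title": "Forum: how I learned CPR in 1995", "source": "community"},
]
grounding, search_only = split_for_answering(results)
# Only the knowledge-base article is eligible to ground the answer;
# the community post remains visible in the plain search results.
```

The design choice mirrors the point made in the discussion: trimming the grounding set narrows what a generated answer can say, while the regular search experience stays complete.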
Is that kind of where this learning happened? I would say so. Yeah. And, you know, just using it, just seeing and understanding the difference: chatting with ChatGPT, you know, for free on the website isn't the same as using an enterprise, you know, solution. So just testing it out internally, using it more as a consumer, at least has given me comfort and understanding that, again, the hallucinations are kinda gonna be the result of the data that you feed it. So it's your data's fault, you know, if you get those. Yeah. We'll just say that. It's actually your fault. So... Yeah. No. It's a fair point. I'm curious, actually. Gretchen, you were talking about looking into Gen AI as well. Do you share similar concerns, or other ones, around generative answering? Is this all helpful in the way that you look at it? Yes. Okay. You guys can hear me. Right? Yeah. Yes. Hopefully the dog is okay. I really don't know what's going on. It's something really weird. I probably have a mouse in the house or something, and I'm hoping that's not what it is. But, anyway, yeah. I mean, one thing I was thinking about, you know, I know on Coveo, on your site, when you provide generative answers at the top of the page, there's, I think, some sort of label or indicator that says this is a generated answer kinda thing. And I was curious whether that is something you recommend as a best practice in those instances, and whether other people who already have these types of experiences on their sites are also doing that, because that's something I think we were thinking about. You know, should we do this? Should we not do this? And, of course, yes, I think everybody worries about hallucinations and getting some just random stuff in there and things like that.
I know Paul feels really passionately about the transparency of Gen AI, so I'm gonna let him weigh in. Yeah. I mean, I'd love to hear from folks who've actually done this as well, but, yeah, very much so. I think transparency is of utmost importance. And, also, different customers might have different wording that they wanna put there. In some cases, it's kind of a disclaimer: hey, you know, this is a generated answer; it may not be a hundred percent accurate. You know? It depends whether you're trying to encourage people to actually reach out to you or dig further on your website, or whether you wanna discourage that. I think of the typical customer service use case being, hey, we'd like to deflect. You know? We want fewer calls to our call center. Okay. But I am also dealing with, say, financial services organizations, wealth management and such, where they want you to call. They feel like their value is in the brains of the people who answer the phone, and it absolutely is. So, you know, phrasing that disclaimer a little bit differently. Like, this is a high-level summary of what we provide; please call us for more information. That sort of thing. I think the wording of that is really important, and frankly it should be in a larger font than it is in our default box. Yeah. So, like, I am by education, and was for many years, a librarian. So for me, I feel sort of strongly about understanding where your information comes from and being able to evaluate your sources for credibility. So I do think having references and pointing people very clearly to where the information came from, I think that stuff is so important. For my organization, we are fortunate because a lot of our content goes through, like, a peer review process. Some of it is very academic.
So, you know, we do have maybe a different level of trust in the validity and accuracy of our content, because it has been really vetted by subject matter experts and everything. So there is a lot of trust there, and I think we're fortunate in that regard, where some organizations might not have that. But I think giving as much information as possible to users, visitors, customers, members about where the information comes from and its source has a lot of value. Yeah. Couldn't agree more. And I will take my librarian hat off now; web person. It makes sense that you're in knowledge management. I know. Oh, yeah. That's a great trajectory. And, honestly, knowledge management is so important, and I think it's in everyone's mind right now. Right? Like, I think Gen AI has actually probably brought more of a spotlight on knowledge management in general. As we only have five minutes left, I'd like to end by just asking those of you who are implementing Gen AI, so I'm gonna single you out, Pat, Andrew. We'll start with Pat. If you had advice for, you know, other people here who are looking into Gen AI internally or externally, what would your advice be to them? You know? We've heard just kinda get started, start and scale, look at your inventory. What would you tell someone after being on the journey that you've been on? Probably to get your metadata in shape first, because ours still isn't, and that's one of our biggest struggles. And I heard earlier someone said it wasn't an exciting job to do, but we're still fighting it. And the Gen AI has exposed our issues a lot more too. So... Interesting. I think that'd be the biggest thing. Thank you. And, Andrew, what about you? Or possibly, Nathan, I think you said you are from the same company. Either of you have any parting words of wisdom? If not, I'll hand it to Paul. Seems like a safe bet.
I'm not sure this is one-size-fits-all in my experience yet, in terms of a first step. But experimentation does seem to be a good idea. Experimenting with safe information, whether that means your audience is internal or, you know, again, as we were discussing with Gretchen, having appropriate disclaimers, or at least clarity of information, around a generated answer, and, you know, perhaps noncritical information. I'm working with a few customers, for example, where it's all about just discoverability of the services that they offer or the information that they have on their site. They would just like people to be able to ask a plain-language, natural-language type question and get some sort of an interesting answer. You know, obviously, it can't be horrendously wrong. But just starting to implement in that way, with relatively safe information. It's not mission critical. It's not pricing of a car. And, you know, again, having these appropriate disclaimers that this is, you know, an experiment here on this site. Please, you know, do not take any price that the robot offers you and think you'll be able to buy a car for a dollar, and that sort of thing, as Andrew was pointing out. I think that's certainly one approach. You know? And the other one, as a few folks have said, is, you know, start internal. But, again, you need to have the same kind of disclaimers around that for your internal users as well. And figuring out, is this useful? Is this actually improving your experience? Be it internal for HR, internal for customer service agents. You know, is this helping? And, you know, think about ROI. I think it was Pat who was talking about ROI. Understanding how to measure some of that stuff. Anyway, fascinating discussion. I should wrap up. We could probably talk about this all day, but we have to jump to another call.
Really great to meet all of you. Cheers. Yes. Thank you so much, everyone. Really appreciate you taking the time. Pat, Cynthia, Matt, Nathan, Chelsea, Kathleen, Andrew, Jen, of course, thank you for being here. If you have any other questions, please feel free to follow up with us. As you can see, we are more than happy to go on and on and on about all of this. So thanks so much for being here. Have a great day, everyone. Thanks, everyone.