So we will get started now. Hello, everyone, and welcome to this month's learning series webinar. My name is Jasmine Oraz, and I work on the marketing team here at Coveo. For those of you who have been to one of these before, welcome back. And for those of you who are attending a learning series webinar for the first time, this monthly webinar program is where we get more hands-on with how to enable Coveo's features that can help you create more relevant experiences. So this is the first session in our two-part series on Coveo platform primers. And today, we'll be focusing on core Coveo plat... oh, did we lose Jazz? Am I back? Alright, sorry about that, I'm back. So, like I was saying, this is the first session in our two-part series on Coveo platform primers. Today, we'll be focusing on the core platform functionalities as well as key features to maximize your Coveo solution experience and how to get the most out of your product.

Before we get started, I'll just cover a few housekeeping items quickly. During the webinar, we encourage you to ask questions as we go along, either using the Q&A box or the chat box in the right-hand panel on your screen. Our experts here today will be answering them throughout the session and at the end of the session as well. And as usual, today's session is being recorded, so you can expect to receive the presentation in your inbox in the next couple of days; stay tuned for that. I'm also super excited to be welcoming back our speakers: Jesse and Jazz, our two customer onboarding specialists, as well as Jason Milnick, our director of customer onboarding. They're pretty much gonna be your new favorite Coveo platform experts, at least for this quarter. So without further ado, I will pass it off to Jesse to take us away.

Alright, thanks for that, Jasmine. Let's go ahead and get started. Welcome to today's webinar. Today, we're gonna be going over Coveo's Relevance Cloud platform. This is essentially where you're gonna be able to cover a lot of ground in understanding your relevance and improving on it. Just before we get started, I'd love to do a quick poll to gauge the audience's familiarity with the platform: whether you've used it just a little bit, whether you're very familiar, or you're somewhere in between. So go ahead and fill this out. I'll give everybody maybe thirty seconds to a minute to see how this comes along. This is the fastest poll I've ever seen complete. Okay, just a few more people; I'll give it about another ten, fifteen seconds. Okay, let's call it there. Perfect.

So we can see the results here. Really interesting, actually: probably about ninety-five percent of everybody is either not familiar or just somewhat familiar, so you've started to get your feet wet. That's great; this session is gonna be perfectly suited, then. You can see the results right here. So with that out of the way, let's get into this.

Okay, we have a pretty packed agenda. We'll start with a bit of Coveo 101, so some commonly used terms. We're gonna go over the architecture and take a high-level look at the Relevance platform. Then we'll get into a quick overview of the process of relevance tuning, so tuning it to create a more relevant experience.
At that point, we'll get hands-on and jump right into the platform. You'll also be receiving the slide deck after the webinar, where I've left some good resources at the bottom that cover what we'll be talking about today in more detail. And I'll just ask that we keep any questions to the end; we'll end off with a Q&A section so that any questions can be properly addressed.

Alright, so commonly used terms. First off, I'm gonna start, honestly, right from the beginning just to make sure it's nailed in and solid, with the Coveo unified index. This is where Coveo gathers all of the items from your sources. So think Sitecore, SharePoint, in other cases Salesforce, Zendesk; essentially, wherever we hold search items that we want to index and then make relevant when people query for them. Then we have our search hubs. Simply put, a search hub is a search page that's powered by Coveo. It's really just that, and this is where we're gonna be sending these indexed items to. Lastly, we have query pipelines. We're gonna be focusing on query pipelines in the demo itself, but simply put, a query pipeline is a book of rules that impacts the results we give when people search. To give some context, query pipelines include machine learning, and they also include the rules that act on the search, which we're gonna be going over.

I'm a visual learner, so here's a better way to digest this information. I have to give a little shout-out to Jason Milnick, who is on today's call and came up with the analogy. Let's think of this as a wheel. You have your unified index in the middle, which is where you're pulling all your items from your sources. So help center articles, documents, ecommerce products, whatever your use case, all of these items are stored right here in the index. An end user, so somebody who's searching on this Coveo-powered search page, heads to a search hub and queries for something; they're essentially asking for something. We answer that question with items from the index. Now, before we answer, and to make sure that the answer is relevant, we subject it to our machine learning models and to our query pipelines. At an honestly extremely high level, this is in a nutshell how we give relevant experiences.

Now, a big question we get often is: should we have multiple pipelines, and if so, how many? There's no solid answer, but there are a couple of factors that can help us decide. Do we have distinct users? Are they searching for different reasons, say with a different intent? And is there a need for a unique set of pipeline rules? In layman's terms, what this boils down to is: are there multiple search experiences that we need to cater to? So, again, on a visual note, let's use the company Apple as an example. You have the Apple Store, which is where you can buy devices, so computers, tablets, iPhones. Then you have the App Store, which is where you can, say, download the Zoom app or the Facebook app. Same company, completely different search experiences. Because they're different, we wanna have different pipelines and be able to cater to each uniquely so that we can give them different search experiences.
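To make that wheel concrete, here's a minimal sketch of a query flowing through Coveo's public Search API. The endpoint and the q, searchHub, and pipeline parameters are part of the Search API; the API key, hub names, and pipeline names are placeholders invented for the example.

```python
import requests

# Public Search API endpoint; the key, hubs, and pipelines below are placeholders.
SEARCH_URL = "https://platform.cloud.coveo.com/rest/search/v2"
API_KEY = "xxx-your-search-api-key"  # hypothetical key with search privileges

def search(q, search_hub, pipeline=None):
    """Run a query against the unified index from a given search hub.

    The hub identifies where the query came from; the pipeline (set
    explicitly here, or resolved by pipeline conditions) decides which
    book of rules shapes the results before they come back.
    """
    body = {"q": q, "searchHub": search_hub}
    if pipeline:
        body["pipeline"] = pipeline
    response = requests.post(
        SEARCH_URL,
        json=body,
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    response.raise_for_status()
    return response.json()

# Same index, two different experiences (the Apple Store / App Store idea):
devices = search("tablet", search_hub="DeviceStore", pipeline="DeviceStorePipeline")
apps = search("zoom", search_hub="AppStore", pipeline="AppStorePipeline")
```

One index feeding two hubs, each governed by its own pipeline: that's the whole wheel in about twenty lines. Now let's look over the platform, the Relevance platform itself. We'll look at a couple of different aspects.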
I'll do this at a high level because, honestly, I could be talking about this all day otherwise. There are a couple of key areas. Content: this is where you'll view everything to do with your sources and your items, everything we're pulling into that unified index. The Search section: this is where we're gonna be spending most of the demo, and it lets you manage your query pipelines and view any of your search hubs, your powered search pages. Machine learning is fairly straightforward: this is where you can set up and enable your machine learning models and view them. And finally, Analytics. This section of the platform is where you can look at, create, and customize your reports. We're gonna see if we have time to get into some of the reporting within today's webinar, but I do wanna emphasize that this is an important part of understanding your relevance, and let me explain why.

So, relevance tuning. This is an ongoing process, and to explain it, I'll walk through the three steps involved. Step one is diagnosing search issues, which we do through reporting. Once we've found an issue in search performance, then we actually try to tune relevance, so we try to improve that search experience for your end users. Again, that's done through query pipeline rules and through machine learning models. Finally, we have ongoing revision. The point that I wanna hammer in here is that relevance isn't set-it-and-forget-it. It's living and breathing, it's longer term, and we even have tools to see if our efforts are actually hitting the mark, if we're actually moving the needle on some of these things. What I do wanna emphasize here is the A/B testing feature that we'll get into during the live demo.

Now, looking at the process of relevance tuning, the one thing I have no problem sounding like a broken record about is step number one, diagnosing issues. It's easy to have hunches on what we can do to maybe improve relevance, but honestly, it's pretty critical to spot and really confirm relevancy issues through reporting. Our reporting system has a lot of tools that will help you do this, so you're not alone in understanding where those gaps might be. What it boils down to is that over time, machine learning is gonna understand better than you and I how to deliver relevant experiences. What that means is that the more rules we add, the more we impact machine learning, so the more we get in its way. Honestly, it's best to keep as few rules as possible and only add them and interject when completely necessary. That way, as machine learning gets more information and more data, it'll only become more and more effective at understanding what constitutes a relevant search. That being said, sometimes we do have edge cases where it is important to add a rule, some of which we'll be covering during the demo.

Now, just before we jump into the platform, one big question is: okay, well, what metrics actually point to successful search? How do we see if this thing is actually working? There could honestly be a hundred different ways to prove successful search.
It depends on your use case, your industry, and how you're tracking metrics right now, but we use four particular metrics as benchmarks to start indicating what successful search looks like. We call these the Coveo big four, and I'll just breeze through them quickly. Especially within our reporting, you'll be seeing these metrics a lot.

Visit click-through: when somebody goes on a Coveo-powered search page, that constitutes one visit. Now, they might search for five things, they might search for ten things. The only thing we're looking for is what percentage of visits had at least one click. The reason we measure clicks is that when you query for something, you're looking for an answer. You would only click on something if it's gonna save you time, answer your question, and essentially solve the need that you searched for in the first place.

Query click-through looks at, for every time somebody searches, what percentage of the time they click on something. Is it one to one? Is it one click for every two searches? What does that metric look like? Now, visit click-through and query click-through are fairly similar, so you might ask: why are we tracking both? The answer is that they can spot different issues in the search experience. Let me explain. A low visit click-through might point to an engagement issue with your website. If you have entire visits with five, ten searches and not one single click, it might be the way somebody's interacting with your search page. So think about the layout, think about your facets; overall, how are people engaging with it? This metric might spot a problem there. Query click-through, on the other hand, simply points to a relevancy issue: people aren't finding the answers they need if this metric is too low. That's where we're gonna, over time, try to improve relevancy, always give them exactly what they're looking for, and bring this number up.

I'll go through the next ones fairly quickly. Average click rank: when they click on something, was it the first thing they clicked on? Was it the tenth? God forbid, in twenty twenty-one, we don't want somebody going to the second or third page to find their results. So we try to keep this under three. And lastly, content gap. This is when we have no results to give at all. It's one of the worst search experiences, but that being said, we actually have reports that can single out which queries from your end users, your customers, your employees, whoever those might be, are leading to no results. And then the question is: what can we do about it?
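Before we jump into the console, here's a back-of-the-envelope sketch that makes the big four concrete, computed from a handful of made-up search events. Coveo's reports calculate all of these for you; the event shape below is hypothetical and only serves to pin down the definitions.

```python
from statistics import mean

# Each record: the visit it belongs to, how many results came back, and the
# rank of the clicked result (None = no click on that query).
searches = [
    {"visit": "v1", "results": 12, "click_rank": 1},
    {"visit": "v1", "results": 8,  "click_rank": None},
    {"visit": "v2", "results": 0,  "click_rank": None},   # a content gap
    {"visit": "v2", "results": 5,  "click_rank": 4},
]

# Query click-through: what percentage of queries got a click.
query_ctr = sum(s["click_rank"] is not None for s in searches) / len(searches)

# Visit click-through: what percentage of visits had at least one click.
visits = {s["visit"] for s in searches}
visits_with_click = {s["visit"] for s in searches if s["click_rank"] is not None}
visit_ctr = len(visits_with_click) / len(visits)

# Average click rank: where people click when they do (aim to keep this under 3).
avg_click_rank = mean(s["click_rank"] for s in searches if s["click_rank"] is not None)

# Content gap: what percentage of queries returned nothing at all.
content_gap = sum(s["results"] == 0 for s in searches) / len(searches)

print(f"query CTR {query_ctr:.0%}, visit CTR {visit_ctr:.0%}, "
      f"avg click rank {avg_click_rank:.1f}, content gap {content_gap:.0%}")
```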
With that out of the way, let's answer the "what can we do about it" and jump right into the console. Okay, here we go. We're in a query pipeline already. I'll have you know that as of right now, this is a test organization, but in any case, all the rules work the same. We're on the overview tab, but let's jump right into some rules. Now, there are a lot of rules we could go through, but I only wanna cover the ones that are likely the most valuable and the most common. That way, when you're in the situation of trying to troubleshoot, trying to tune your relevance, you have some of the most commonly effective tools in your back pocket to do exactly that.

Let's start with thesaurus rules. A thesaurus rule, in a nutshell, says: when you search for one thing, let's also throw in another search term. This is great for acronyms; sometimes things are really commonly misspelled, so that can be another use case. Let's just say, hypothetically, we're a Salesforce consulting agency, so we work on everything Salesforce-related for our clients. But we're noticing that a lot of people are searching for SF, and our documentation has the keyword Salesforce, not so much SF. This in turn is creating a relevancy issue. What we can do here is add Salesforce as a synonym, so that when somebody searches for SF, it also searches for Salesforce. Now, a synonym rule is bidirectional, meaning it goes both ways: when you search for Salesforce, it's also gonna search for SF. Maybe that's not what you're looking for; maybe you only want it to go one way, because the documentation doesn't contain SF in the keywords. That's where the one-way synonym comes into play. Here, let's take SF again: when we search for SF, it's also gonna throw in Salesforce, but it only goes one way. If we search for Salesforce, it's not gonna throw in SF. Our relevance is safe. Just to really hit the nail on the head, what this looks like in practice is as if we're also searching for Salesforce each time. That's essentially what the rule is doing.

Let's move on to result ranking. This is really interesting, especially on the note of gamifying things, because in this case we're working with a point system. What a ranking expression does is essentially let you add or remove points based on criteria; you can see that right here. Now, in terms of the points, I'll just preface this by saying that the score we might add or take away is one factor out of many. In terms of relevancy, you have machine learning: how adept are your machine learning models, how much data do they have under their belt? How close together are the keywords? There's also just a bit of a formula for calculating relevancy. So this is just one piece of the puzzle. That being said, you can choose how many points you wanna add or subtract.

Okay, so what's in it for me? Let's get right to the applied setting of how and why we would use this. Let's say we're a community or a help center, and we're constantly moving quickly: new products coming in, always writing new documentation for our clients, things changing a lot, really quickly. What we can do here is add something like: if the date is greater than January first, twenty twenty-one, meaning the item we're indexing is from January first or newer, let's add two hundred and fifty points. What this does is give a boost to all of our newer articles.
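For reference, here's roughly what that rule amounts to. The @date field and the 250-point modifier are straight from the demo; the exact spelling is a sketch of Coveo's query syntax (including the $qre query extension), so treat it as illustrative and check your own index's field names.

```python
# The rule from the demo, spelled out. RULE_EXPRESSION is the kind of field
# expression you'd build in the rule's filter editor (basic or advanced mode);
# q shows the same boost expressed inline with the $qre query extension.
RULE_EXPRESSION = "@date>=2021/01/01"  # items indexed January 1, 2021 or newer
RULE_MODIFIER = 250                    # stay within roughly +/- 250 points

# Equivalent inline form, handy for quick testing in a search box:
q = f"help center article $qre(expression: '{RULE_EXPRESSION}', modifier: '{RULE_MODIFIER}')"
print(q)
```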
So that way, if we have a heavy amount of new items that we're indexing, which we know are more relevant than the older ones, we can at least give them a slight boost here. Now, a few things I wanna preface. For one, this is, of course, a line of code, but you don't necessarily need to be a coder or understand the code. What we can do is open our filter expression editor; let's take a look. Within here, we have advanced mode, which is where I've entered the code. We also have basic mode, which means you can essentially use fields to arrive at the same result. So: date, date of the document, since last year. You can see what the resulting code is, but this just shows you that it's UI-friendly. You don't need dev experience; this is something you can really just maneuver and figure out based on the fields and on how you're trying to troubleshoot relevancy.

The last things I wanna mention here, two things, I should say. In terms of the points, how did we get to two fifty? I'd have to say that two fifty is a bit of a magic number in terms of Coveo. Generally, we advise not to sway by more than two hundred and fifty points in either direction. Again, a big part of this thinking is that we don't wanna overburden machine learning; we don't wanna get in its way too much. In fact, if you go too far on either side, the platform will tell you: this is a lot of points, and it's very much gonna affect the ranking score based on this criteria, so be careful. You can see the font turns red as a result. And what's cool is you can actually try this out right here. Of course, this is a test organization, but based on what I've added, this gives a slight priority to things from this year compared to things that aren't.

The next rule I wanna focus on is featured results. Let's take another mise en scène; we're doing a lot of improv today. Let's say that recently we've given a webinar on machine learning. Actually, let's pretend that machine learning webinar is coming next week. We wanna get as much attendance as we can; we wanna notify our fan base, our clientele, that this webinar is coming up. What we can do is add it as a featured result. What that means is that we're essentially gonna give one or multiple items of our choice a million points. What that implies is that regardless of keywords, relevancy, or other query pipeline rules, this item is always gonna come to the top, because we've supercharged it with so many points. We can even test it out. Let's say if the query contains "machine learning," then we wanna add the item for our machine learning webinar. Go ahead and add it. Now we can try this out: we'll type in "machine learning," and there it is. Regardless of any factors of relevancy, it doesn't matter; this is always gonna pop up at the top as long as the query contains this. Just to further prove the point, maybe you're looking for a little bit of help, maybe best practices: so, "best practices on machine learning." Doesn't matter. This is still gonna pop up at the top because it contains this.
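The arithmetic behind "it always comes to the top" is simple enough to sketch. The base scores below are invented; the million-point boost is the figure quoted in this session, and a typical keyword-match score lands in the low thousands, per the discussion later on.

```python
# Why a featured result can't be dethroned: the fixed boost dwarfs any
# organic relevance score. Scores here are made up for illustration.
FEATURED_BOOST = 1_000_000

results = [
    {"title": "ML best practices guide", "base_score": 2980, "featured": False},
    {"title": "Machine learning webinar", "base_score": 2410, "featured": True},
]

for r in results:
    r["final_score"] = r["base_score"] + (FEATURED_BOOST if r["featured"] else 0)

ranked = sorted(results, key=lambda r: r["final_score"], reverse=True)
print([r["title"] for r in ranked])  # the featured webinar always ranks first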
The last thing I wanna mention here is that there are ways to add a little bit of code; it's a bit more on the developer side, but we can, for example, add a little bit of UI, some eye candy if you will. We can put a box around a featured result; we can label it as a featured result and put a little flag on it. That way, it doesn't feel like it's being imposed on us: we want you to know that this is something that's coming up, we have this event next week, and it's intended. One more thing I'll mention: in certain cases, there might be articles that are constantly referred to. Maybe a 101, a certain kind of methodology, something your end users keep referring back to because it's a bit of a playbook for how they work. That would be another good reason to include it as a featured result, so that you don't need to search; it's always gonna pop up right at the top for you. Again, definitely be sure to use some of the reports to diagnose that, okay, we do see issues in performance with this query; then let's add a featured result, or a thesaurus rule, to try and sway things in the right direction.

One final thing I'll mention is the option of conditions. Simply put, this is a filter that we can add at the rule level. There are a lot of different ways to do this: you can do it based on location or device, and you can get pretty clever with it, I should say. You can also create new conditions. Let's keep it in an actual scenario: if you have an in-person event taking place in Montreal, then you can add a condition for the featured result to only show for that locale.

We've added some rules. Now, how can we tell if this is working? How can we tell if it's actually going in the direction we want when there are a lot of numbers coming in and a lot of factors? This is where we can include an A/B test. What you'll see here, of course, take the numbers with a grain of salt; this is a test organization. But what we can do is essentially create an A/B test. What's nice here is that we can change the amount of traffic going to our original pipeline versus the new pipeline that has the new rule. So say we add a thesaurus rule, but we have maybe thousands of queries coming in every day and we don't wanna potentially break something; everything's going well, we just wanna see if things could improve. In that case, let's send, say, eighty percent of traffic to the original pipeline, and only twenty percent of the people that search will be subject to the new thesaurus rule we've added. It might take you a little bit longer to get the data you need, but your Coveo ecosystem is preserved, is safe, and that's the gist of it.
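Coveo manages the split for you once you set that slider; the sketch below only illustrates the general mechanics of an eighty/twenty split, where each visitor is deterministically bucketed so they see a consistent pipeline. The function and pipeline names are hypothetical, not Coveo internals.

```python
import hashlib

def assign_pipeline(visitor_id: str, test_share: float = 0.20) -> str:
    """Send roughly test_share of visitors to the pipeline with the new rule.

    Hashing the visitor id gives a stable bucket, so the same visitor
    always lands on the same side of the test.
    """
    digest = hashlib.sha256(visitor_id.encode()).digest()
    bucket = digest[0] / 256  # stable value in [0, 1) per visitor
    return "pipeline-with-new-rule" if bucket < test_share else "original-pipeline"

print(assign_pipeline("visitor-42"))  # same visitor, same answer every time
```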
Looking at the rules you can actually add, it's really as simple as clicking edit, and you're right back in the pipeline, so you can add whatever you'd like. Maybe we'll add the actual search term. From there, I'll just hit confirm, and it'll show you, based on those Coveo big four again (click rank or result rank, click-through, average number of results, and queries without results), which pipeline is performing better. Finally, you have a couple of quick options: keep the original configuration (maybe the rule didn't work), stop the A/B test completely, or extract the test scenario. So, you know what? That thesaurus rule really did improve things; we can see the numbers trending on the test-scenario side, so let's keep that rule. In terms of relevancy and tips, we do have a lot of other rules, but again, I definitely recommend you check out some of the resources I've left at the bottom. I wanted to keep this short, to the point, and as value-packed as possible. From here, I'll see if Jason has any insight to add at this point in the webinar.

Hey, thanks a lot, Jesse. Yeah, there are probably a few additional items we can dive into at this point. I do wanna address one of the questions that came in through the chat. The chat's been active, so thanks, everybody, for your participation there. So Sham has asked a really good question about something that's kinda rare, but it happens. His question is: our click-through rate right now is seventy percent, which is really good. When we just had Coveo automatic relevance tuning, we were at around thirty percent; then we used featured results to take it up to seventy percent. So featured results, as Jesse was showing, can help you push certain results up to the top of your list when you know they're the right results. He says the problem is maintaining this, and he's right: it takes a lot of ongoing maintenance to control something like this. So it's kind of a complicated situation. Certainly, we would hope that Coveo machine learning models are gonna be there to almost be the easy button: you turn it on, and it adapts over time. That's the objective here. But sometimes customers have scenarios like this.

And actually, Don had a question as well, before I move on, which was around how you put ranking rules in place for a limited period of time. I think these two questions go hand in hand. So first of all, if you're having relevance concerns like this, if you're coming out of the gates and your click-through rate is far below what we would expect it to be with machine learning, there's no substitute for reaching out to your customer success manager and getting some time with them to diagnose the issue. Every client's build is a little different; different factors come into play. A good strategy that I've used with customers before is to put some rules in place that allow you to train the model out of the gates, and featured results is a good one. What we're doing with featured results is adding points to the overall relevance score. Think about it like this: if you search for three keywords and all those keywords match, your relevance score for any given result might be, like, twenty-five hundred or three thousand, or whatever the result comes out to be. With a ranking expression, you can control how many additional points you wanna add to that. A featured result is adding a million points to it.
So when you set something as a featured result, you're adding a million points to whatever the out-of-the-box algorithm gives you, which ensures that no matter what, that result will be at the top; it's not gonna be dethroned. Well, when you have automatic relevance tuning turned on, anytime you search for something and there's an associated click, you are training that model. The model can learn from your featured results, or even from results you're boosting with ranking expressions, even though that may not be quite as extreme. If you're artificially pushing results up to the top, the machine learning model will pick those things up and learn from them over time, which lets you eventually remove the manual rules you put in place now that your machine learning models have learned from that behavior. That's a tactic we've used before. Say, starting off, you've got maybe fifty queries that you know are your top queries, and you know the results you wanna have promoted for them. Let's build out a bunch of ranking expressions in your query pipeline to prime the pump, if you will. Once you've gotten some sufficient traffic, maybe ten thousand queries, five thousand clicks, something like that, you can start to back those rules down. So that slider Jesse was showing: maybe you set it up to a hundred. A month down the road, you slide it down to fifty, then down to twenty-five, until eventually you don't need the training wheels anymore. You can take the result ranking rules all the way out, and now the machine learning models have sufficiently learned from your top content.

Historically speaking, we've done this manually: going into the query pipeline and creating these rules, then looking at them again a month later and manually editing them all. But a really nice new feature we have coming out is called groups and campaigns, and it allows you to set some time-bound rules around such things. To give you an idea of what that looks like, hold on a second, I'm gonna try to share my screen. Here we go. You probably won't see this in your query pipelines just yet, but this is something we have available internally right now that we're using for testing. It's called groups and campaigns, and it allows you to set up a book of rules in your query pipeline that can be time-bound. You can add a group and give it a name. This works great for marketing campaigns or commerce campaigns, or anytime you want to put a set of rules in place for a limited time. Give it a name, set your time period for when you want these rules to be in place; so let's say I want these to go into place next week through the end of the month. And you can set a condition on there, so if you want it to operate only under specific queries, or specific user context, or tabs, or whatever, you can go from there. Then what you'll be able to do from this point, once you've created your group (actually, we've put an example here), is that when you add a new rule, or look at any of your existing rules, you have the option to associate that rule to one of your groups. So you can create a whole bunch of different ranking expressions, featured results, and so forth, and associate them to this group. Once they're associated with that group, they're governed by the settings on that group and campaign. So when the campaign expires, those rules will be shut off. And you can roll them from one group to the next; you have a lot of flexibility here. One thing that's really interesting is that you can edit a whole group at once. So if you have several rules all set to boost content by a hundred and you wanna have that running for a month, then next month you can create a new group with all the same rules, backed down from a hundred to fifty. You can kinda set it and forget it, put these rules in place so you only have to go through the manual administrative part of this process once, and then let the application do the rest.
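To make the time-bound idea concrete, here's a small sketch of what a campaign amounts to. The feature was pre-release at the time of this webinar, so the field names below are hypothetical, not the product's actual schema.

```python
from datetime import date

# Hypothetical shape of a group/campaign: a named window plus the rules it governs.
campaign = {
    "name": "ML webinar push",
    "start": date(2021, 11, 22),
    "end": date(2021, 11, 30),
    "rules": ["featured result: machine-learning-webinar",
              "ranking expression: @date>=2021/01/01, modifier 100"],
}

def active_rules(campaign: dict, today: date) -> list:
    """Apply the campaign's rules only while its window is open."""
    if campaign["start"] <= today <= campaign["end"]:
        return campaign["rules"]
    return []  # campaign expired: its rules are shut off automatically

print(active_rules(campaign, date(2021, 11, 25)))  # inside the window: rules apply
print(active_rules(campaign, date(2021, 12, 5)))   # window closed: nothing applies
```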
So that was a really good question, and I know it's a tough one. Again, this is not meant to take away from the advice you're getting from your CSM. I would definitely talk to your CSM about your situation, and we can work with you one-on-one to help improve that. Alright, so that was one additional topic. Yeah, go ahead.

Oh, I'll also just quickly jump in. I see that Janina had a question. I answered it quickly in the chat, but it was about when's a good time to start checking in on your reports, and what counts as a sizable amount of data. Generally, it depends on the amount of traffic and queries you're receiving, but a couple of weeks is usually a pretty good time to start checking in and seeing a good amount of data. I'd even say keep checking from there on, so you can get more insight as those numbers progress. And, of course, don't be shy to reach out to your CSM, or your onboarding manager if you're in onboarding, if you have any questions about the functionality of your reports themselves.

And that's a really good segue, Jesse, because I feel like we have a little bit of time left over here, and there are multiple things we can talk about. If you have questions, keep asking them in the chat; you can shift the course of the remainder of this webinar. But I thought it might be interesting to go through some of the standard reports that we can build for you. If you've been onboarded by one of the fine members of this team here, you're probably already familiar with the search details report. This is one we'll work on for you. There's also the option, if you're in the reporting section here, to hit add, and you'll find different dashboards available as templates. This one is very similar to what we call the detailed summary template. You can add it here, and it has virtually all the same information you would get in the search details report. When you open up your reports, you can go through a lot of different things, but the search details slash detailed summary has really all the information you need to get a good idea of where you stand on basic search metrics, in a nutshell. An important note here is that you can use this period selector.
This option up here at the top right-hand corner allows you to select the date range over which you want to evaluate performance. On your report, the first tab you'll find is called summary, and it provides a really good overall summary, if you will, of everything happening in your search interfaces. This little time series graph here is graphing out a few activity metrics: you can look at search event count over time, click event count shows up here as well, plus unique visitors and unique visits. Now, this is absolutely a test environment you're looking at, so when you start looking at some of the results, don't be turned away by them. Hopefully, your results are gonna be much better than the ones we're just tooling around with here in our test org.

One nice little report we have up here is the visits by hub. So if you have an org with multiple use cases and multiple search hubs (there was a question asked earlier about whether you'd have a different query pipeline for different audiences, B2B versus B2C, and the answer is yes): anytime you have a different audience, you're likely gonna have a different search interface for it, and that search interface will have a different query pipeline. This little pie graph will show you approximately what kind of traffic is coming from each of your individual search hubs. And the reason I love pie graphs (I'm sure my team here loves them too) is that, like pretty much all the reports you'll find in the Coveo reporting dashboards, if you hover over one of these pieces of the pie, you can actually click on it. When you do, it adds that as a filter at the top of your dashboard. Once that filter is applied, it rerenders not just the tab you're looking at, but all of the tabs across your entire dashboard, and now you're looking at everything through the lens of a single search hub. So this is one way that, if you wanna drill down into any specific information, you can do so. Not only can you do this with pie graphs, you can do it with pretty much anything. In this case, I'm looking at queries down here, and I see a top query in this Coveo full search hub is "platform." If I clicked on it, it would add the user query "platform" to the top of my dashboard, and now I'm getting an even further drilled-down perspective. Whenever you add filters to your dashboard like this, you have some additional options. With the three dots, you can easily remove a filter. You can also disable it, which grays it out; essentially, it remains at the top of your dashboard, just temporarily disabled. So that's a nice feature: you can toggle it on and off.

Hey, Jason. Yeah? Yes, go ahead. I see that we've gotten a question. I know we're dealing with a test organization here, but there was a question about spotting a relevancy issue and just showing what that looks like in practice. I don't know if there might be any good examples, whenever you're ready, in the content gap section. Yeah, absolutely. We'll jump right to this. One more little thing I wanted to let you know first.
If you do have a nice combination of filters you want to apply, or one you know you'll want to use regularly, you can save your filters by creating a named filter. I can just give this one a name, and once I do, it's added to your bank of named filters. You can see it renders a different view here. And now, if I remove it altogether and go back to my filters list, anything saved as a named filter shows up right here at the very top so you have quick access to it. So that's really nice.

Among all of these tabs, the content gap tab, especially when you're just getting started with Coveo, is one of the most impactful places to start to really address relevance issues. When we talk about those performance metrics, and Jesse mentioned the big four, content gap is related to many of them. Take query click-through, for example, which is the percentage of queries that have a direct click: if a query has no results, you can bet there won't be any clicks. So by closing content gaps, you're having a positive impact on a related metric, query click-through. That's why we'll often start here and take a look at where our content gaps are.

As I scroll down, you can see the way this report is built: you've got a view of your queries overall over time, and then, superimposed, your queries without results over time. They should be relatively correlated, but at the end of the day, we wanna see these two moving in opposite directions. You can take a look over here to see which of your search hubs are driving the most content gaps. And then as you scroll down, you can actually see what your content gaps are. These are the things people have searched for, and you can see how many searches had no results, and across how many unique visits those searches came up empty.

To diagnose these things, you can start simple: maybe they just spelled it wrong. Or maybe you know what they're looking for, and it's content that resides in a source we haven't indexed yet. Well, that's a big issue: if we identify what people are searching for and it's not indexed with Coveo, the first step is to get it into the index. Or maybe it's indexed, but for whatever reason the content isn't showing up on the search interface. There could be some other issues happening here. Maybe you have a filter expression built into one of your query pipelines, or into the user interface itself, that's prohibiting that content from being displayed; that's something you might want to look into. It could be a permissions thing, or it could be simpler: maybe they're using an outdated product name, and we've recently rebranded all of our products with different names. That would be a good opportunity to leverage a thesaurus rule; thesaurus rules are pretty good for closing content gaps. Another really effective tool for closing content gaps is featured results. Because unlike a regular ranking expression or a normal query, a featured result will actually boost content based on a query regardless of whether or not the keyword is actually in the document. So you can use featured results to put content there where it normally wouldn't have been.
And finally, our automatic relevance tuning model organically closes content gaps over time. The way that works: say you search for a certain keyword and there are no results for it. If, in that same session, the user rephrases their query in a different way, results pull up, and the user clicks on a given result for the refined query, then the next time somebody searches for that original query that had no results, you might actually have a result being promoted by the automatic relevance tuning model, because of similar queries in previous visit sessions. People who looked for this also refined their query to look for something slightly different, and now that result's been learned. So that can happen organically through ART. Okay, so those are your content gaps, and there are lots of different tactics we can take to diagnose them.

Maybe one more area I'll show you is the search event performance and document performance tabs in this report. These show you basically all of your search activity. As you scroll down, you'll find a list of all of your queries and all the metrics associated with them. So if you really wanna get granular and look at what people are searching for and what those results look like, here you go. You can see the queries, and in this table you'll see the number of results displayed on average when someone searches for a given query. You'll see the search event click-through, the average click rank (when that query does get a click, where those results sit in the list), how many times it's been searched for, how many times it's been clicked, how many unique visits, and unique visitor IDs.

And there's this thing called relevance index. If you ever needed one metric to rule them all with Coveo, that's sort of what relevance index does. It looks at a lot of these other key metrics, such as search event click-through, average click rank, number of results, and so forth, and aggregates them down to a single metric. The best you can get is one, so it's kinda like a percentile scale. Usually, as you can see here, there are a lot of low results shown in red. If a metric shows in red, it's a problem area we want to address. If a metric shows up in green, it's performing well and is within benchmark range. And if it shows up as black, it's sort of middle of the road.
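Coveo doesn't publish the exact relevance index formula, so the sketch below is purely illustrative: it only shows the general shape of folding several normalized signals into one zero-to-one score, where higher is better.

```python
# Purely illustrative; NOT Coveo's actual relevance index formula.
def toy_relevance_index(click_through, avg_click_rank, result_count):
    rank_score = 1 / max(avg_click_rank, 1)         # rank 1 is perfect, rank 15 is poor
    has_results = 1.0 if result_count > 0 else 0.0  # no results is an automatic zero
    return (click_through + rank_score) / 2 * has_results

print(toy_relevance_index(click_through=0.90, avg_click_rank=1.2, result_count=14))   # healthy
print(toy_relevance_index(click_through=0.01, avg_click_rank=15, result_count=300))   # problem area
```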
As you're looking through these things, you might wanna sort by event count or by unique visits so your most popular queries and most common activities show up first, because they'll have the highest overall impact on your experience. Then you can start looking at the relevance index to identify the very poor results. Again, this is all test data, but here are some things you can look at. For example, this query of "euro twenty twenty-one" was searched for many times, and there were many results for it, but it was hardly ever clicked on; it has less than a one percent query click-through rate. So we might wanna investigate what people are clicking on when they do click, what they may actually be looking for, and then use a few different tactics to fix it.

Then, similarly, you have "machine learning" here, which has a somewhat better click-through rate and fewer results, but the average click rank is always about fifteen. So people are digging down pretty low in the list to find the results associated with this. That's one thing machine learning will help with over time; that's what automatic relevance tuning does. People searching for "machine learning" are digging down to page three to find results; over time, that result will start moving its way up the result list until it's number one or number two. It takes a little time for that to happen; it takes a lot of gathered activity and data. So you might wanna go in there and help it along a little. If you figure out what people are actually clicking on when they search for this, and that particular result is showing up lower in the list, you can use a ranking expression to boost it up a bit and give it some acceleration. Yes, and this has been our internal testing org, so yeah, we've been doing a lot of searches around soccer and the euro and all that kind of stuff. Good times.

And then maybe one more thing: this document performance tab. It works hand in hand with the search event performance tab, so I'd always recommend customers use them together. The document performance tab is very similar, but now we're shifting our focus to the clicks. You'll be able to see your top clicked documents, how many times they were clicked, and then the trend, in case you have documents trending upwards (in this case, all of these are). By the way, the way the trend analysis works is that it uses whatever period you've selected here and compares it against the previous period of the same length. That's how the trending works.
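That previous-period comparison is easy to sketch: take the selected window, line it up against the window of the same length immediately before it, and compute the change. The click counts below are made up.

```python
from datetime import date, timedelta

def trend(clicks_by_day, start, end):
    """Percent change of the selected period vs. the same-length period before it."""
    length = (end - start).days + 1
    prev_start = start - timedelta(days=length)
    prev_end = start - timedelta(days=1)
    current = sum(n for d, n in clicks_by_day.items() if start <= d <= end)
    previous = sum(n for d, n in clicks_by_day.items() if prev_start <= d <= prev_end)
    return (current - previous) / previous if previous else float("inf")

# Made-up click counts for one document over two weeks:
clicks = {date(2021, 11, d): n for d, n in
          [(1, 3), (2, 4), (3, 2), (4, 5), (8, 9), (9, 7), (10, 8), (11, 12)]}
print(f"{trend(clicks, date(2021, 11, 8), date(2021, 11, 11)):+.0%}")  # trending up
```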
Then you can see associated queries over here. For example, this academy orientation sessions document has been gaining popularity. If I click on it (and remember, that adds it as a filter at the top of the dashboard), you'd see some additional metrics around it, such as other user queries associated with it. In this case, a little bit of an anomaly: there were no queries, just people clicking on the document, so that was probably not a great example to use. I'll remove it. But here's an example with "machine learning." If I click on machine learning as the query, what you'll see in the top documents are the actual documents that were clicked on when someone searched for machine learning. So this gives you an idea of what the best documents might be, or the most popular ones, based on trending and so forth.

So that's your search details report in a nutshell. Again, there are lots of tabs in here. This is something your onboarding manager or your CSM can take you through in detail, but nothing beats going in and exploring on your own. So go into your org; you can actually add that detailed summary report to your org if it's not already in there. Then take a little perusal through all of the different tabs in here, so you can look at your performance across several different vectors and get some great insights into what's happening within your Coveo implementation. So I'll stop here and see if we have any questions.

Perfect. Yeah, so we have a few minutes left. If anybody has any other questions, don't be shy. I can see two that have come in that I want to address specifically. One of them is: how can I tell if my machine learning is working? Really good question. I'll just share my screen quickly to show one means of doing this. We've gone over a lot of the tabs in terms of the reporting itself, but we also have this machine learning tab, which shows you, right here, the percentage of visits with a click on ART. ART is automatic relevance tuning, our machine learning model that essentially learns from every single behavior, every single click and interaction. Based on what's worked, literally the last thousand times, we've noticed that people have been clicking on this, so let's inject this result at the top: that's exactly what it does. And in theory (of course, this is a test organization), you would see this number go up over time. As it processes more queries, you'd see it gain more weight, understand how to give those relevant experiences more quickly, and you'd essentially see what percentage of people clicked on something because it was recommended by machine learning. Okay, do we have any other questions?

One thing maybe I can show you here too, for machine learning validation in your org: if you go into the models section, you can pinpoint a specific model. Say you want to test out how one is performing; you can see that right over here, how many different queries it has learned. Basically, certain models, and I mean query suggestions, automatic relevance tuning, and recommendations, learn from user-based activity. Over time, the more activity happens, the more data it collects, and the more candidates it comes up with. You can take a look at this if you drill down; you get some examples of what your candidates are. In this particular ART model, you can see there have been eighty-three events generated from it. These are the click events that train the model, and these are the search events that train the model. And down here, you can see some of the candidates: eighty-three different words it's learned from, and these are some of the queries it's actually learned from.

Also, there are some filters on your model. We talked a little bit about search hubs already. So imagine the spoked-wheel thing again: you've got your index in the middle, and you've got your different search hubs, kind of like query pipelines. Those individual search hubs can actually be used as filters in the model. So you can use the same machine learning model in three different query pipelines that are serving three different audiences.
But because they're segregated by search hub, certain queries happening in hub number one, and the related results, will be learned, and that makes that correlation for that search hub. Somebody over on this other search hub, which might be a different audience altogether searching for the same query but with a different context, may be clicking on different documents, and the machine learning model will be able to discern the difference between someone on this search hub looking for a particular query and someone in that other context. That's defined by your origin context filter here in the machine learning models. You can look down here and see it by origin 1, which is your search hub, and then even by tab. So within a search page itself, if you have an all content tab, a docs tab, and maybe a support tab, even those individual tabs will train the model a little differently; it sort of segregates or subdivides the learnings of that model. And you can visualize all of that here in the models section in your cloud org.

Perfect, thank you for that, Jason. I can see just a few more questions; we'll have to wrap up quickly because we're almost at time. Are there any go-to rules, or rules we should prioritize? The one thing I'll mention is that, unfortunately, there aren't any specific go-tos. There are ones that are more common, but there are always use cases in which it might be one or the other. One important thing I do wanna hammer in is thinking about the actual end user, their journey, and the kind of search journey they take. What is their intent? What's their purpose for searching? What's important to them, and what are they looking for? Actually testing out what they searched, seeing the results, and thinking about whether that adds up can go a really long way toward understanding what kind of rule might need to be added, or whether it's content that needs to be built out. Essentially, putting on a bit of an investigator hat can go a long way toward understanding the root of those issues, and then which rule might be the right trick in our pocket to apply.

Cool, and there's maybe time for one more question. I'll take this one because it was a good one: how do you know how much influence machine learning is having compared to other rules? When you're in your query pipeline, if you go into machine learning and look at automatic relevance tuning, for example, when you configure the model, you have the same slider. You can see, based on the way this works, that by default you're boosting your top learned result by two hundred and fifty. So you can compare that directly against the slider you're using when you're building a result ranking rule. That's how you can tell. And the next session is gonna be all about the basic Coveo search algorithm, and we'll look at how you know how much boosting is affecting your results, both for manual rules and for ART. So we'll dig really deep into that topic in our next session; I hope you can join us for that one. That was it for today, I think; we're at the top of the hour. Alright, thanks, Jesse, Jason, and Jazz, and everyone who stayed with us on the line today.
We hope you enjoyed that. And like Jason mentioned, part two is coming in the next couple of weeks, so stay tuned for the invite; it should be coming to you soon. It will be on December second. And, like we mentioned at the beginning of the session, you'll receive the recording and any associated documentation following this session. So on behalf of the team, thanks, everyone, for joining today. Bye.
Learning Series: Coveo Platform Primer
New technologies are emerging fast, and the lines are blurring on where your growing enterprise should focus.
If you want to maximize your profits, you need clarity.
Imagine a platform that automatically understands your customers so intimately, you know exactly where to look for new ways to grow.
This is what Coveo’s Relevance Platform can do for you.
This is part one of our Learning Series that covers the essentials of the Coveo Cloud Platform.
Watch our experts give a walkthrough of the Coveo Cloud Platform and explain the higher-level solution architecture and other key features.
This webinar provides a primer on core Coveo functionalities:
- Learn about Coveo terminology
- See how Coveo solutions and high-level cloud architecture work
- Tune relevance with machine learning models
- Keep up ongoing reporting and revisions through A/B testing and more
The Coveo Cloud machine learning platform can work with your specific industry and make predictions based on accurate real-time data rather than vague guesses.
Tune into our Learning Series webinars to learn more about how the Coveo Cloud Platform can help you.


Make every experience relevant with Coveo

