We're seeing a few people trickling in already, so I'll just take a few seconds and get started. One thing I will note is that this session is meant to be a little bit more interactive. What this means is that, as we go through it, you should already have an organization under your name. You should have received the invite this morning; it should be called GenAI Workshop followed by a number that is specific to you. I sent the invites out incrementally, but they went out around 10 AM Eastern time, so 7 AM Pacific and about 4 PM Central European time, unless I'm mistaken on my numbers. So you should have received that already. If you didn't, you can still follow along, but you won't be able to play with RGA directly, which sort of defeats the purpose, so let me know and we can get that set up. But yes, if you were registered for this webinar, you should already have a working organization. The first part of this webinar is going to be a little more theoretical: I'll talk about the technical workings of the model and how it works behind the scenes, and after that we'll jump in and actually get our hands on it. First, the obvious disclaimer: we're a publicly traded company, so keep that in mind and don't start buying or selling stocks based on what you see today. Now, about the partner hour series: we had the Art of the Possible last week, and today is day one of the builder workshop.
A reminder that next week we have a seller enablement session hosted by Liz, who will talk about how to build a practice and how to work through buy versus build. So that one is a little more commercial and business oriented; today is more about the technical side of things: understanding how you build it, how you get there, and, once you've sold Gen AI and want to build a demo of it, how you get that working. It should be straightforward. By the way, if you have questions as I go through this, we'll have a Q&A session at the end, but I'll also try to keep an eye on the Q&A in the chat, so you're more than welcome to ask them there and I'll get to them as quickly as I can. Again, this is meant to be a little bit interactive. Let's start with an overview of a very simplified version of the Coveo architecture. The way Coveo works, for pretty much everything, is that on the left side we have the websites, the applications, the sources of content. Everything lives outside of Coveo: whatever you want to be able to search against exists outside of Coveo, and we have a way to get it into the index. Using connectors and creating sources in Coveo, we can get to any type of content, be it your product catalog, your Salesforce content, your SharePoint, your sitemap, and so on, and bring it inside Coveo. Then, if I look at the other side of the screen, we have the website where you actually search for that content.
That's either through a full search box, through recommendation panels, or through page navigation. When you make a query, it goes through a query pipeline where your business rules and your machine learning change either the query directly or the results that come back from the index, and then we return the results to the end user on the front end. Similarly, when the user interacts with Coveo components, we send usage analytics events, which are then used to feed the machine learning in a loop, so that we learn from those events and give better results over time. That's the main gist of Coveo, and you should already be familiar with it. If you're not, I highly recommend you look into Coveo Level Up, at levelup.coveo.com. There are a lot of courses you can take there; the fundamentals course is one of the main ones you'd be interested in, and you can start from there. How this works with RGA is that we have this extra piece that sits a little bit outside of the diagram, though it sits a little bit inside as well. One of the main differences between RGA and a lot of our other machine learning models is that it is not built to learn directly from the behavioral analytics. What I mean by that is that its main source of learning is your index. And that's the reason why, this week, we're going to be able to build a fully working RGA model: it does not require usage analytics in order to provide answers.
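To make the query side of that loop concrete, here's a minimal sketch of what a request to the Coveo Search API body looks like. The endpoint follows the documented Search API, but the hub and pipeline names here are made up for illustration; check your own org's configuration for real values.

```python
import json

# Documented Search API endpoint (your org may use an org-specific URL).
SEARCH_URL = "https://platform.cloud.coveo.com/rest/search/v2"

def build_search_request(query, pipeline=None, search_hub=None):
    """Build the JSON body for a Search API call.

    The query pipeline is where business rules and ML models
    (including RGA and semantic search) get applied server-side.
    """
    body = {"q": query}
    if pipeline:
        body["pipeline"] = pipeline      # routes the query through a specific pipeline
    if search_hub:
        body["searchHub"] = search_hub   # identifies which search interface sent it
    return body

# Hypothetical pipeline/hub names for illustration only.
body = build_search_request("how to propagate bamboo",
                            pipeline="default",
                            search_hub="CommunityHub")
print(json.dumps(body))
```

You would then POST that body to `SEARCH_URL` with a bearer token in the `Authorization` header; the response contains the ranked results that the pipeline produced.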
What it needs is content in your index that it can learn from, so it can understand what your content is about, make chunks out of it, and create vectors. From that, it's able to generate answers to your questions. So I put RGA there because it sits a little bit outside of the rest; essentially, when you make a query, Coveo asks RGA a few questions, but we'll go a little deeper on that in a few seconds. When we're talking about implementing RGA, there are three key moments that are important to understand. The first one is simply creating, or building, the model. When we're tasked with doing that, we need to think about: how do I scope my content? How do I make sure that the content RGA is learning from is factually accurate and up to date? I think this is the single most important aspect of RGA: it is truly garbage in, garbage out. RGA assumes that the content you give it is accurate; if it isn't, it will start giving answers that are not accurate. That's obvious, but it becomes even more evident with RGA: when you ask questions, it might start giving wrong answers, and that is most often caused by the content you're indexing not having the right information. So when we're building the model, we want to make sure we're scoping the content to only what we know is factually accurate and up to date. Once we've figured that out, we can create embeddings based on that content, and then we tell RGA that this is the content it needs to use.
That is not to say that you should only index that type of content in Coveo. You can definitely index other content. For example, say you're in a self-service system: you have your cases indexed, you have a forum where people ask questions and give answers, and you have your technical documentation, which you know is accurate. For the sake of RGA, it might be useful to scope it down to the documentation only, so that we know the model is learning from accurate information. On your search page, you'd still get responses from the forum, which might not be fully accurate; sometimes on forums it's the community answering itself, and answers can be very out of date, especially a four-year-old post when your product is evolving quickly. So you still want the forum available on the search page, but we tell RGA to learn only from the documentation. That's one important thing. I see we have a question in the Q&A: is there any guidance around the size of the documents for RGA? There is, but it really comes down to the specifics. There's a maximum number of documents Coveo can process, though that limit is getting higher, and it's typically based on your license. Normally, if you tell me you have a million documents you want RGA to learn from, my first instinct would be to ask why you have a million documents that are factually correct; that feels like something you'd need to narrow down. Sorry, you meant the size of the documents themselves. I forget the exact number, but it is quite high: if you have a hundred-page PDF, we will be able to parse through a lot of it. It's just a matter of getting there.
There are ways to approach this, honestly. For most of the content you have on your web pages, or in most PDFs that are actually useful, they're going to be within a range that is small enough for us to handle. In the worst-case scenarios, we'll end up splitting those large documents. Thank you for that question. So let's continue on to the retrieval aspect; what we just covered was the model building. Retrieval is the second key moment: your model is already built and has learned from the content. Hang on, let me dismiss this; it still says I have three questions in the chat. Alright, there we go. So for retrieval, that is when you've built your RGA model fully and the user is typing a question. In order to know which piece of content is useful, we need to retrieve that information for the LLM and get the chunks. Chunks are what your documents become in Coveo when you index with RGA: they're separated into chunks, and those chunks are individually vectorized. A document can have hundreds of chunks. So when you make a query to retrieve those chunks, we're finding which specific chunks of your documents are useful for that answer; we're doing a relevance search in that regard. We're also using the other Coveo machine learning models, things like ART or DNE, to make sure we have the most relevant content at the top of the page. And semantic search is highly leveraged here as well.
Semantic search is the model that vectorizes words in an efficient manner, so that we can understand which words mean what in context and return the proper content. By the way, I mentioned semantic search last week, but I want to stress it again because it is vital: if you implement Gen AI without semantic search, you're going to see poor relevance and a poor answer rate when it should be higher. But don't worry: a semantic search license comes with Gen AI, so if you have a Gen AI license, you have a semantic search license built into it. Just be aware that these are two models you need to individually add to the query pipeline; we'll do that today, and we'll walk through how to build them. So that's the document retrieval stage. The last stage is generation: that's when Coveo takes all those chunks, together with the question, and sends them to the LLM. I mentioned the Passage Retrieval API last week, the new feature coming out this quarter. With it, the generation part would be up to you: the retrieval aspect is always handled by Coveo, but with the Passage Retrieval API, during the retrieval stage we would send you all of the chunks we think are useful, and then you take care of generating an answer based on them. With the out-of-the-box, default Coveo RGA, we just handle that part for you, and it gives you an answer based on those chunks. Now, if we look a little bit deeper, here's a slightly more advanced architecture. I even talked a little bit about this last week, but we're not going to touch it today because it would be a little harder to test.
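The retrieve-then-generate flow described above can be sketched in a few lines. This is only a toy stand-in: Coveo's real chunking is passage-aware and its vectors come from learned embeddings, whereas here I use fixed-size word chunks and bag-of-words cosine similarity purely to show the shape of the flow.

```python
import math
from collections import Counter

def chunk(text, size=8):
    """Split a document into fixed-size word chunks (a crude stand-in
    for Coveo's real passage splitting)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def vectorize(text):
    """Bag-of-words counts as a toy substitute for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question, documents, top_k=2):
    """Score every chunk of every document against the question and
    return the top_k — the part RGA (or the Passage Retrieval API)
    hands to the LLM for generation."""
    q_vec = vectorize(question)
    chunks = [c for doc in documents for c in chunk(doc)]
    scored = sorted(chunks, key=lambda c: cosine(q_vec, vectorize(c)), reverse=True)
    return scored[:top_k]

docs = [
    "To propagate bamboo, cut a healthy stalk and place it in water until roots form.",
    "Composting in an apartment works best with a sealed bin and regular turning.",
]
best = retrieve("how do I propagate bamboo", docs, top_k=1)
print(best[0])
```

In the generation stage, those top chunks plus the original question would be sent to the LLM as context, and the answer streamed back.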
If you have restricted documents with permissions, for example, Coveo will index those permissions, and we only return the chunks of documents that you have access to, because we use the normal Coveo machinery to resolve them. So we get the documents into the index; then, when we make a request as an authenticated user, it goes to the Search API, which verifies the permissions as it usually does, gets the context for prompt generation, and then calls OpenAI in Azure to generate an answer and return it. And, as I talked about (maybe I should have shown this when I was talking about chunks), when we index the content, what RGA does is take that information and turn it into several chunks, or passages if you prefer to call them that. We then create vectors based on those chunks specifically, so that they can be returned efficiently at query time. When you make the actual request, the query goes into the query pipeline, which applies the machine learning models; semantic search calls the vectors directly in the index, and the request also goes to the index before returning the answer. So you get the ranked results and everything, the typical things you see in the normal Coveo pipeline outside of RGA. Then RGA goes and asks the vectors held in memory for the right chunks, inserts them alongside the question, calls Azure OpenAI to generate the answer, and we stream that back to the UI.
That's why, if you've played with RGA live, with our customers or ourselves, you'll often notice it takes a few seconds for the full answer to load. There's that extra step of needing to generate the answer, and that always takes a little bit of time because it's computationally hard to do. But we stream the answer asynchronously, so the regular results are still available on the page, and as we get the answer we stream it to the front end and show it to the user. So that's it for the quick theoretical part. What I'll do now is continue sharing my screen and talk about indexing the content, because that is always the first step of any Coveo project: if you don't have indexed content, you can't get anything done. That's where your organizations come in, the ones I created for you earlier today; that's where you can go ahead and play. For this workshop, as mentioned, RGA requires the most accurate data to show useful answers, so choosing the right source of content is crucial. You have two options. You can follow what I've written here on the slide, which is to go to coveo-power-up-challenge.vercel.app/web/wikiHow. That is the content I decided to use: a specific, narrow version of wikiHow, because the real wikiHow website is way too large and would take too long to fully index. So I've narrowed it down to only the sections about gardening and the home.
That's why you'll see some of the questions I use to showcase it are narrowed to those topics. So you can either use that or, if you have a prospect or a use case in mind, you can decide to go with that instead. If that's the case, there's always the question of how to index what's right for the customer. Ideally, you want a sitemap; that's the best way. There is also the web source, which is always available, but be aware the web source is slow on purpose, because we don't want to accidentally DDoS a customer's or prospect's website; we force it to only call one page per second. When you do that and you have thousands of pages, that's a lot of seconds to wait before we fully crawl the site. The sitemap source doesn't have that limitation, so just be aware of that if you decide to use the web crawler; the sitemap is a lot faster. So I'll continue sharing my screen, but I'll open this up. As I mentioned, it's the power-up challenge wikiHow sitemap; in fact, I'll put a lot of these links in the chat. There we are. If we look at it, it's 706 pages: things like how to propagate bamboo, compost in an apartment, make protective gear, freeze cilantro, use potash, and also things like turn an expander and paint black chrome. So there are quite a lot of them, and a lot of random pages. Be aware these were made by users, sometimes random users, so we might see some interesting pages in there, but I think we have enough useful content for what's important. Oh, you cannot see the chat.
I don't know how to share my screen without the chat, but let me zoom in a lot here. There we go: the URL is coveo-power-up-challenge.vercel.app/web/wikiHow (in one word) /sitemap.xml. This is just a sitemap that's available to use. Now, how would I go about building that source? I've already done it here, but I'll create another one, or rather pretend to create another one, just so we can see. What you'll want to do is scroll down to the Sitemap section. Make sure you choose the one with a cloud icon; you don't want the one with a spider, which is for the Crawling Module, and we won't need that today. So I'll go in here, name it wikiHow two, take that sitemap URL, put it into the sitemap field, and click next. Now it warns me: by the way, the information you're adding will be viewable by everyone who has access to content in your index; is that okay? And I say yes, that's okay; this is public content, so it doesn't matter to me. If you were indexing private content, you'd want to limit it to only yourself, but in my case I'll make it available to everyone. I click Add Source, and in a second it brings me to a panel. There we go. This is where I can add advanced settings, and you might want to do that. By the way, the source might take about ten to fifteen minutes to build if you're going with that one. The reason I'm saying let's wait a few seconds is that, if I go back here, curating the content is also very important.
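For reference, what the sitemap source actually consumes is a standard sitemap.xml listing page URLs. Here's a small sketch parsing one with the Python standard library; the sample XML and example.com URLs are made up, and in practice you'd fetch the real file from the vercel.app URL above.

```python
import xml.etree.ElementTree as ET

# A tiny inline sitemap in the standard sitemaps.org format.
SAMPLE = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/how-to-propagate-bamboo</loc></url>
  <url><loc>https://example.com/compost-in-an-apartment</loc></url>
</urlset>"""

def sitemap_urls(xml_text):
    """Return every <loc> URL from a sitemap document."""
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.findall(".//sm:loc", ns)]

urls = sitemap_urls(SAMPLE)
print(len(urls), urls[0])
```

The source crawls exactly the pages listed here, which is why it avoids the one-page-per-second discovery throttling of the web crawler.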
When I say curating the content, you might think I'm only talking about getting data that's accurate, and that's true, but it's also about making sure you're only indexing the parts of the site that are useful. That's what I'm doing here, and I'll showcase how to achieve it. There we go. You'll want to use web scraping if you're going to use a sitemap. You can try to figure out how web scraping works by hand, but that's a bit of a painful thing to do. What you can do instead is use a Chrome extension we have to generate a valid web scraping config, or you can search for it on the Coveo docs; the link is here, but I don't expect you to be able to click on it from the slide. So instead, we'll go to the Chrome Web Store, and in the search box I'll type Coveo, and there's the web scraper; that's the one I want to use. I'm logged in as Coveo, so I may have access to more things than you do, but this one is public: it's the web scraper helper for Coveo web sources. I already have it in Chrome, so I won't reinstall it, but installing takes a few seconds, and you might have to restart Chrome to use it. Let's showcase how it works. Let's click on the first link we have in our sitemap, which is how to propagate elephant ear plants; I'm not even sure what those are, but I'm sure it's useful. I'll open my inspector; actually, I can just click Inspect, and it opens this panel. If I scroll, or click on the little icons here, I have a new section called Web Scraping; that's what the Chrome extension adds. The way it works, you create a new file and name it whatever you want; I'll call it wikiHow again, and let's close that.
Actually, I didn't realize I was flipping over. Now it's adding default rules. You might notice, for example, that my header is suddenly grayed out, or rather a little more transparent. That's because there's a CSS rule here saying: if it's the header HTML tag, or it has the class header, or it has role equals header, then exclude it as an element. Same thing with footer, noscript, and nav. Those are out-of-the-box exclusions, because they're useful for most sites. But let's say I also want to ignore "coauthored by Mark Leahy, plant specialist"; that's not very useful to me. In fact, nothing in the sidebar is useful to me, and I notice it has an ID of sidebar. So I'll go to Web Scraping, add a rule, and call it sidebar, and as I'm typing it you can see the sidebar suddenly disappears. One thing I do want to note as well is that this sitemap source, and the website crawler too, will not execute JavaScript by default; it will just load the HTML and index it as is. There are ways to force it to execute JavaScript, and that is in the advanced settings. Is it? No... yes, there we go: Execute JavaScript on pages. If I click on that, I can also say how long to wait for the JavaScript to execute before we actually start indexing; let's say a second, so a thousand milliseconds. But in my case, I don't think I need JavaScript. So, if I go in here, I can add these rules. There are also ways to extract additional metadata, but in our case that's not going to be useful. Same thing with sub-items; again, not useful for us, though it might be for a real project.
If you have a main page and you want to split it into multiple sub-items, that's something you can do with web scraping as well. In our case, we're just going to exclude the sections that are useless. I've gone through that process already, because it's quite a lengthy thing, but just letting you know, that's what you'll want to do after you're done building your full configuration. You go in here, you copy the JSON, and then, a little unconventionally, you go under Web Scraping, Edit with JSON, and paste whatever you had. If I click Done, my configuration is now here. The reason I'm not going deep into that is that if you go to the same URL I gave you earlier but remove the /sitemap.xml part, I've already built a fully working config; well, it's not perfect, but it does remove what needs removing. So I just take this. As I said, let's delete the config one, Edit with JSON, add my JSON, so it's right here, click Done, and now I have my wikiHow web scraping available right there. Again, it's the exact same link I have here, but you remove sitemap.xml, and then you get access to this config. You can just do Ctrl+A, Ctrl+C like I did, or, since it's not that big, select it manually. Then you paste it into Edit with JSON in the web scraping section. Same URL, just remove the sitemap.xml. Once I've done that, I would click Save and Rebuild Source. I don't want to do that right now, but you are more than welcome to, and in fact you will be required to if you want to get RGA working. In my case, the reason I'm not doing it, and I'll quit without saving, is that I already have a wikiHow source. It is exactly that: if I go and edit it, you'll notice it's called wikiHow, with the same URL.
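For reference, the config the extension generates follows roughly this shape (an illustrative sketch, not the exact file from the demo; check the Coveo web scraping documentation for the full schema). The first four exclusions are the out-of-the-box defaults, and the `#sidebar` rule is the kind of addition made during the demo:

```json
[
  {
    "for": { "urls": [".*"] },
    "exclude": [
      { "type": "CSS", "path": "header" },
      { "type": "CSS", "path": "footer" },
      { "type": "CSS", "path": "nav" },
      { "type": "CSS", "path": "noscript" },
      { "type": "CSS", "path": "#sidebar" }
    ]
  }
]
```

The `for.urls` patterns control which pages each rule set applies to, so you can exclude different elements on different sections of a site.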
If I go to web scraping, it's the same... oh no, I'll just log in again, apologies. It's the same web scraping: if I open it and edit with JSON, you'll see it's the exact same thing I have. So nothing crazy going on here, but then you build it, and that's going to take a few minutes to fully build. One thing I will show you: if you want to track how things are going, the log browser is your best friend. Right now I have nothing, because I built this source last week, or yesterday; I forget when. It's showing me the last hour, and I have nothing in the last hour. Let me check the last day... yeah, I know I built it a little before that, so that's why I have nothing in my logs. But if you were actually indexing, you'd be able to see which pages are getting indexed. And if you follow along in the content browser, you can sort by index date, newest first, and it will show you the last pages that were added to your index. As you go through this, you'll start noticing that. So that's it for the building part. You really do need to build those pages. Again, you can reuse the source that already exists, but you are more than welcome to try something else. If anyone in the webinar is interested, you can paste in the Q&A which site you'd be interested in having Coveo attempt to index, and you can see how I would approach that content; we can walk through it together as well. Also, if you do not have access to your org, please let me know. I'll paste the URL again right now; I assumed everyone was familiar with it, but it's platform.cloud.coveo.com, the Coveo platform.
If you reach that URL and log in with the email you used to register for this webinar, because that's the one I used to create the org, you'll either be prompted to accept an invite, or you should be able to see it: if you click here and click GenAI Workshop, you'll see there are a lot of them. I have access to all of them, but you should only see one, the one I invited you to. If that's not the case, please let me know so I can make sure you have access to one. Once the source is built, and in my case it's already built, there are only two steps we need to do today before we can wrap this up. As I said, we don't need two full hours to do the RGA demo, but I did want to make sure we had time, because understanding what to index is a big part of it. The next big part is the model section. In my case, I already have the models working, but essentially you'll click Add Model, and you'll need to create both Relevance Generative Answering and Semantic Encoder. As you'll notice, the organizations I created for you are only allowed one semantic encoder, so you will not be able to create more than one, and you'll receive an email letting you know: hey, by the way, you've reached the maximum number of semantic encoders you can create. That's okay: a maximum of one is fine. And it's even more okay because, in another one of my demo organizations, I have two Gen AI models learning from different types of content, but only one semantic encoder.
That's because I told my semantic encoder to learn from more than just one of my sources, so it can figure things out across them. So let's fake-create a model. I'm not going to fully do it, because I don't want to build a Relevance Generative Answering model for no reason, but I'll showcase what the flow looks like; it's the exact same flow for the semantic encoder, with the same UI and same principle. So I click on it. Right now it's telling me what it does; we already know, so let's hit next. What it's asking now is to select which sources I want it to learn from. I have two sources; this one is currently empty. I could just select a source, or, say you were working in an environment where you have a lot of different sources that are all useful, and you've manually curated them and told Coveo through metadata, say a field called isaccurate: whenever the value of isaccurate is true, no matter which source, use it for Gen AI. I could go into Add a Filter here and say isaccurate... well, in my case that field doesn't exist. Let's instead say we know that the author Alex is always relevant: I could say author is equal to Alex, and that tells Gen AI that no matter which source the content comes from, if the author is Alex, use it and learn from it. You can always leverage that, and you can do a mix of both. And we can go deeper; there's a limit to what the UI allows, but we can add more filters: the author is Alex, and also the company is equal to Coveo, for example, so that no other Alexes start being used.
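Those filters are expressed in Coveo's query syntax over fields. As a sketch (the `@isaccurate`, `@author`, and `@company` field names here are the hypothetical examples from the demo, not fields that exist out of the box):

```
@isaccurate==true
@author=="Alex" AND @company=="Coveo"
```

The model then learns only from items matching the expression, regardless of which source they came from.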
And if you need a more complex expression than the UI allows, you can go into the back end of the model and edit the filter directly in the JSON. So you create the model. Let's say I select WikiHow again: it tells me WikiHow has 706 items in the source that the model is going to learn from. Perfect, that's what I want. There are a few things you'll notice. In order to be learned from, an item needs to be in English. French support is coming very soon, but right now it's English only, so that's where the 706 comes from. It also needs a permanent ID, which all of your sources should already populate. A permanent ID is, by the way, a requirement for all of our machine learning models: if your pages don't have one, Coveo is going to really struggle to boost them or generate content from them. But if you use any of our out-of-the-box connectors, the permanent ID gets generated automatically, so you don't have to worry about it. "Matching the filter" shows how many items match the combination of the source and the filter I configured; here it will take everything in the 706. I click Next and name it; I'll call it WikiHow 2, but I won't actually build it. If I clicked Start building, it would start creating the model, and then you just wait maybe an hour to an hour and a half before it finishes learning. One more important reminder: remember the Semantic Encoder, because without it, RGA is going to struggle to give you what you need. And it's also telling me that creating the model is not enough.
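Since the permanent ID requirement trips people up, here is a quick illustrative check. This is not a Coveo API call, just a sketch over a hypothetical list of indexed items, assuming each item record exposes a `permanentid` field the way Coveo items do:

```python
# Hypothetical item records, shaped like a minimal slice of an index
# inspection; the "permanentid" field name mirrors Coveo's, but the
# data here is made up for illustration.
items = [
    {"title": "How to tie a tie", "permanentid": "a1b2c3"},
    {"title": "How to fold a shirt", "permanentid": "d4e5f6"},
    {"title": "Orphan page", "permanentid": None},
]

def missing_permanent_id(items):
    """Return the titles of items that lack a usable permanent ID."""
    return [it["title"] for it in items if not it.get("permanentid")]

print(missing_permanent_id(items))  # → ['Orphan page']
```

If this kind of check turns up items without an ID, that's the content the machine learning models will struggle with, and the fix is usually on the connector or mapping side.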
You also need to add the model to your query pipeline. I won't build it right now, but that's the flow: I click Start building, and then I do the same thing for the Semantic Encoder, which is the exact same flow and UI. Oh, and the Advanced tab, which I didn't point out earlier, is where you would enter a very complicated filter if you wanted to. The Semantic Encoder works the exact same way; the important thing it's telling us is that it needs to learn and build. If I open a built encoder, you'll see, for example, that 100% of my items have chunks, with a total of about 8,000 chunks for the 706 pages. That's the size we're talking about when we talk about numbers of chunks, and it always depends on the size of the item: the bigger an item is, the more chunks there are going to be. I can see there's an average of 12 chunks per item; the page with the fewest chunks has only 2, the page with the most has 32, and there's an average of 245 words per chunk. I also have no duplicates, no missing IDs, and the proportion of items without chunks is zero. So this is all very good, and it's exactly what I want. If I wanted to change the configuration, I could go in here, edit it, and relearn or rebuild the model, but I don't think I need to do that today. So the model is built, but in order for it to have an effect on the front end, you always have to add it to the query pipeline.
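Those chunk statistics are simple aggregates over the chunking output. Here is a small sketch of how numbers like total chunks, average chunks per item, and average words per chunk come together; the chunk and word counts below are made up for illustration, not real encoder output:

```python
# chunks_per_item: hypothetical number of chunks the encoder produced
# for each page in a (tiny) source.
chunks_per_item = [2, 8, 12, 15, 32]          # smallest page: 2, largest: 32
# words_per_chunk: hypothetical word counts for a sample of chunks.
words_per_chunk = [240, 250, 245, 260, 230]

total_chunks = sum(chunks_per_item)
avg_chunks_per_item = total_chunks / len(chunks_per_item)
avg_words_per_chunk = sum(words_per_chunk) / len(words_per_chunk)

print(total_chunks)                                 # → 69
print(avg_chunks_per_item)                          # → 13.8
print(min(chunks_per_item), max(chunks_per_item))   # → 2 32
print(round(avg_words_per_chunk))                   # → 245
```

Scaled up to the real source, the same arithmetic gives the figures on the model page: roughly 8,000 chunks over 706 pages works out to about 11–12 chunks per item.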
Adding the model to the query pipeline is definitely something we will tackle later this week, on Thursday. What I really want is for you, by the end of today's workshop, to have these two models built. Just make sure you only create the models after you've finished building your sources, because the source really has to exist before anything else. So for now, that's what creating the models looks like. Before I jump into Q&A, or into open time so you can work on getting that ready, I do want to talk about what's happening in September: we have a partner-exclusive session with Anne-Marie. Let me put it full screen. Anne-Marie is our VP of Partners and Alliances, and she'll be available on September 5th at 11 AM Eastern, 8 AM Pacific. If you want to build a practice with Coveo, how do you achieve that? It's not just about GenAI, it's everything Coveo-related, including GenAI, and you're more than welcome to attend that session. Let me just check the Q&A. Yes, we'll forward you the invite so you can attend. There's also a QR code you can scan here if that's something you're interested in. I believe this will be available in the recording as well, so you can take it from there; we'll make sure to share the recording at the end of the session. That's it for the formal part of the presentation. What comes next is Q&A, which is actually rather open, and also some office-hours time, so please take the time to build those sources and build those models.
If you do struggle with those, please let me know and we can get started and make sure you have it working. I highly recommend having it working before the session on Thursday, where I'll be showing you how to create a UI based on it; it's a rather simple thing to do. We'll also talk about QA, quality assurance: how to make sure the model has learned the most accurate things, and how to combat it if that's not the case. But that's it for now, so I'm opening the floor to Q&A. I'm going to stay a little longer; if you need to work on this by yourself, I understand, and I'll let you. Otherwise, if you have any questions, please ask them in the chat and I'll make sure to answer them. Please remember to come back on Thursday with your source and your models already built. On that note, thank you very much, everyone. I'll stay on the line in case anyone has more questions as you go through this.
Partner Power Hours: GenAI | Builder Workshop - Part 1

This is the second session of our Partner Power Hours, where we kick off the 2-Day Builder Workshop! It's designed to get you hands-on with our product, enabling you to connect demand to technology seamlessly.
- Day 1: Demo of pre-built solutions and a step-by-step guide for participants to create their own solutions.





