Hello, and thank you for joining us. My name is Juanita Oguin. I lead Product Marketing here at Coveo. And today, I'm really excited to be joined by Rowan Curran, Senior Analyst at Forrester. Rowan's research focuses on AI, data science, and generative experiences, and he's also the author of the Forrester Wave for Cognitive Search. Rowan, I'm so excited to be talking about very popular and maybe somewhat controversial topics today. We have one hour to explore these topics, so why don't we just jump right in? Sounds great. Thank you so much for that wonderful intro, Juanita. I'm super excited to be here with you and with everybody on the line today. It's a very exciting topic, and I think it's gonna be a lot of fun as we go through, and we'll also answer some questions at the end. So what we're gonna go through today is basically taking a look at how generative AI is impacting search and how search is impacting generative AI. And really, at the core of this, as we've all seen over the past, I'd say, fourteen to sixteen months or so, we've seen generative AI come almost out of nowhere for a lot of us, and a bit less out of nowhere for others of us, and really just start to take over the cultural zeitgeist. And as we're seeing it today, generative AI has effects that go way, way beyond business. They extend into our lives as individuals, into our lives as customers, and also into the way that we interact with businesses as employees, as well as constituents of our governments. And so there are tons of different use cases spread across all of these different areas. But how we actually got to this moment wasn't just through the introduction of a new model into the world. It wasn't the release of the GPT-3.5 series of models by OpenAI in the fall of twenty twenty two.
You know, those models were up on their website, available for nerds like us to play with, but it wasn't until this really great application experience was released in late twenty twenty two that everybody suddenly realized how powerful this stuff could be and how useful it could be in business and in our daily lives. And what this led to, once we got this great application that everybody very quickly latched onto, was this really wonderful complementary pressure from the top down and the bottom up in our businesses and in our enterprises, both from the executive, shareholder, and board level, as well as from the individual contributor and consumer level, where everybody was excited about the potential of generative AI to transform our organizations and to really adjust the way that we seek, find, and retrieve information, and also act upon it within all of our workflows. Now what this did overall, aside from having this broader appeal across the enterprise, was really change the conversation around AI from being a very high-level boardroom conversation, where it's very much about data science and building these very complicated models and perhaps implementing them for end users, to a much more kitchen-table conversation, where the average person has some idea of how AI can actually affect them in their lives, in both positive ways and ways that may be a little bit more nerve-wracking. But ultimately, what this means is that we have a huge amount of excitement and a huge amount of pressure to start building applications around this stuff. And the way that this has manifested over the past year, as a lot of folks on this call may be aware, is that we had tons of people just yelling out, "I need ChatGPT for my enterprise," in the same way that we had many of our customers yelling, "I need Google search for my enterprise," for the past eighteen years or so.
And in the same way that it's not easy to build a public-facing, high-quality search engine, it's not as easy to build a high-quality enterprise, chat-based knowledge retrieval system as it is to build a very compelling consumer experience like ChatGPT. And one thing that this really drove was a huge explosion in the interest and attention around search in twenty twenty two. It actually led to search being the top use case that we saw within generative AI over the past year. And when I say search, I mean, broadly, any type of knowledge retrieval and/or search type of business process that you are trying to go after and attack. Now, this is being used across organizations for use cases like research and development. So one of the earliest use cases for large language models and search and knowledge retrieval that I saw was with a primary manufacturing company that was doing some materials design, and they wanted to use large language models to better understand the context of their previous research documents and the research papers that they had published before. And then we also have lots of folks using search and knowledge retrieval applications, supported by generative AI, in all kinds of employee-facing and internal customer support types of use cases.
So these are things like having a more intelligent help desk support tool, like we've seen in a couple of different telecom companies, both to help internal enterprise end users get basic help desk support, but also to do more complicated things like helping folks request virtualized compute environments for testing applications, where those folks don't necessarily have to know all of the details about the technical environment they're requesting, because the language model is able to reference the necessary information and then generate the answer for the technical folks who are actually implementing it. But overall, we're seeing these knowledge retrieval use cases be a key approach here, and we'll look a bit more at the architectures later on in the presentation. But the most common approach that we're seeing to these knowledge retrieval use cases right now is some flavor of retrieval augmented generation, which you may have heard of and which, as I said, we'll talk a bit more about later on. Now I'm also gonna talk about a few of the other very common use cases that we're seeing in the generative AI space before we move on. And I think it's really important to note these because, particularly with the first two, they build off of and also complement the knowledge retrieval and search use cases. And the latter can build off them as well, though in a little bit of a different direction. So the second very common use case that we're seeing for generative AI, and this is probably the most common use case when we exclude grounding in actual company data, is using large language models for writing and knowledge support.
So this is everything from marketers and salespeople using large language models to help them write emails or generate new copy, to grant writers who are using large language models to help them write the new request for whatever type of research grant they need to get. We have a lot of government folks who are looking at these types of use cases. But then we can also use this in support of our knowledge retrieval and search use cases as well. We've started to see folks actually integrate generative AI into their knowledge management and knowledge creation process, because these language models basically allow you to reformat, readjust, and change content to fit whatever need you have. So, for example, a lot of folks, especially large manufacturing companies, are experiencing high turnover or very high employee aging or retirement rates, and they're trying to capture all of that knowledge from all of those folks. And a lot of those folks aren't necessarily knowledge creators. So they're looking at using generative AI as a way of having those folks write down a simple procedure for how to solve a particular maintenance issue, and then using a language model to reformat it in a way that everybody else can consume. And building off of that, we're also seeing tons of use cases for generative AI in content summarization and transformation. This one almost even more obviously fits directly into the knowledge retrieval and search use cases, as well as being its own standalone use case. Anybody who has been in the search space for the past couple of years has seen large language models used for chunking documents, summarizing documents, things of that nature. But this has really started to hit a new fever pitch of adoption, as well as a broader set of applications in terms of what types of content you are summarizing and why you are summarizing it.
So probably the quintessential example this year is using large language models for summarizing call center transcripts, which a whole host of different folks are doing at this point in time. But it's not just the value of summarizing those transcripts so you can have a good idea of what happened on the call without having to read it line by line. It's the additional step, where we're seeing folks actually use generative AI large language models in their data pipelines to extract additional metadata, so things like topics, entities, sentiment, etcetera, from that summary, so that you can then drive a better search or a better text analytics experience down the road. So all three of these categories of use cases kinda bleed into and support each other. But ultimately, these are where the locus of generative AI use cases is today, with the very large exception of TuringBots for coding and testing. So I'm not gonna talk too much about this here. I just wanted to make sure we are all aware of this significant space. This is basically using large language models in support of software development, for the generation of code and prototypes and things like that, and also, to a lesser degree, in support of analytics and data science. But this is another very important area of generative AI where we're seeing a significant amount of adoption and deployment. Ultimately, though, the focus of these asks is really around the search and knowledge retrieval types of use cases.
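To make the pipeline step described above concrete, here is a minimal sketch of enriching a call summary with metadata for downstream search. The keyword rules below are toy stand-ins for the LLM extraction step being described; the topic list and field names are illustrative assumptions, not any vendor's schema.

```python
# Enrich a call-center summary with metadata (topics, sentiment) so that
# downstream search and text analytics can filter and facet on it.
# The rules here are toy stand-ins for an LLM-based extraction call.
def extract_metadata(summary: str) -> dict:
    text = summary.lower()
    topics = [t for t in ("billing", "outage", "upgrade") if t in text]
    sentiment = "negative" if any(w in text for w in ("angry", "frustrated")) else "neutral"
    return {"topics": topics, "sentiment": sentiment}

doc = {"summary": "Customer was frustrated about a billing error after an upgrade."}
doc.update(extract_metadata(doc["summary"]))
print(doc["topics"], doc["sentiment"])  # → ['billing', 'upgrade'] negative
```

In a real pipeline the extraction step would be a model call, but the shape is the same: summary in, structured metadata out, attached to the indexed document.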
And given that so many folks are really interested in search once again, and they're asking for it to be embedded in all of their applications, we've seen a plethora of companies just integrate some kind of adjunct search capability into their platform or their application. Or we've seen a lot of these, what I will unkindly call "ankle biter" companies, who have come out and said, okay, well, we can do question and answer with your PDF, so therefore we have a search experience for you. But when it comes down to it, for a true enterprise search experience, where you're actually going to be searching across multiple different data sources, pulling back different types of information, and dealing with all kinds of things around security and access controls, you're going to need a platform that can support all of those things. And a lot of the newer entrants into the search market can't necessarily do that, because while you can skip a lot of the steps to get to a good search experience by using large language models and a simple kind of vector retrieval today, there are lots of things that you can't just build with a snap of a finger: security and access control to various data sources, having the proper methods of filtering and pulling from various application and data source connectors, being able to rerank the results in a way that makes sense, or even doing chunking of the documents. This is an area that a lot of folks don't recognize as an important part of actually building generative AI applications: taking a five hundred page document and knowing what pieces to break it up into in order to be able to reference it in a vector database.
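The chunking step mentioned above can be sketched very simply. This is a naive fixed-size splitter with overlap, offered as an assumed baseline; production systems typically chunk on semantic boundaries like sections or paragraphs, and the sizes here are arbitrary.

```python
# Naive fixed-size chunking with overlap: the minimal version of the step
# that breaks a long document into pieces small enough to embed and index
# in a vector database. Sizes are illustrative, not recommendations.
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    step = size - overlap  # each chunk repeats `overlap` chars of the last one
    return [text[i:i + size] for i in range(0, len(text), step)]

document = "x" * 1000  # stand-in for a long document
pieces = chunk(document)
print(len(pieces), len(pieces[0]))  # → 7 200
```

The overlap exists so that a sentence falling on a chunk boundary still appears whole in at least one chunk; that detail is easy to get wrong and is one reason chunking is harder than it looks.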
But ultimately, no matter what kind of search application or use case you are building, it's not just about the technology, and it's not just about the data that underlies it. It really is about how you are using it. So before I get into a bit of a discussion around the architectures here, I just wanted to really emphasize that where folks are falling off in their successful implementation of generative AI use cases overall, and then more specifically some of these knowledge retrieval use cases, particularly ones that are chat based, is in that change management step, in that implementation and fuller rollout step. So there's, I think, been a lot of excitement, and maybe we had a little bit of naivete around how sophisticated users would be when they started to interact with large language models, because it seems so easy for all of us to just go online and use ChatGPT. But when it comes to using it in an employment context, in our work context, there's more training and shaping and nuancing of how you actually help people understand how to best use these tools, and of how you make sure they are sticky in terms of how people are actually accessing them. Now, in terms of what tools are available, there's a whole range of them out there, and they really span the entire stack of hardware and software environments that we all deal with, everything from embedded capabilities all the way down to model training and the various hardware capabilities associated with that. Again, I won't spend very much time here. I just wanted to call out that there's a very broad set of capabilities being offered by different vendors in the generative AI space. But to help make sense of all of this, we've seen four main routes by which folks are adopting generative AI into their organizations.
Now, they're not necessarily doing just one or another of these approaches; in many cases, they're actually doing a combination of several of them. So at the very first step, when folks get excited about generative AI, oftentimes they're just saying, "okay, well, I'm gonna go online and use whatever tool is available, not grounded in my enterprise data." And that's what we call bring your own AI, which is a very challenging issue for a lot of enterprises, and certainly something that, if you are not thinking about it in your own enterprise, I would really start to consider: what types of policies and approaches do you have for folks that may be trying to get access to these tools if you're not providing them on your own? But then we're seeing many more folks actually start to gain access to generative AI capabilities through embedded software and embedded capabilities within other applications. Now, this means both capabilities embedded within endpoint, single-use applications, like text generators such as Jasper or Writer, and also folks getting their generative AI capabilities embedded within their other software platforms: embedded within their cognitive search platforms or within their broader AI and machine learning platforms. Folks really are trying to access this in a way that is within a familiar environment and that gives them additional tooling and support. Because if you go with option two or option three, as you see on here, you're going to have a significantly increased workload in terms of how you are shaping the inputs and the outputs of these models. And we'll take a look in the upcoming slides at what that actually means.
Now, as I'm going through the next steps, I wanna emphasize that while I've been talking about generative AI throughout this entire presentation, and also search, but mostly generative AI, it's really important to keep in mind what you can do with predictive AI, particularly when we think about how we are combining it with generative AI. And what I mean by that is that we need to keep in mind all of the historical propensity modeling, segmentation, standard analytics, all of these things that have given us very good insights about our customers and about our companies, and remember to leverage them in our generative AI applications as well. Because one of the big aspects of generative AI applications is that while they are incredibly good at helping us derive deeper and more contextual information from unstructured data at this point in time, they are not necessarily good at doing analysis or extracting information from large-scale structured data. So we still have a great need for standard machine learning models, and even statistical models, to derive those insights that we then maybe wrap up inside of a nice natural language presentation with a large language model. So as I go through some of the architectures subsequent to this, keep in mind how you might be folding predictive AI into the generative AI applications. Now, before we get into the architectures, I just want to point out again that the platform approach to this stuff, I think, is a very important thing to keep in mind for everybody. Because at this point in time, the amount of AI capabilities is really quite overwhelming.
Even if we go beyond the applications and start getting into what technologies are available, beyond the core AI platforms into these extended AI platforms that are emerging, like the low- and no-code platforms and the AIOps tools, it can be very challenging for individual companies and individual enterprises to keep all of this stuff stitched together and keep a good understanding of what is happening in the market. And that's why, in part, we're seeing a very significant movement to platforms, both for AI applications more broadly and for cognitive search solutions more specifically. So now I'm gonna quickly go through the progression of what we're seeing in the knowledge retrieval space and how you can start thinking about the more advanced versions of knowledge retrieval applications today and tomorrow, when we're thinking about incorporating large language models and generative AI into that. So at the base level, we have a very simple interaction. We have a user prompt that hits a model and generates a response. This is probably the generative AI experience that you are familiar with through something like ChatGPT, where you enter a question and you get a response back, and it's not grounded in any external data. So here, I'm asking a kind of silly question about whether the mother in Jaws 4 is psychically controlling the shark, and it's just generating information from the trained knowledge that it has within the model itself. It's not actually grounding itself in any external information, and I, as a user, have no way to understand whether any of that information is based in real fact or whether it's just a piece of coherent nonsense that's been kicked out by the model. Now, to start honing the behavior of the model, we're gonna start to add more components. So the first thing to add is a system prompt.
And this is something that helps us ground the behavior of the model in the type of voice and the type of direction that we want it to go. So in this example, we're using input shaping and a system prompt to basically ground this initial query, "when do I need to file my taxes by," and to shape it around some aspects that are important to this type of question, like not having it answer if it doesn't know, and only answering questions related to taxes. And so in this example, you would send that query to the model, and the model would generate an answer. And here we're starting to take the first step of actually understanding the output and then making sure the output is high quality. So in this case, we would have the answer sent through another language model to review it, and then maybe have a human in the loop, if this was a more batch-oriented application, to check whether it was high quality or not, and then pass it along. So that's the first step of how we actually get these models to do what we need them to do. But that's not nearly enough for an enterprise user who really needs to be making the right decision with the right content. And so to do that, we need to move into the realm of retrieval augmented generation. And this is what really starts to bring enterprise authenticity to generative AI. So in its most straightforward form, retrieval augmented generation is a way of using a language model not on its own, but in combination with your data: the data is stored in a vector database, retrieved, given to the language model, and then summed up into an answer for you. So the way that it typically works is you have a question that is sent to a language model, which then generates what's called an embedding, essentially a vector of numbers that represents that information. That is then compared against the data source, and a similarity measure is run.
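The system-prompt shaping described here can be sketched as a small payload builder. The message format below mirrors common chat-completion APIs; the prompt wording and the `build_messages` helper are illustrative assumptions, and the actual model call is omitted.

```python
# Shaping model behavior with a system prompt: the system message constrains
# the assistant to the tax domain and tells it to refuse rather than guess.
SYSTEM_PROMPT = (
    "You are a tax-filing assistant. Only answer questions about taxes. "
    "If you do not know the answer, say so instead of guessing."
)

def build_messages(user_query: str) -> list[dict]:
    # The system message is sent alongside every user query, so the model
    # receives the behavioral grounding before it sees the question.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

msgs = build_messages("When do I need to file my taxes by?")
print(msgs[0]["role"], len(msgs))  # → system 2
```

The point is that the end user only ever types the second message; the first is invisible shaping applied by the application.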
In this case, cosine similarity. And then, once the similar pieces of content have been pulled back from that dataset, they are sent to another language model, which ingests that information, generates a new answer, and is then able to provide that answer to the end user with citations from the information it acquired. And so this is the type of application where you can finally start to ground it in real enterprise information, real enterprise knowledge, so you can start to do things with assurance and with great alacrity. And so one of the early examples that we saw of this was with the Mayo Clinic, who implemented a generative AI application where essentially they were using a set of vectorized disease information and some patient information with an application that was HIPAA compliant, which, again, you can't just have any software setup for. They needed a HIPAA-compliant setup in order to build their application, which is a pretty high bar for a lot of companies. And so they were able to set this up and then retrieve information about the diagnoses and the patient conditions in order to more quickly get to a proper plan of care for a couple of different diseases, because they tightly constrained the domains under which this retrieval was operating. But it essentially allowed them to get to information that they already had, in a more contextual way, presented to them in a more consumable way, because it was wrapped up in this natural language presentation from the large language model. Now, where it starts to get very interesting is when we start to go beyond this simple retrieval augmented generation architecture and start to integrate things like predictive AI into this combination of search and generative.
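The retrieval step just described, comparing a query embedding against document embeddings with cosine similarity, can be sketched in a few lines. The 3-dimensional vectors and document names below are toy assumptions; real embeddings have hundreds or thousands of dimensions and come from a model.

```python
import math

# Cosine similarity between two embedding vectors: the dot product divided
# by the product of the vector lengths, giving a score in [-1, 1].
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "vector database": document name -> pre-computed embedding.
docs = {
    "filing_deadlines.md": [0.9, 0.1, 0.0],
    "office_lunch_menu.md": [0.0, 0.2, 0.9],
}
query = [1.0, 0.0, 0.0]  # pretend embedding of "when do I file my taxes?"
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # → filing_deadlines.md
```

In the full RAG loop, `best` (or the top-k chunks) would be handed to a second language model to generate the cited answer.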
So an example architecture of how these things are starting to be built out today is by taking the user prompt with the system prompt and then injecting into that system prompt additional data about the user: information from your CRM system about their recent purchase history, information from your forecasting tool about whether they're likely to churn. And then you insert all of that information as part of the system query, basically like a kind of Mad Libs setup, so that whatever goes to the language model has that additional grounding information even before we get to whatever retrieval we're going to do. And so then the large language model is able to create a much more sophisticated and contextual query, which can then go to the search database. And in this case, we're not just talking about vector search. We're talking about a hybridized approach where you're combining lexical and keyword search, which has very good precision, with vector search, which oftentimes has very good recall and maybe slightly less good precision. And using the two of these, you can oftentimes get to a more nuanced answer than you might with either one separately. But once we have the set of results, we then need to do all of that work to rerank the results, filter them, and apply enterprise access control so we don't send the wrong information to an end user. Once we do that, we have the user prompt and the grounded, contextualized enterprise data, and we can start to generate the final response to the end user. And so as you can see, once we start to really get deep into how we are going to make applications that can be robust and effective in referencing our enterprise data, the architectures start to get a little bit more complex.
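One common way to merge the keyword ranking and the vector ranking mentioned above is reciprocal rank fusion (RRF). This is a sketch under the assumption that each backend returns an ordered list of document IDs; the document names and the choice of RRF over other fusion schemes are illustrative.

```python
# Reciprocal rank fusion: score each document by the sum of 1 / (k + rank)
# across the rankings it appears in, so documents ranked well by both the
# lexical and the vector backend rise to the top. k=60 is a common default.
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["kb/reset-password", "kb/vpn-setup", "kb/billing-faq"]
vector_hits = ["kb/reset-password", "kb/travel-policy", "kb/vpn-setup"]
print(rrf([keyword_hits, vector_hits])[0])  # → kb/reset-password
```

After fusion, the merged list would still pass through the reranking and access-control filtering steps described above before anything reaches the generation model.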
And this is still just generation one of enterprise generative AI applications, especially the knowledge retrieval ones. When we start to look at some of the folks that are really starting to move into gen two of these applications, they start to get a little bit more intense. So I won't spend a ton of time going through this slide and the next slide, but this is all just to point out that, essentially, we're still starting with that same retrieval augmented generation core, that same core of: I have a language model, I have some stuff I'm retrieving from, I need to pull back an answer. But we're adding in all the stuff on the left that we already talked about, adding the predictive insights and the machine learning insights to the actual query. And then we're also adding in things like a secondary model to build an execution plan. So if it's a more complex query that involves pulling information from multiple data sources, or maybe doing some type of comparison, then you're gonna need some type of model to build a plan for how to answer that question. And as you get deeper into this setup, you also have to review that plan. You have to reference external agents if there's external information that you want to bring into your response. So it may not be enough for you to be searching your keyword and your vector indices. You may also need to call out to some external system, like some type of manufacturing quality control system that is using, again, machine learning models or statistical analyses to derive insights that are then pulled back into your main system and inserted into the query for the final model to produce the final response for you.
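The execution-plan idea can be sketched as a tiny router. In practice the planner would itself be a model; the keyword rules, step names, and the quality-control hook below are all hypothetical stand-ins to show the shape of the output, an ordered list of steps, not a real planner.

```python
# Toy execution planner: decide which retrieval sources and external systems
# a query needs before any generation happens. A gen-two system would use a
# secondary model here; simple keyword rules stand in for it.
def build_plan(query: str) -> list[str]:
    steps = ["vector_search"]                # always retrieve semantic context
    if "compare" in query.lower():
        steps.append("keyword_search")       # exact figures for a comparison
    if "quality" in query.lower():
        steps.append("call_qc_system")       # hypothetical external QC system
    steps.append("generate_answer")          # final model composes the response
    return steps

print(build_plan("Compare quality metrics for plant A and plant B"))
```

The plan itself can then be reviewed, by a human or by another model, before any of the steps are executed, which is the review stage mentioned above.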
And then, when this gets really intense, we also have folks implementing what is called "LLM as a judge," where essentially you are using large language models to judge multiple responses coming out of the application, so that you are further refining the quality of the response. Because large language models are somewhat nondeterministic, having more than one option and choosing the better one can sometimes produce a better end user experience. And we've already gone over the previous slides, so I won't overwhelm everyone even more by going over this approach. But the one thing that I do want to point out on this slide is that we're not only talking about how we can retrieve information and bring it back to the end user; we're also talking about how we can go out into external systems and take actions there. So one of the big steps in the gen two of enterprise generative AI applications is that a lot of folks are now focused on using these tools to also generate code from the language model that will then run in some external system, execute some action or retrieve some information, and bring it back to the main application. That's really where this is starting to go in the future. But before I say thank you and goodbye to everyone, I do wanna point out two quick things here at the end. The first is the rise of multimodal large language models. This has been an area that's been very exciting to research for the past number of years. But last year, we saw the first research models released around this, with Kosmos-1 and a couple of others from Microsoft. And then we saw GPT-4, and now Gemini from Google; these models are able to interact with both images and written content.
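The "LLM as a judge" pattern reduces to generating several candidate answers and letting a second model pick the best. In this sketch, `judge_score` is a toy stand-in for that second model call, rewarding answers that carry citations; the scoring heuristic and function names are assumptions for illustration only.

```python
# "LLM as a judge," minimally: score each candidate response with a second
# evaluator and return the winner. Here the "judge" simply counts citation
# markers, standing in for a model call that would rate answer quality.
def judge_score(answer: str) -> int:
    return answer.count("[source:")

def pick_best(candidates: list[str]) -> str:
    return max(candidates, key=judge_score)

candidates = [
    "File by April 15.",
    "File by April 15 [source: irs-deadlines.md].",
]
print(pick_best(candidates))  # → File by April 15 [source: irs-deadlines.md].
```

Because generation is nondeterministic, sampling two or three candidates and judging them trades extra inference cost for a more consistent end user experience.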
And that opens up a whole new world of use cases for knowledge retrieval and search, and for all types of enterprise knowledge management. So in this example here, I just grabbed a random process flow diagram off of Google image search, looking at an Amazon Web Services flow, and I asked Gemini to describe what was going on and how I could implement it. And it did a pretty good job of that. And so you can see how this could really start to bridge the gap between teams and between parts of the organization that may not have a deep institutional or historical link between them, but that need to understand how their processes work, how they might interact, how their data systems might interact. This can really help start to bridge that gap. And in the last few seconds before I go, I also wanna point out that there's been a lot of talk about image generation and a lot of talk about deepfakes and things like that. But when we're talking about an enterprise context, I think there are some interesting use cases out there that a lot of us haven't considered, and this is just one quick example of them. So this is using a QR code ControlNet model on top of Stable Diffusion, which essentially allows you to control the generation of an image and form it into a workable QR code. Sorry, I can't recall the prompt off the top of my head that we used to generate this. But if you take a photo of this QR code, it's a workable QR code in addition to being a photo of a library, and it will take you to Forrester.com.
And so this type of image generation use case, again, starts to really change how you are thinking about knowledge creation and knowledge management, because if you can integrate this type of knowledge interaction with your environment, that's a much more ambient communication tool than having a big black and white QR code on an otherwise nice poster. But with that, I will say thank you to everybody for your time today. It's been really great chatting with you. And I'll pass it back over to Juanita. Thank you, Rowan. Wow. So much great information. I have a ton of questions for you, and so does our audience and our guests here. I will ask these one by one if that's okay, and I'll try to add a bit more context where I see some of these are going. Great. The first question is around how complex building a GenAI system seems. So the question is: building a solution seems complex with so many parts and components. How can companies ensure they're building GenAI properly, and what are they missing to have an effective system? Yeah. Well, so the latter part is very hard to answer, because there are a lot of components, and there's an ever-increasing number of them and an ever-increasing number of ways of combining those different components. So that's the first thing that I will say on this. The second thing that I will say is that for most companies, unless you are a high-tech company or you are very, very focused on innovating around these tools, you definitely need a partner to work with you, in terms of understanding what the best practices are for building whatever flavor of retrieval augmented generation you're gonna do, or thinking about how you're going to source generative AI capabilities if you're not gonna build them yourself. Are you gonna get them as part of a prepackaged solution? Are you gonna tweak some kind of Copilot or something like that?
So this partner aspect, I think, is really crucial for everybody who's adopting generative AI overall. And that could be, you know, a direct technology partner; it could be an implementation partner. Just somebody who is able to spend more direct time understanding how the space is moving and how it's evolving, because it is evolving very fast. If we had been having this conversation, you know, six or seven months ago, the amount of nuance and complexity in terms of how much we knew about the extended retrieval augmented generation architectures, what was effective about them, and what the best practices were, that would have been a much shorter section than we just had. So the space is moving and evolving very quickly. So have somebody who is able to track those evolutions and changes. And, you know, honestly, for a lot of us, it's not worth paying attention to, you know, Anthropic or whoever else coming out with their newest model every other week. It's very cool, and a lot of them advance the space pretty significantly, but when we're building enterprise applications, when we're rolling them out to our end users, we can't be thinking about changing the underlying model every two or three months. Like, that's just not feasible, not sustainable. So that would be kind of the second part of this: work with partners, but also don't get caught up in the new shiny around this. And the third piece that I will say is that having your data in order is almost banal to say at this point in time. We say this every time a new technology comes along: you have to have your data in order to be successful. But it's so critical here. And it can really determine whether you're able to accelerate very quickly in the space and start to build something very useful.
Or, if you don't have your data, you know, accessible, well structured, all that type of stuff, it can take a lot longer to build a solution that is extremely powerful and useful and accurate for you. And that doubly goes when the content you're thinking about making part of your solution is very long. You know, we have much longer context windows than we did even a few months ago, but still, this aspect of how you chunk things down is very important. Awesome. Thank you for that. Maybe along the same lines, the question here is around advancements in the Gen AI space, which seem to be accelerating. So how can companies catch up? Can they catch up? And could Gen AI be the tech that helps leapfrog the status quo, maybe leapfrog the competition? What are your thoughts there? Well, if you can implement AI overall well, not just generative AI, I definitely think that can be a huge differentiator for your company. But I will say that just bringing generative AI into your company, that is table stakes at this point in time. You know? Really, at this point, it's all about, like I was talking about before, how you can differentiate on your data and make this the most applicable to your organization, so that you can find what is special and unique about your information and then act upon it. And in terms of keeping up with everything, again, I think it's important not to get super hyper-focused on what the newest and coolest model is. It's important to really look pragmatically at what types of use cases we wanna build, what models are available today, and how we start going out there and building whatever architectures around them we need to. Or in some cases, you know, you don't need to even focus on the model itself, because, you know, there is plenty of variability in the quality of the models.
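The chunking point Rowan makes above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any particular vendor's implementation; `max_chars` and `overlap` are assumed parameters you would tune to your model's context window and your content.

```python
def chunk_text(text: str, max_chars: int = 1000, overlap: int = 200) -> list[str]:
    """Split long content into overlapping chunks so each piece fits a
    model's context window; the overlap preserves continuity across cuts."""
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Step back by `overlap` so adjacent chunks share context.
        start = end - overlap
    return chunks
```

Real pipelines usually split on sentence or section boundaries rather than raw character counts, but the trade-off is the same: chunk size versus how much surrounding context each retrieved piece carries.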
But there's also a ton of variability in the quality of prompt engineering that folks are doing, and in the quality of input and output controls that people are working on. And all of that stuff really, really matters in terms of what kind of capability you're going to get out of the other side of the model. So, for example, I have seen plenty of fantastic generative AI applications that are using, you know, BERT and StarCoder, or GPT-J or something like that. Some of these are, quote unquote, lower-end models at this point in time. But, you know, they were performing admirably well in the circumstances they were deployed in. And could it have been done better using GPT-4 or Claude 3? Possibly, maybe. But would that differential actually be worth the additional cost and the additional switching cost? I think that is much more questionable. And that aspect of dialing in what model we're using and how we're using it, that is gonna be a big theme throughout the course of this year. Twenty twenty-three was a big year of everybody getting very excited about generative AI and, rather than dipping their toes in, just jumping in with both feet. And this is the year of optimization and learning how to swim. And so the huge focus has been on how we actually get these models to retrieve the information that we want them to retrieve, audit that information to make sure that it's valid, and then present it to our end users in a cheap and fast way. That's really what everybody's homing in on right now. Great. Thanks for that response. Moving on to this next question. You talked about this a little bit on one of your prior slides. The question is around how we know a lot of tech giants are investing heavily in the Gen AI space to offer Gen AI within their actual software solutions.
But there's also concern about vendor lock-in and being tied into a tech monolith. So what are your thoughts for companies around how they should make these purchase decisions or tech buying decisions? So, for most folks, I would recommend that you look to whoever your already trusted providers and partners are, and not necessarily try and go find whoever the coolest, newest startup with the biggest model is. And within that, you know, we do see folks, like all of the hyperscalers, offering various models, and they all have various platforms for this. But one thing we've noticed, not just in my research on the cognitive search space, but now that we're doing research on the AI foundation models for language market, which covers all of these providers, is that there's tons of cross-pollination in the space. So aside from the fact that there's a limited talent pool, and so a lot of these folks have spread out from one or more companies to all these others, there's lots of cross-investment between different players in this market. So you have, for example, you know, Microsoft and Amazon both investing in Mistral. And you obviously have Microsoft and Amazon both investing in their own respective, like, pet companies as well. But there's a lot of cross-investment going on between all the big players in this space. And I think that buyers here should go in a couple different directions. So if you're trying to just get a capability that is built into some type of application you're getting, where, you know, you're not going to be developing a solution or working on anything technical under the hood, and you're kinda leaving that to your partners, then
I would say, you know, the model underneath doesn't necessarily matter as much to you, as long as you are not the one who's responsible for metering or gating its cost. And that's where we're seeing a lot of folks go on this. You know, there was a big push for "I need to bring my own model in all circumstances, and I need that choice". And that is still there to a certain degree, but lots of folks are also saying, "actually, you know, this is too much for me for, you know, x y z use cases, CRM, whatever. I don't wanna figure this out for myself. I wanna figure it out only for my differentiating applications." But on the other hand, for anybody who does wanna build things out, they are really trying not to lock themselves too deeply into any one provider. And to that end, you know, we've seen tons of uptake of open source models overall and of open source providers. And additionally, we've seen, I would say, nearly all of the proprietary providers also providing some way to access open source models as well. And then, moving forward, I think we're gonna continue to see a huge proliferation of models, not just the core foundation models, but also domain-specific models and other models that are more specifically tuned and quantized for particular purposes. And that, I think, is going to be driving a lot of the movement in this landscape. And I think that is again going to reduce the want to lock into any one particular vendor. But I do think it's also important to recognize that even if you're not locked into a vendor and you're trying to switch between models, there's still a switching cost that is significant for large language models and that doesn't necessarily exist for all of their other technologies.
So if you're switching from, let's say, you know, GPT-4 to Claude 3 again, if we're gonna use those, or to GPT-J, or Mistral, or Falcon 180B, or Flan-T5, or whatever it is, then the prompts that you're giving to these models are very different, and you're gonna have to reengineer them. So, you know, neither of these approaches is going to save you from all problems. It's just a question of what type of long-term strategy and what type of long-term approach you wanna take to this space. Amazing. Lots of useful information there. Thank you. Earlier in the presentation, you talked about a few different use cases for Gen AI: knowledge retrieval and others. When it comes to business applications, which are the most prime or popular to be enhanced by Gen AI? Any specific ones you're seeing there? Or ones that have not been enhanced by generative AI at this point but are primed to be, whether on the customer service side, maybe sales, or other? So I think there's a lot of distance to go in improving the marketing and sales pipelines and their communication with broader organizations using generative AI. We've just started to see the broader life cycle components get enhanced, with generative AI integrated in. So, for example, customer feedback and customer research tools are just starting to fold in generative AI capabilities. You know, we already had the text generators and whatnot folded into these workflows as well. But I think that's one space where, once more components come into this, the whole marketing, sales, and customer experience life cycle can really be enhanced by these tools. But I think it's important to also recognize that at this point, that's mostly gonna be on the back end.
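Rowan's point about having to reengineer prompts when you switch models can be made concrete. The template strings below are rough approximations of real conventions (Llama 2 chat's [INST] markers, a Flan-style instruction prompt, ChatML); exact formats vary by model and version, so treat this as an illustrative sketch of why switching isn't free rather than a definitive reference.

```python
# Illustrative prompt templates only. These approximate real conventions,
# but each model family documents its own exact format -- check before use.
TEMPLATES = {
    "llama2-chat": "[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]",
    "flan": "{system}\n\nQuestion: {user}\nAnswer:",
    "chatml": (
        "<|im_start|>system\n{system}<|im_end|>\n"
        "<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    ),
}

def render_prompt(model_family: str, system: str, user: str) -> str:
    """Render the same instruction into a given model family's expected prompt shape."""
    if model_family not in TEMPLATES:
        raise ValueError(f"no prompt template registered for {model_family!r}")
    return TEMPLATES[model_family].format(system=system, user=user)
```

Centralizing templates like this is one small way teams contain the switching cost: the application asks for a prompt by model family instead of hard-coding one provider's format throughout the codebase.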
It's not necessarily going to be that customers are all interacting with large language models from your favorite brands. We will probably see more of that over the course of this next year; you know, a few of them have been rolled out already. But there's so much to be done internally, and there are a lot more ways to manage risk around internally deployed generative AI applications. That's what I expect to see in the short and mid term around this. And then I think the other prime area to be almost completely reinvented, to a certain degree, is knowledge management overall, and just the practice of how we create, manage, and maintain knowledge within an enterprise. So one of my colleagues, Julie Moore, who is fantastically smart, has this great concept that she's been pushing for a couple years around agile knowledge management: essentially, bringing the ideas of agile development to how we create and manage knowledge. And that was a great concept, a great idea, but as with agile software development, it requires a lot of effort, communication, and a lot of overhead to make it possible. With large language models, like I was talking about before with, you know, the manufacturing folks who can have employees who are about to retire easily generate knowledge articles, this is how we can start to really transform the entire act of how we create and manage knowledge. And that then starts to lead us to: okay, what types of knowledge are we creating, and how are we creating it? So do we necessarily need to think about, you know, these giant, long policy documents anymore? Or maybe we need to take a more, like, componentized, modular type of setup for how we are designing our contracts or RFIs or things like that.
And those types of things I think are very interesting, because then that leads you to an entire rejiggering of your, like, processes in your organization, and that leads to different ways of actually working together. So I know that kind of jumped pretty far off from, you know, what the areas are that are primed to be reinvented by this. But I think it's really once we move from having these initial point solutions, whether it's in sales or whatever type of life cycle, or software development, to having a more holistic set of capabilities, that we can really start to see how this is not just, you know, freeing up a bottleneck here or there, but really enabling us to change the flow of the whole business cycle, whatever that may happen to be. And that's gonna happen in the next couple of years, and that's what I think is gonna be the really exciting stuff. Absolutely. True transformation, really, across all. Yes. And, actually, I'm glad you brought that up, because I think the transformation aspect of generative AI is something that is starting to trip up a lot of people in their initial implementations and rollouts. Because we all had, you know, these grand expectations after we used ChatGPT that it would be able to answer all of our questions and, you know, design business plans for us, and it was all gonna be great. But what we are actually seeing from the impact of generative AI in this first generation of applications, as we're seeing these initial rollouts, is that they are incremental changes when you draw them at a high level. Like, they're not yet fully transformational, and that, I think, is very logical and makes total sense. Like, we don't know what the new shape of collaboration and communication looks like, because we haven't actually had the tools to build that system yet.
So, I think there have been some folks who are kind of getting the cart before the horse in terms of the transformation potential. But then there are also some folks who are saying, oh, well, you know, you're kind of blowing this all out of proportion; this is not actually that interesting. And I don't think those folks are correct either. I think we are in the state of, as I said earlier, really optimizing how we are implementing these tools. And so we need to manage expectations and build our metrics around how effective this stuff is being, with that kind of expectation in mind. That makes complete sense. People are testing, but the true transformation, the process changes, the people changes, all of that, I think, is what you're saying is yet to come. Yes. Yes. Absolutely. Alright. We're getting down on time, so I'll ask just one or two more questions here. So, you know, I think enterprises are struggling with silos across the organization. So the question here is around who is best poised in an organization to help avoid the additional tech silos that new tech tends to cause. Yes. Let's not do shadow IT all over again. Please. Please. Please. So, I'll start with the bad of what I'm seeing in organizations. I'm seeing some organizations basically push forward generative AI initiatives from some kind of Gen AI strategy or tiger team that is not necessarily tied into the broader enterprise strategy or even tied into the IT team. So I think this tying together of generative AI strategy with your broader enterprise AI, analytics, and business strategy is really key at the beginning here. And so the folks that I've been seeing who are successful with this are basically having the strategy be coordinated and managed by a combination of relatively senior folks.
Oftentimes it'll start in, like, the marketing or digital organization or something like that, but then very quickly move, not into the IT organization, but toward having a very strong participant and even leadership role from the CIO, or, in a lot of organizations, from the chief software development officer and chief data scientist. And having those folks participating in this conversation can really help to accelerate your adoption and your understanding of these use cases. Sorry, I've lost the train of thought there. We might wanna re-ask that question. I apologize. No, it's great. So the question here is: app-first thinking creates silos. Oh, yes. Yes. Who is best poised in an organization to help? Yes. So, with app-first thinking, like, we obviously don't wanna do shadow IT all over again. But what we're seeing in organizations is they can avoid this if they are actually taking a deliberate, strategic approach to generative AI. And by this, I don't mean just having, you know, a generative AI tiger team or some kind of group that is like, oh, we're gonna throw out all these use cases and not be tied into the broader organization. What I'm talking about is actually having a deliberate strategy where you are integrating the excitement about generative AI with whatever previous AI initiatives you may have had, or even with whatever previous analytics initiatives you may have had, and bringing those different parts of the organization together. Because, you know, as we saw earlier in the slides, you're not just going to be bringing in an application. You're probably not just going to be training a model; you're not just going to have a BYOAI policy; you're gonna be doing all of these things together. And so having a group that is responsible for triaging use cases and for approving or disapproving new potential applications, that is really important for avoiding this very siloed approach.
Additionally, I think it's important for organizations to catalog and understand their vendor providers and what those providers are offering, because even if you're not sourcing some new capability from some new partner or tech provider, somebody in your organization may have found a way to gain access to some beta generative AI capability, or even a vendor may have introduced it as part of their baseline capability. And then all of a sudden, some marketing or research part of the organization has access to these tools that you never knew about, because they were not an explicit, paid part of the engagement, and now you have to think about managing them. So really having a broader strategy that encompasses generative AI, AI overall, application development, and analytics and data science, I think, is what is going to ensure success in the short and medium term. And I'm just gonna re-emphasize this: not separating your data science and analytics practice from your generative AI and broader AI practice. And I think that especially goes when we're talking about search specifically, not just generative AI enhanced search. Because one of the big things that we saw over twenty twenty-three was folks recognizing, hey, search is a key part of our insights and knowledge strategy; maybe we should think about folding it into how we're delivering insights overall. And, you know, this is something we've all been talking about for a number of years as coming down the road, and I think people have finally started to get it at a very grassroots level. Absolutely. One final question for you. I think it's an important one to ask, especially because we talk technology so much. We are tech first now, more than we have ever been before.
So the question really is around what you are seeing companies do right, or how they are upskilling their teams to be able to really manage and thrive in this new space. So there's no one thing. I would say that it's a lot of soft things, and it ultimately adds up to having a very tech-hungry culture. And so what we're seeing in terms of actual manifestations of this at companies is folks doing things like having demo days of various generative AI tools, and having user groups internally to communicate and discuss their use. For example, a lot of companies are putting their own large language model, or a vendor-provided one, behind a firewall with no additional tooling, just to give people something to use. Having user groups around that type of thing, so folks can start to develop various techniques for prompting and getting responses and all that type of stuff, can help people get up to speed or stay up to speed on this. And then additionally, just having ongoing communication and research, both within your digital innovation teams, but also communicating that out to the rest of the organization. And then the last thing, I think, is to just be very receptive and open and encouraging to any ideas around AI and generative AI that are coming from your broader organization. Now, that doesn't mean that you should listen to all of them and implement them, because a lot of them are gonna be bad ideas, or somebody's gonna come to you with a generative AI idea that is actually a predictive AI idea. But changing the mindset from "no, we can't do this" or "hold off" to "yes, but" or "yes, and" as the response can really help folks be in a good position. You know, we've already been in a phase of continuous adaptation for the past, whatever, fifteen years, which is my entire career at this point in time.
But that is just going to accelerate even more as we continue to move forward. So it's really about recognizing that we are in a continuous state of change, and your organization needs to be set up to address that, not to just say, okay, well, every five years we're gonna look at what the new innovations are. It needs to be a cultural practice rather than just a post hoc thought. Love it. All great things. It makes sense. Dive in. Let people get their hands dirty. We're at time. Rowan, thank you so much. For those of you that submitted your questions, we really appreciate them. If we didn't get to your question, a member from our team will respond to you directly. We hope you've enjoyed today's session, hearing from subject matter experts like Rowan. Thank you so much for your time, Rowan. So much to unpack and so many great insights. We hope you have a great day. Rowan, thank you. Thank you so much for having me. This has been great.