So good morning, good afternoon, everyone. Thank you for being here today. I'm Lindsay Price, product marketing here at Coveo. And today, we're gonna dive into what it takes to build an AI agent without the complexities. We'll talk about where AI delivers value and how you can actually take your AI agents from experiments to market and avoid the common pitfalls along the way. Today, we're joined by Isaac Sacolick. Isaac is the president of StarCIO. He spent over twenty years as both CTO and CIO. He's a published author with books on digital transformation, and he's a regular contributor to CIO.com and InfoWorld. He continues to share a ton of insights on his blog and podcasts about driving agile innovation and transformation. And Isaac is no stranger to Coveo. We've partnered many times in the past, and it's always insightful. So Isaac, thanks for joining me here today. But before we dive in, just a couple of housekeeping items. You are all on listen-only mode, but that doesn't mean we don't want to hear from you. If you have questions, please ask them in the Q and A along the bottom of your screen, and we'll take some time at the end of this webinar to answer them. Today's webinar is being recorded, so you'll receive the link within about twenty four hours after we wrap here. So with that, Isaac, let's kick this off. Yeah. Thank you, Lindsay. And good morning, good afternoon to everybody who's here. I'm really excited about this topic because, obviously, there's a lot of excitement around artificial intelligence and gen AI, and now, increasingly, around AI agents. I'm sure you've heard the term before, but in simplistic terms, AI agents are when we bring AI into our workflows, and they're helping us accomplish tasks. They're partnering with us. Increasingly, agents are able to speak to each other and communicate with each other.
And what we're looking to do is use agents to do more automation of things that are repetitive, where they can make safe decisions on their own, and put people in the middle to help make more consequential decisions. So you're gonna see and hear a lot about AI agents. And what we're gonna be talking about today is, you know, just like with applications, where we've used third-party applications to do a lot of the work that we do, at some point you start thinking about, well, what about our proprietary workflows? What about our customer experiences and connecting the end-to-end way we service customers? How are we gonna build agentic capabilities around this without all the development complexities? So that's our discussion today. And you've all been feeling this for a long time. You know, I'm aging myself a little bit, but if you know this movie, from Ferris Bueller: life moves pretty fast. If you don't stop and look around once in a while, you could miss it. And that's certainly true for digital transformation and for technology, but it's even tougher for AI. When I think about AI, maybe I'd adjust this quote and say: AI moves pretty fast. If you stop and look around for too long, you'll miss the opportunities. And this is how your executives, how your board, is feeling right now. They're seeing what the large technology companies are doing around AI. There's literally an example or an announcement every day talking about some version of AI being able to do something miraculous. There's a lot of talk about industry disruption. There are efficiencies to be gained. There are new skills that we need to bring our organizations through. So there's a lot of pressure to look at what's happening with AI and agents and start saying, well, how do we do this faster than our competitor? Okay.
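The pattern Isaac describes here, agents acting on their own for repetitive, low-risk work while people stay in the middle for consequential decisions, can be sketched as a simple routing gate. This is a minimal illustration; the risk scores and threshold are assumptions, not anyone's production policy:

```python
from dataclasses import dataclass

# Illustrative threshold: decisions at or above it go to a person.
HUMAN_REVIEW_THRESHOLD = 0.5

@dataclass
class AgentDecision:
    action: str
    risk: float  # 0.0 = routine and repetitive, 1.0 = highly consequential

def route(decision: AgentDecision) -> str:
    """Auto-execute safe, repetitive decisions; escalate consequential ones."""
    if decision.risk < HUMAN_REVIEW_THRESHOLD:
        return f"auto-executed: {decision.action}"
    return f"queued for human review: {decision.action}"

print(route(AgentDecision("resend order confirmation email", risk=0.1)))
print(route(AgentDecision("issue a five-thousand-dollar refund", risk=0.9)))
```

In practice the risk score would come from business rules or a model, but the shape of the gate, automate the safe, escalate the consequential, stays the same.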
Now, with agents in particular and AI in particular, if you try to do this too fast, if you just start putting things out into production without testing them, if you don't bring people along the journey to participate in validating how agents are changing their workflows, if we're deploying an agent directly to a customer and we're not testing to see what its boundaries are, you might end up with a crash. And there's plenty of examples out there of companies that have put agents and AI out into the field only to find that they weren't behaving as expected. And so what we're really trying to do when we think about agents is think about how we introduce the right types of guardrails. Right? How do we put the car on the road, have it drive at a reasonable speed, be able to say what the safety features are around this particular AI, and be able to experiment and improve this AI to deliver business value? So that's what we're all after, Lindsay. And now we're gonna start shifting gears and talk about, well, how fast do we actually have to do this? What is the velocity that we have to think about when we're introducing agents? And so when I think about the future, I start looking back at the past. I start looking at the different waves of digital transformation we've all been living through since the term was coined roughly about ten years ago. I remember hearing the term the first time and I said, wow, I've been doing digital transformation for a very long time. But let's step back to twenty eighteen. This is roughly when most organizations started catching wind: we have to do things differently around customer experience. We have to do better with data and analytics. We're gonna become more automated with our DevOps. And most of what we were trying to do in twenty eighteen was drive growth. Right? There was a very active marketplace. Companies were raising money. There were new competitors.
There were definitely new ways we wanted to expose technology to our customers. So a lot of the speed we were driving back then was: how do we get to differentiating products and services, become more data driven, and change our business model very quickly? And then when you look at the next sort of wave of transformation, we had a crisis with the pandemic. We started adjusting to financial uncertainty and to inflation. And so the next two segments of transformation really focused on: how do we think about becoming more efficient? How do we engage our people differently so we're more empathetic to their needs? How do we get smarter about our cloud spend? How do we integrate more of what our different solutions are doing? And so twenty twenty one and twenty twenty two became very much more practical years for transformation. And then you look at what's happening now with AI, and we're now in, I think, the second wave of AI. You know, the first wave started in twenty twenty four when the language models came out and we were all trying to figure out, what does this mean and what can I do with these things? There was a whole bunch of other technologies that we were also looking at: digital twins and the real-time enterprise. But most of twenty twenty four really started the hype cycle around what we can do with AI. And as we start planning for twenty twenty six, we're now starting to look at not necessarily a moonshot around AI or all the transformation that we can get from AI. We're starting to look at some real opportunities to deliver short-term and scalable business value from AI. So look at these different examples of what you can do with AI, and some of the other trends that are hitting us going into next year.
Of the ones I'm listing here, AI is the only one that can really drive growth, that can really expand the business, that's really gonna transform both the back-office operations and the front office. So I'm gonna pause here, Lindsay. I know the Coveo team has been working with organizations doing this transformation across a bunch of different industries. Which industries are leading the way with AI, and what sets the front-runners apart? Yeah. I think those are some great, relatable analogies of how to get there, but how to get there fast and safely, and where we are today. I think what I've seen work best with our customers is having a strong understanding that AI starts with great search: having a strong search foundation to both connect and retrieve your enterprise knowledge as the basis to build upon. In any company, our data is everywhere. It's saved in many places. And at Coveo, we connect those into a unified index and create that single source of truth. So whether you're creating an AI agent, powering your service, your website, your agentic use cases, it can scale across the enterprise with that single retrieval layer. And the success of that really hinges on getting that retrieval layer right. A lot of companies spend their time focused and spinning their wheels here, ensuring that the retrieval's right, it's secure, it's accurate, they're avoiding hallucinations. But our customers have turned to Coveo to help manage that. If we look at some examples, the Mayo Clinic has four thousand clinicians that rely on AI agents for fast, accurate access to their protocols. And they're seeing about sixty percent of their patients resolving their issues via self-service. At BMO, financial advisors can type a single question and get a trusted, compliant answer instantly.
And these are all just continuously improving the customer and employee experience. And in all industries, specifically if we think finance and healthcare, there's no margin for error for these answers. We have to trust it. It has to be accurate. If we look at Gartner's recent report, Rethink Search, they say that AI agents require a retrieval-first foundation. Without it, agents' answers are generic or wrong, but with it, they're trusted and actionable. So they've really doubled down and focused here on this foundation. And in the most regulated industries, finance and healthcare, their innovation depends on that strong AI retrieval. They're not building it from scratch. They've partnered with Coveo to ground their agents. And they've also then built the platform to continue to grow and scale on. So, Isaac, with that, let me flip it back to you. When companies start down their path of building agents to deploy AI, when they start to transform their business infrastructure here, what are some mistakes that you're seeing them commonly make? Yeah. It's a great question, Lindsay. I mean, if you look at the timeline that I presented here, you don't really have a lot of time to work on, let's just say, an emerging technology that's becoming more mainstream and is clearly making an impact across all the industries. And you shared some really good examples around it. So if we're sitting here in twenty twenty five, the things that we are planning and experimenting on right now are things that we really need to start showing early wins on, delivering value on, getting adoption on, you know, within that first year. And the ones that are really successful, we need to be able to show that we can scale in two years. So the first mistake, I think, is just being paralyzed: not knowing what to focus on, thinking through strategy to a point where nothing is actually getting done, and the world is moving faster than you can keep up with. So that's one mistake.
The other mistake I see is, you know, clearly every organization is gonna be experimenting with language models, with search, with the ability to build and test agents out. And there's a lot of excitement inside organizations around doing this. There's a propensity to start what I call peanut-butter spreading your resources. Right? And that means instead of focusing on a handful of really strategic experiments, you start having dozens, hundreds of experiments happening all over the organization, because there are tons of tools out there and tons of capabilities to go experiment with. And what you find is you spend more of your time coalescing information and figuring out what's going on and trying to guess which ones are delivering value, instead of making some early decisions about where you should prioritize your research. So having too many experiments happening at the same time is also an issue. So the way I like to think about this is: think about it from the vantage point of your customers. Think about it from the vantage point of your investors, and think about it from the vantage point of your employees. Okay? And we can all have this picture in our mind of places where we can deliver value, and start thinking about how to experiment with it, how to get feedback around it, and how to show some quick wins around it. And we all see this picture on the bottom right-hand of the screen of what things look like when we're falling behind. A lot of manual process. We're still stuck with antiquated systems, the way things used to work before, and we don't wanna be in that situation where we're falling behind. So I wanna pick some big wins. I wanna find things that can scale. Some research just came out around this, Lindsay, and this has been in the news quite a bit. This came from MIT NANDA. It's a report called State of AI in Business. And they came back with a startling revelation that only five percent of custom enterprise AI tools are reaching production.
Now, some of that you should expect from experiments, right? The goal isn't to take ten experiments and see them all in production delivering value. They're called experiments. Right? And so some of them are gonna be learning. Some of them are gonna be things that we pivot from. We don't expect a hundred percent of what we're doing to reach production. But, you know, I've been a CIO. I've been a CTO. If I went back to my board and said, you know, we're gonna invest ten million dollars in AI experiments in twenty twenty six, and only five percent of those dollars are gonna end in experiments that lead into production, I think I'd have a hard time selling that. You know? And I'd have a hard time going back to the employee base and saying ninety five percent of what you're working on is just gonna be a learning exercise. I think we need to do better than this. And the State of AI in Business, the gen AI report here, gives you some clues around this. Okay? And what they say is, for those of you who are looking to build gen AI capabilities, workflow-based capabilities, and getting into AI agents: the ones that are doing it with partners, the ones that are doing it backed by really smart vendors that understand how to build strong, scalable AI systems, are twice as likely to be able to bring those into production than those who are really trying to do it from scratch. And they have development teams. They're getting into the weeds on all of the development capabilities in cloud-native environments. They're building their own models from scratch. They're testing them out. And you look at the entire development cycle to go from model, to workflow, to being able to do an agent, to being able to test it internally, to being able to put it into production. It is a ton of work to be able to do that. So I look for simplicity, and I think what we're seeing here from MIT is a good starting point: don't try to build these from scratch.
I recently wrote about this in a blog post: three research-backed changes CIOs must lead in driving change. And Lindsay, some of the other examples I show in here really talk about the human element and being able to drive change management. That was a problem area, along with just the ability to bring the organization along to leverage the intelligence that you have. That's why I really like it when you talk about unified search and retrieval being the first thing that you need to do. If we wanna partner with an AI agent, I wanna make sure the agent is smart enough to be able to work with me. So tell me more about Coveo and how it fosters collaboration to develop search and AI capabilities as a foundation of the AI and agents that we're building into the future. Yeah. Definitely. And I definitely like the human element of it. We're moving to agents, it's AI, but how do we bring it back to what we're doing? At Coveo, internally, we genuinely believe in collaboration. That's how we innovate. But I think where collaboration really shines is how we support our customers. Coveo doesn't just deploy the tech. You don't just buy it and we don't just walk away. We really do become embedded within their team. We work with their team. We share best practices. We co-create solutions, and we start with the real challenge. We see what they're trying to solve. We map out a plan, we design solutions, and we deliver a measurable impact that aligns with their strategies. Coveo has an entire team behind business value that works alongside our clients to ensure that they're seeing real ROI in the areas that they're focused on. If we take Mayo or BMO, for example, the results they saw came from working hand in hand with their teams and continuously refining it based on the results they were seeing.
And that collaboration helps customers take those experiments that they're spinning their wheels on, take that ninety five percent that stalls, and shrink it so that more gets to production. Working together speeds up adoption. It removes the development complexities. It ultimately drives ROI faster. And it's collaboration like this that keeps our customers working with us. And the double-digit improvements they're seeing in things like deflection rate, self-service success, and cost savings speak for themselves, and I for one love seeing our customer success stories keep coming in. But it is getting that retrieval right so that they can scale it across their enterprise. And ultimately, today, when the Coveo platform is on board, we're seeing our customers able to build on that foundation, and they're able to take their agents to market and deliver results faster. So, Isaac, I'd love your take on how you're seeing organizations use their knowledge and data to continuously develop successful AI capabilities within their enterprise. Yeah. I mean, what we're seeing, Lindsay, is organizations looking at, you know, how do we bring these technologies together? For the last two years, everybody's played with LLMs. They've used LLMs. But the reality is the LLM is sitting here on screen number one, and the work that you're doing every day is in screen number two. And you need to, as a person, bring this information back and say, okay, the AI is advising I do a, b, c, and d, and I'm going into my platform of record and doing a, b, c, and d. So we've seen LLMs and what they can do. As technologists, right, when we start thinking about how we automate workflow, we've been doing this for a long time. We've built APIs out so that we can connect different parts of our workflow on the back end. We've built automation out. And now we can start talking about, well, how do we bring these two technologies together?
Bring the front end of knowledge and language and the back end of connecting it into our actual workflows, and see what happens when we put them together and create an AI agent. So let's see what this looks like, Lindsay. I'm gonna try to keep this simple for everybody. And let's just start with a couple of building blocks, where we understand how a language model works. We put input in, it churns, it uses its brain, and it comes back with some output around what you've asked it. Most of the language models that we're using take text input and output, but increasingly you're seeing more audio-based AI models taking audio in and out. You're seeing image- and video-based ones. So anything that can take inputs and outputs and put them through an AI model to say, you know, let me give you some suggestions on how you're going to solve this problem, is essentially what language models are starting to do for us. Okay? APIs have been doing the opposite thing. They take machine input, usually in the form of something called JSON. They process that information. They send an output out to a system to be able to do something. It might be in your CRM. It might be in your ERP or your commerce solution. And then it's gonna send a result back to the systems that called it. So it's a very machine-oriented type of automation. So we end up with a situation where models are really good at language, but they can't actually take action. And APIs are really good at action, but they can only speak tech. I need a software developer to be able to create a workflow around the APIs that I'm creating. So now let's start bringing this together. And this is what an agent starts looking like as you look under the hood. The first thing I'm going to do is go back to what Lindsay was talking about and think about universal search and retrieval. And I'm gonna break these into a few different components.
The first component is just understanding my surroundings: perception. What is being asked of me? What are some of the rules, what is some of the additional knowledge that I'm pairing with the question that's being asked of me? So I might be asking a question about customer service, and I might be bringing in a lot of information about that customer. I might be bringing in some business rules around how I wanna respond to that type of customer, and that's all coming in at the early perception stages. My brain, the AI model behind this, is gonna do a lot of reasoning, but it's also gonna be doing a lot of planning. How do we think about answering this particular question? And then later on, as we start putting this into actual workflow, it's gonna use feedback loops to get smarter and learn and improve how it's responding back to people. Underneath this all is the memory. And the memory is really your relevant knowledge retrieval. It's everything the agent and other agents have done. It's everything in your knowledge bases. It's everything people are doing in your organization's search. And what we're doing is taking this entire perception, brain, and memory, and we're now connecting it to our action, which is essentially APIs out. We're building something called an orchestration layer, which is the ability to bring all this technology together. And today, when we think about an AI agent, we're focusing on agents that take a user's input and provide some action and activity that comes out of it as a natural language response: what it is recommending to do, or what the human should do in the context of doing their work. So this is very much what an AI agent is doing and what it looks like. To become more agentic, okay, we're going to start doing a lot more things around it. So let me share some examples around that. Let's take a financial services example.
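A rough sketch of the anatomy just described: perception, a reasoning brain, memory retrieval, and an action layer tied together by orchestration. Every function here is an illustrative stand-in (the "brain" would be an LLM call, "act" would hit back-end APIs; no real model or vendor API is involved):

```python
def perceive(user_input: str, context: dict) -> dict:
    """Perception: pair the question with rules and customer context."""
    return {"question": user_input, **context}

def retrieve(memory: list, question: str) -> list:
    """Memory: pull relevant knowledge (your retrieval layer goes here)."""
    words = question.lower().split()
    return [doc for doc in memory if any(w in doc.lower() for w in words)]

def reason(request: dict, knowledge: list) -> dict:
    """Brain: an LLM would reason and plan here; we fake a plan."""
    return {"plan": "answer_with_sources", "sources": knowledge}

def act(plan: dict) -> str:
    """Action: call back-end APIs, then answer in natural language."""
    return f"Recommended next step, based on {len(plan['sources'])} source(s)."

def run_agent(user_input: str, context: dict, memory: list) -> str:
    """Orchestration layer: perception -> memory -> brain -> action."""
    request = perceive(user_input, context)
    knowledge = retrieve(memory, request["question"])
    return act(reason(request, knowledge))

memory = ["Refund policy: refunds within 30 days.", "Shipping FAQ."]
print(run_agent("What is the refund policy?", {"customer_tier": "gold"}, memory))
```

The feedback loops Isaac mentions would wrap this whole cycle, feeding outcomes back into memory so the next pass gets smarter.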
These are three different agents that are going to collaborate within the context of wealth management. So I'll be a wealth manager. Lindsay is my client. I'm going to have a pairing with a client relationship AI agent. That's gonna look at her account and make suggestions about how I can adjust her portfolio. To be able to do this, I'm going to be looking at a separate agent that's going to do a full analysis around her portfolio and make suggestions about places where I might shift the portfolio. I might increase some investment. I might decrease some investment. That portfolio optimization agent is giving me those suggestions. And then all of this is being fed into a third agent that's just focused on compliance. Are the recommendations that the agents, and that I, am giving back to Lindsay meeting compliance requirements? So these are three simple agents. You see they're very role-and-responsibility focused, and we can use them to answer questions in the context of servicing a customer. There are other examples of this. Okay? In healthcare, I might be providing remote healthcare, virtual patient care, to a patient. There's a patient engagement agent. There's diagnostic support. The patient is telling me some of their conditions, and I'm using diagnostic support to get some answers about what might be the sources of their ailment. And then if I'm going to prescribe something or make recommendations, there'll be some clinical oversight that will look at their entire history and make sure I'm not providing recommendations or prescriptions that are problematic. So that's a similar example, a very similar pattern, for healthcare. In manufacturing, there's also a similar type of use case in worker training. We're bringing new workers on all the time. I'm going to give that worker an operating assistant agent so that they can work side by side with technology and learn from it.
There's gonna be a knowledge synthesis agent that captures what they're doing and gets smarter. And a third one that's providing feedback back to our managers and to our leaders about how this particular workflow is working. So, Lindsay, three different examples of agents that are working. And I'm just wondering, what are some of the examples that Coveo focuses on to help customers accelerate their path to AI agents? Yeah. And just like what you said there, agents are really only as good as the information that they retrieve, and that knowledge and that information they retrieve has to be contextually rich. It's really at the core of what the agents deliver. And that's why we focus on three things: unified access, accuracy, and relevance. So if we talk unified access, our knowledge is everywhere. If you can't find it, neither a human nor an agent can deliver. So with Coveo, it's the single source of truth, and security lives at the index level. So we know we can trust what's coming out and that the right people are seeing the right things. If we look at accuracy, the information has to be accurate. With the index, we only deliver enterprise-verified AI answers, no hallucinations. And if it doesn't know the answer, it's not answering it. And it'll flag the gap so you can fix it in the admin panel later. That's real ROI. And lastly, relevance. This is where Coveo's edge is. We're not just about fetching results. Our models truly understand the context, the behavior, and the intent to deliver answers that are genuinely useful and not just technically correct. If we take Xero, for example, they created a digital-first, content-led customer support experience, and chatbots became their first line of support. They saw a thirty percent reduction in issues requiring human support, and over seventy five percent of their initial messages were being handled by the AI agents that they built.
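The accuracy behavior Lindsay describes, answering only from verified content and flagging gaps instead of guessing, can be sketched like this. The knowledge store and the substring matching are toy assumptions, not how any specific product implements it:

```python
# Toy "enterprise-verified" knowledge store; real systems would score
# retrieval confidence rather than do substring matching.
VERIFIED_KB = {
    "password reset": "Reset your password from Settings > Security.",
    "billing cycle": "Invoices are issued on the 1st of each month.",
}

content_gaps = []  # gaps get flagged here for admins to fix later

def answer(question: str) -> str:
    """Answer only from verified content; otherwise refuse and flag the gap."""
    q = question.lower()
    for topic, snippet in VERIFIED_KB.items():
        if topic in q:
            return snippet  # grounded in verified content, no free generation
    content_gaps.append(question)
    return "I don't have a verified answer for that."

print(answer("How does the billing cycle work?"))
print(answer("Can I pay in cryptocurrency?"))
print("Flagged gaps:", content_gaps)
```

The key design choice is the refusal path: an unanswerable question produces both a safe response and a logged gap, which is what turns missing content into something a team can go fix.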
So, Isaac, if we circle back, I know you have a lot of experience with this, but you've highlighted that customer experience is really the key differentiator in gen AI. Do you wanna talk a little bit more about that? Yeah. I mean, I just look back to the previous generations of major technologies that have come out into the enterprise and are also customer facing. You think about the decade of becoming mobile-first and building mobile experiences out. We did a lot to service our employees with mobile capabilities, but the real differentiator is when we connected customers into something that they couldn't do very well before, into our ability to service them very reliably on the back end. So we're not seeing a lot of customer experience agents out just yet. It's just early stages around this, but I'll tell you how we're gonna get there. We're gonna get there by tapping the intelligence of our subject matter experts to build up our knowledge bases, to build up our search capabilities, so that there's a wealth of information to build an agent around. And then we're gonna be working very closely with our people who are doing work every day and saying, is this agent making it easier for you? And is it delivering results that we can rely on? Can we base our decisions around these things? And as we are doing this more reliably and more frequently, we'll get into a situation where we can start seeing agents being able to talk to each other. We'll have more confidence and trust in their recommendations. So we'll automate some of the easy and low-risk things that they're doing, and that's gonna get our people thinking more about customers and how we service them. So let's talk more about how we get there. Okay? We're talking about being able to take this roadmap of building more of these agents out in our customer-facing areas, in our proprietary workflows.
And there's a few things that we have to really do to enable this, as trailblazers, as engineers, as marketers, as product folks. The very first thing is we need to have the right management around it. Are we doing things that are compliant? Is our data biased in ways that we have to address before we even start building our agents? And I'm gonna start addressing change management very, very early on. I'm gonna bring my subject matter experts in to make sure that my knowledge bases, my search, are working accurately. It's probably the most important place to start when we build search in as a foundation. And then I'm gonna start bringing my workers in, my people who are doing the workflow, to start behaving and acting like a human in the loop, partnering with AI agents and seeing what they're recommending. So there's a lot of management we have to do. Now here's the part where I think most organizations can get better, in terms of going from what we're trying to accomplish in bringing AI into our workflow to being able to work with the underlying technology. There are a ton of areas that I need to focus on that are very important to building this: building the knowledge out, deciding what decisions an AI agent is empowered to make, and coming up with some metrics around what a reliable answer is. So those are some of the things that I really have to focus on. And then going one step further is, you know, going from this idea of it's working to now bringing it out into production. How do I make sure that this agent that has worked for ten people in pilot mode is now gonna work with a thousand customer service agents, or two thousand wealth management advisors, or, you know, five thousand people out in the field doing virtual care? So I start really caring about the underlying operations that go into making sure that agents are scalable.
I look at this and say, well, how do I shift parts of the work so that I focus on the things that are truly proprietary, the places where I have to improve accuracy and bring my people in, and get help on some of the operations and technical side of things? So here's how I build a development strategy around this. And the first thing I'm going to do is really think through where I'm going to get real value from the agents that I might build out. So I might start out with a wide funnel of several hundred ideas, but I'm not gonna go commission experimentation around all of those. I'm gonna pick the ones that are really value driven, that are gonna make a real impact on employees, customers, and strategy in terms of what we're trying to accomplish. And I'm gonna focus my experimentation and my build in those areas. I'm gonna spend a lot more time on the people impacts, and I'm gonna understand what high reliability means right up front. Right? What does success look like? It's an AI agent that's able to provide recommendations that look like a, b, c, and d within the following boundaries. And those are gonna be my non-negotiable areas of requirements that I plan up front. Then I'm gonna go back and look for my partners. Okay? I don't think most organizations have the technical skills or time to go build the brain. Okay? I think they need help to build the foundation of the brain, to build the underlying technologies around it, to focus on how unified search and retrieval actually works, and to focus their own efforts on building the parts that are truly proprietary. How do we connect our APIs? What types of inputs are valid for this agent? How do we automate some of the decisions when we trust our agents? And how do we validate the results? Right, that's the part I think most organizations are gonna have to focus on. So, Lindsay, I did some writing for you guys.
We've been partners for a long time, doing webinars like this, writing for you around all the great things that Coveo has. And you have some key APIs that are key to enabling agent development. Can we talk a little bit about those? Definitely. And it's exactly as you said: developers don't wanna build everything from scratch. They wanna move fast, reduce the complexities, ship something that works, and save costs. It's this perfect balancing act of how do you get there. Just as you said, the Coveo platform gives us the building blocks for RAG, for generative answering, for search with tuning controls, and analytics. And if you want more control, we have our suite of APIs for indexing, retrieval, and generative answers. So you can customize them as much or as little as you need and fit them in where you need in your workflows. There's code options, there's no-code options. The key here is flexibility. Whether it's freeing up developers' time from dealing with prompt engineering, hallucinations, or access controls, Coveo handles it. We index the content where it lives, with document-level permissions, so we can serve up accurate, trusted answers or passage chunks to fit your needs. Coveo is agnostic, and I love this: you can build it once and you can deploy it anywhere. So whether you're building a chatbot, an agent, an internal tool, or a support portal, it's one back end and you can power it wherever you plug it in. And if we start talking more agentic, we have integrations with the major agentic platforms like Microsoft Copilot, Salesforce Agentforce, AWS Bedrock, and more. And those agents can also use the Coveo MCP server for retrieval. As for our Salesforce users, our Agentforce integration has simply wowed our customers, delivering rich, contextual answers with zero custom development. It just works, our users are impressed, and we have some really new AI things coming on that soon. A little teaser.
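To make the retrieval-first pattern concrete: the core developer job is taking permission-filtered passages from an index and assembling them into a grounded prompt for the agent. The sketch below is illustrative only — `Passage`, `visible_to`, and `build_grounded_prompt` are hypothetical names, not Coveo's actual API; the document-level permission check is a simplified stand-in for what a real platform enforces at index time.

```python
from dataclasses import dataclass, field

@dataclass
class Passage:
    # One retrieved chunk; allowed_groups mimics document-level permissions.
    text: str
    source_url: str
    allowed_groups: set = field(default_factory=set)

def visible_to(passages, user_groups):
    # Drop any passage the current user is not permitted to see.
    return [p for p in passages if p.allowed_groups & user_groups]

def build_grounded_prompt(question, passages):
    # RAG-style assembly: numbered, cited passages first, then the
    # question, with an instruction to answer only from that context.
    context = "\n\n".join(
        f"[{i + 1}] {p.text} (source: {p.source_url})"
        for i, p in enumerate(passages)
    )
    return (
        "Answer using ONLY the passages below. Cite passage numbers.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```

Filtering before prompt assembly is the important design choice: a passage the user can't see never reaches the model, so it can't leak into a generated answer.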
But the bottom line is, Coveo is really giving our developers what they need to build smart AI agents without the heavy lift, so that they can move faster, they can save costs, and they can scale. And with that retrieval-first platform, it's a platform that you can continuously build upon and innovate. And that's exciting. So Isaac, it's counterintuitive, but in a discussion around building AI agents, you highlight people and change management challenges. Do you wanna dive in a little bit more on that one? Yeah. I mean, look, technology doesn't build itself. Technology doesn't change how we're operating or deliver value on its own. I need to engage the people of my organization to do that. So let me give everybody a road map on how to think about this. When I say the word road map, very often people think a road map is sort of linear. Right? I'm gonna come up with an idea. In three months, we're gonna have a POC. In three more months, we're gonna go to pilot. And the ones that are successful will go into production in a year. That's a very linear road map. Right? The organizations I see that are successful building with AI agents are doing constant feedback loops. Right? And they're starting at the top, meeting very regularly and talking about where are we seeing AI as potential value drivers in our organization. That can come from anybody in the organization. But what I'm gonna ask them to do is come back with a one-page value proposition. One page, Lindsey. Why is this important for customers, and why is this important for the business? I'm gonna ask them ten questions around this. This isn't a vision statement. There's a link down there where you can get my vision statement template for free, but I'm gonna do those reviews on an ongoing basis. And the reason is because the technology is changing so much, and people are becoming more knowledgeable about what AI can do inside their workflows and for customers.
So I'm gonna do that every month. That's gonna feed back into what I need to make sure that we have high-quality, relevant information going into the brain, to be able to use this to start building and testing our ideas around agents. To be able to do this effectively, I'm going to use some words that I've been using in my books for a long time. I want to co-create this. I wanna focus on the knowledge inside my organization, and I wanna work with partners that can help me with a lot of the technology aspects around this. And then the third part of this is just being able to increasingly engage more people around using AI agents in their workflow. And I know there's a lot of people that are fearful about AI agents and what it means for their job and what it means for their future. So a lot of what we're talking about here is bringing them in early, having them be part of the process to understand how the world is changing, having them in that process of defining what success looks like and what some of the validation rules look like, and getting feedback from everybody that says, is this making our work easier, faster, more reliable, more productive? Is it better for customers? And I'm going to constantly churn this out. And if I do that, what you're gonna start seeing is I'm gonna constantly add more experiments, even on a monthly basis. I'm gonna sunset some of those, but some of those are gonna go through this entire cycle and become part of our production environment. Now, I recently wrote a white paper for Coveo around this. It's called Make AI Work: Unified Search and Retrieval for the Enterprise. The URL at the bottom will redirect you to download it. And what you're seeing here is a workflow that I elaborate on in the white paper about how to really get collaboration. You know, we've been talking about agile and iteration and experimentation for a very long time.
When we talk about agents, we're talking about bringing intelligence, bringing AI, in. There's nothing more important than having that constant cycle of, you know, who are my experts? Where are my APIs to connect things to? And how do I bring these two things together to validate, as I'm building my agent, is it meeting my validation rules? Is it meeting my success criteria? So I talk a lot about that in the white paper, and you can see the URL to get it. So Lindsey, this has been fun, right? We talked about agents, how to simplify them, what they are, some of the things that organizations need to do internally, and some of the things that they should get help from partners on. There's a lot of what I do outside of my work with Coveo: a lot of writing, my books, my blog, my community that I've just launched that you can get to at starcio dot com slash dtc, and the coffee hour that I host every single week for digital trailblazers, you, the people who are leading change and leading transformation in your organizations. Lindsey, it's been very fun sharing all this knowledge with your audience. Yeah, Isaac, thank you. I think it's always fun to get you on here and see where things are going in your world. I do have one question from the audience that I'm going to ask you. Following the guidelines that you laid out, things look like they're progressing within the enterprise, but how do you know when to stop when something goes wrong? And I'm gonna add to that: how do you course correct it? Yeah, look. I think it comes down to having some pillars defined upfront. Right? I described a vision statement that I think is really important to align people around what you're trying to accomplish. And I talked about validation rules and the ability to define what success looks like.
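The "non-negotiable" validation rules Isaac keeps returning to can be made executable: explicit checks that gate whether an agent's recommendation is acted on automatically or escalated to a human in the loop. The rule names, thresholds, and answer schema below are assumptions for illustration, not a prescribed framework.

```python
def validate_recommendation(answer, min_confidence=0.8, require_citation=True):
    """Return (passed, failures) for one agent recommendation.

    `answer` is assumed to carry 'text', 'confidence', and 'citations'
    fields -- an illustrative schema, not a real agent's output format.
    """
    failures = []
    if answer.get("confidence", 0.0) < min_confidence:
        failures.append("confidence below threshold")
    if require_citation and not answer.get("citations"):
        failures.append("no supporting citation")
    if not answer.get("text", "").strip():
        failures.append("empty answer")
    return (not failures, failures)

def route(answer):
    # Safe, validated answers pass straight through; everything else is
    # escalated for human review rather than auto-acted on.
    ok, _ = validate_recommendation(answer)
    return "auto" if ok else "human_review"
```

Encoding the rules this way gives the team the "pillars defined upfront" Isaac describes: when an experiment starts veering off course, the failure reasons are observable rather than anecdotal.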
So as you're navigating and iterating through what you're trying to build, you're trying to see, am I getting closer to that original vision, or am I starting to veer off course? You know, maybe I don't have the data. Maybe I don't have the people's participation. And what naturally happens in that discussion is, do I need to change what I'm doing, or is there something better for me to work on? And it's not an exact science. Right? This is why I try to tell companies, look, you should be looking at your portfolio and understanding your capacity to experiment. So we would put a number up there. We would say, look, we think at any given time we can run five experiments. And so if experiment number four wasn't providing results as fast as we thought it would, we would say, look, maybe it's time to pause experiment four and let one of the new ideas come in, realign our people to it, and realign our integrations with it to see if we can get better traction on it. So it's a constant churn, Lindsey. Yeah. And I think that makes sense. It's a generic question, but I love it: where do I start? You talked about strategy, planning, people. Where do I start in the development of things? Yeah. I start by mapping what people's roles are and what are some of the things they're doing in their different tools. And I start thinking about it from the vantage point of a Six Sigma black belt: what does this entire flow look like? And if I'm doing something on the customer end, I'm gonna start creating some journey maps. And if I'm doing something on the employee workflow, I'm gonna have an entire end-to-end workflow that's built up. Maybe not with all the details, but enough for me to get started and start asking questions like, if we put an AI over here in this one spot, what do we think it's going to be able to do?
And so I start brainstorming the idea, the blue-sky thinking about how AI is gonna impact something that we're doing with customers or start doing with employees. And at the same time, like I said, this is a flywheel. Right? So if I think I'm gonna focus on customer engagement in wealth management, just for example, I'm gonna map out all the things that that wealth manager is doing with customers. Those are gonna be the workflows that I might consider. And then I'm gonna start using the brain and start anticipating how I might actually orchestrate this by feeding questions into the brain, going back and doing search, going back and doing retrieval, and start asking, what happens when that becomes part of the workflow for a person to be able to use this information in the context of doing that work? That's gonna give me my early examples of connecting a workflow, a job, a role with the information that I have on hand, and the ability then to integrate those two and say, can I bring these two things back to that wealth manager? Can I bring it back into their tool to actually start experimenting and seeing, maybe not with all of them but with a small number of them, where they can actually see their interactions feed into the agent and start looking at the recommendations? It's pretty powerful to bring that back to them. I love it. It's a lot of iterating. If it were easy, we wouldn't be here. There's a couple more questions in the chat. We will follow up with you one-on-one afterwards. With that, we're at time. So I just wanna thank all of you for joining us here today. Isaac, thanks for joining me on this one. Hopefully, you all found something insightful that you can think on or act on. And I welcome you all to join us next week to hear more about agents and agentic AI innovation. The QR code is on the screen, and we look forward to seeing you all again soon. Bye, everyone.
Bye, everyone, and thank you, Lindsey.
Building AI Agents Without the Development Complexities
Cut through the AI noise and focus on what delivers value.
Most AI agent projects stall before they start—not because of vision, but because of complexity. Building from scratch requires time, budget, and in-house expertise many organizations simply don’t have. But it doesn’t have to be that way. Join Isaac Sacolick—CIO, author, and digital transformation leader—alongside Coveo’s product and marketing experts for a focused conversation on how mid-to-large enterprises can deploy proprietary AI agents fast—without building an AI factory from scratch.
What you’ll walk away with:
- Clarity on impact – Where AI agents deliver the most strategic value (and where to avoid the hype).
- A faster path to deployment – How to integrate data, processes, and automation without heavy custom coding.
- A foundation you can trust – Why a retrieval-first approach, highlighted in Gartner’s Rethink Search report, accelerates outcomes and avoids common AI pitfalls.
- Inspiration from industry leaders – Real-world examples from healthcare, financial services, and manufacturing leaders.
- A practical roadmap – Steps to start building proprietary AI agents without the development headaches.



