Welcome to Tech on Deck, a podcast series focusing on the critical business challenges facing B2B tech firms with an emphasis on enabling technology. I'm your host, George Humphrey, and today I'm joined by Scott Ferguson, senior product manager from Coveo, for a discussion on making agentic AI pay off: practical uses and real ROI. I think that's the big one everybody's looking for right now, that real ROI. So, Scott, you wanna just introduce yourself real quick to the audience? Hey, everyone. My name is Scott Ferguson. I'm a senior product manager at Coveo. I've been with the company almost a year now, and my background runs from games to lots of different SaaS-type organizations, and this is my first foray into agentic AI. So a lot of steep learning curves, but it's been a really fun ride. I'm excited about what we're gonna cover. Yeah. Scott, it's probably the hottest place to be in tech right now with what you're doing, what you're focusing on. I've got a lot of questions for you today. We've got about twenty-five minutes to get through them, so I'm gonna rapid-fire if you're ready to go. That's good. Alright, Scott. So, you know, I call this the second stage of real AI implementation and adoption in tech, and it's really this shift away from generative AI to agentic AI. When everything exploded onto the scene a couple of years ago with OpenAI, generative AI just blew everyone away, and we started to see a lot of use cases in tech organizations, improving operations, things like that. But now we've seen the second explosion, which is agentic AI. We recently wrote a paper on the difference between agentic AI, teammates, and copilots. But for you, given that you're in both of these spaces, generative and agentic, how would you define agentic AI in very practical terms, and what's the difference between that and, say, the first phase of generative AI?
I think I'd start by making sure that we're speaking apples to apples. So, just as a quick foundational base: what made generative AI, the way a lot of us experienced it over the last several years, is that it's very static, closed, and reactive. Where you flip the switch with agentic AI is that the model is a lot more dynamic. It becomes contextually aware, it's more capable of reasoning, and what makes it key is its ability to act autonomously, whether that's calling the tools or nodes within its ecosystem, or its interoperable ecosystem, with the grounded, trusted information it has access to. Interesting. And so then, from your perspective, what are the reasons you think there's such urgency right now for businesses to adopt agentic AI, and what do you think is driving this acceleration? It feels like this is moving even faster than generative AI did. So what's your take on that? It is. I wouldn't say it's hype. I'd say it comes down to two words, and those two words are unlocked potential. I think that's what's driving urgency. If we go back as far as the sixties, when chatbots came out, chatbot was a dirty word for a long time. Anyone that's experienced a chatbot knows they're scripted. They rely on keywords in that algorithm to give you an experience that was just frustrating. And suddenly, in the last couple years, LLMs came into the equation, and they changed the game to the point that it's more of a conversational experience. Agentic AI is a similar shift, and it frees those workflows. It's liberating. You can actually get your outcomes to be how they were originally envisioned, and they can even evolve beyond those expectations.
So would you say the promise of those chatbots is finally coming to fruition? Yeah. We just need a new word. And that's why I think a lot of people are referring to it as a conversational experience, because that's what you're seeing. You're seeing AI that is contextually aware, can reason, and can actually unpack something that's complex. Yeah. It feels like you're having a real conversation, I think, because you really are. And it can be jarring when you think about it. But you said something really interesting there in comparing this back to the chatbots of yesteryear. You guys at Coveo work a lot with companies on this transition towards agentic AI. Would you say there's still a lot of fear, anxiety, or resistance from your customers because of such a bad taste from their own customers over agentic AI, I'm sorry, over chatbot implementations? You know, I wouldn't call them jaded, but there are obviously people that have been scorned and burnt in the past. But since LLMs were merged with the generative AI component, I think people have witnessed tangible differences, and seeing is believing. At an enterprise level, yeah, people are a lot more, what's the word, cautious or calculated in their implementation, so they'll do some valid testing, running things in an experimental, A/B-type way of gauging whether there's actual value and return, but we're not seeing as much hesitation. We're actually seeing people just being careful, and the reason why I say careful is this: if you take, for example, the concept of hallucinations, people in the past have always talked about them. When a customer is talking to you, they're like, I want zero hallucinations, and you're like, well, the industry is not ready for that yet.
Where people are being cautious and careful is the fact that on a private LLM model with lots of restrictions, where it's not responding because it's very constrained, you can get it down to, like, one or two percent. But on those public, aggressive models, it's horrendous. It's, like, eighty percent hallucinations, and you would never put your business at risk by exposing it to that kind of AI impact. But if you use something that uses, at least what we use, retrieval-augmented generation, that's RAG, it's grounded in trusted, permission-aware knowledge, and that helps mitigate that type of scenario greatly. You know, that's exactly where I wanted to go next with my next question: getting under the hood a little bit, getting a little bit more technical. So I live in AI tools every day. I'm constantly using Gemini, I'm using ChatGPT, I'm even using the Comet browser, which I've started to fall in love with. But I have definitely noticed that difference between controlled tools with less hallucination and some of these more open tools. So when you get under that hood, and you're talking about LLMs and RAG, can you talk about, you mentioned trusted data, can you explain how the pieces work together to deliver accurate, permission-aware answers and to avoid those hallucinations? Give me a little bit more juice on that. Yeah. I can answer that and also talk about, I guess, how an AI agent can actually increase some types of risk and what you have to do to mitigate that. In terms of ensuring that it's grounded, permission-aware knowledge, what makes it different is that you're calling upon your customer's indexed documentation. It's not going out into the wild and getting information from Reddit. It's not getting information from other unknown sources.
The passages it's retrieving, it's taking from your own documentation, but then using its own brain to make sure it's parsing that correctly and turning it into something that has the appropriate context for what you're looking for. I think the biggest risk when it comes to hallucination or anything like it is that an AI agent naturally has higher confidence, and it's smarter. So, essentially, as that confidence rises, its ability to hallucinate, or at least to think that it's providing the correct answer, increases. And that's why, for players like us or others, it's very important to invest in enhanced observability, enhanced prompting, and then really putting together some very well-structured guardrails. That's just critical to success. Yeah. Fascinating. You know, so having a RAG model that's really focused on your internal dataset is gonna be really, really important to improve accuracy. Absolutely. And right now, I'm writing a framework paper on a data maturity model, because I'm terrified, from the conversations I have with customers of TSIA, at how unprepared they are with data readiness. So can you talk a little bit about your perspective on data maturity models? How important is it? When you work with customers, do you have that same fear that I have, that so many companies are just so immature with their data maturity and data readiness? Just tell me a little bit about that from your perspective. Yeah. When it comes to data readiness and data maturity, we start with our indexing. So we're able to assess the level of quality that they have and the way that their data is structured or unstructured. What's really different, and where agentic AI really comes into play, is that it can scale accordingly based on the problem's complexity.
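As a reader's aside, the grounded, permission-aware retrieval described here can be sketched in a few lines. This is a hypothetical illustration, not Coveo's implementation: the `Passage` structure, group-based permissions, and the stand-in `scores` dictionary (a placeholder for a real lexical or vector retriever) are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str
    text: str
    allowed_groups: frozenset  # groups permitted to see this passage

def retrieve_grounded(scores, index, user_groups, top_k=3):
    """Rank only passages the user may see. Filtering happens BEFORE
    ranking, so the generation step never receives restricted content."""
    visible = [p for p in index if p.allowed_groups & user_groups]
    return sorted(visible, key=lambda p: scores.get(p.doc_id, 0.0),
                  reverse=True)[:top_k]

def build_prompt(question, passages):
    """Ground the answer in the retrieved passages only, to curb hallucination."""
    context = "\n".join(f"- {p.text}" for p in passages)
    return ("Answer ONLY from the context below. If it is insufficient, "
            "say you don't know.\n\nContext:\n" + context +
            "\n\nQuestion: " + question)

# A user in the "customers" group cannot retrieve HR-only content,
# even though it scored higher for this (contrived) query.
index = [
    Passage("kb-1", "Hold the power button 10s to reset.", frozenset({"customers"})),
    Passage("hr-9", "Internal salary bands.", frozenset({"hr"})),
]
hits = retrieve_grounded({"kb-1": 0.9, "hr-9": 0.95}, index, {"customers"})
```

The design point echoed from the conversation: permission filtering sits in front of ranking and generation, so the LLM can only be grounded in documents the asking user is entitled to see.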
And so maybe you're an organization and you're a little bit hesitant, maybe your confidence is low in terms of how you have your data structured, or where you have it located, or what kind of restrictions you have on it. One thing to understand, and this also goes to concerns about your overall budget for this, is that simple queries get simple, fast, low-cost, efficient responses. That's the beauty of the orchestration that happens. Even one- or two-word queries from an end user might look like a simple question but actually be complex. If they use a term like "is broken," what does that mean? That's when complex queries justify deeper orchestration, because it delivers better, more complete answers, and then they yield a higher CSAT, more case deflection. There's a tangible ROI associated with them. So it's not just about spending less. It's about actually spending smarter. Fascinating. And do you think companies are gonna have to do a lot of data restructuring? Or, as you're doing this indexing, and as technologies and AI itself become more advanced in discovery, do you think companies should spend a lot of time restructuring or modifying databases? Or do you think we're at a place now where that's less important than it used to be and you can just go in and find that data? So there's always gonna be a lot of bad data out there. And the undertaking of asking an organization to go through a self-audit, without understanding what the systems are looking for, would unfortunately be an example of the blind leading the blind. It's not a realistic expectation.
So the onus should be on the groups that are at that ingestion point, that have the enhanced capabilities and are continuously making improvements to how they ingest and index that information: how they can ingest unstructured data, and how information that used to be in unrecognizable formats, in tables and such, can now be consumed appropriately, where in the past it would have been almost useless. Wow. It's a wild time. So we at TSIA advocate for a centralized, evangelistic organization for AI. We think it's important that, at the C-suite level, there's a team that is responsible for driving AI transformation throughout the entire company. What's your perspective on that? And do you think that a counterpart to that should be centralized data leadership, or centralized evangelism for getting data in good shape? You're seeing right now in the news these coalitions, these new standards that are being promoted and shared, and I think that's something a lot of people are welcoming now. But it comes down to having at least some subject-matter experts, or a small group, that can really help with validating the integrity and the quality and keeping everything grounded and trusted. So I don't know if you would say you need a full department, but I think the benefit of having some knowledge experts within your organization, who can use the tools at their disposal to validate and actually help improve the outcomes they're generating, is key. That almost sounds cultural. It's as important to have a cultural transformation as an organizational one. Yep. That's a good way of putting it. Awesome. Very cool. Alright, I'm gonna switch gears a little bit now, because we're talking about a lot of this transformation.
It's unavoidable, and the impacts we're starting to see are real. We've seen this story from MIT: ninety-five percent of AI implementations fail to produce an ROI. To get controversial, I think that's BS. Maybe it's just because we see so many use cases where there actually are some dramatic returns on the investments. So I don't know that it's ninety-five. I know it's not ninety-five percent in the other direction, right? I know that so many companies are still struggling as they go through this. But you just mentioned the news. We just saw this week that Salesforce has announced they're reducing their support staff by four thousand employees. Basically, I think that's half of it, about five percent of the employee pool of the entire company. What's your reaction to that? Are you at Coveo seeing similar success stories? I mean, it doesn't sound like a success story to have four thousand people lose their jobs, but it is an ROI story. For sure, there is a promise of ROI, reducing cost and improving customers' experiences. And it's really this agentic AI that's delivering on the results, in my opinion, probably way more than the early generative AI implementations. What's your reaction to that news, and are you seeing similar stories in your customer base? I think the best way to describe it is that the way we're interacting with information is changing, and orchestration is at the heart of that transformation. Whether it's support or beyond, agentic AI is really understanding its users' intent, reasoning through complexity, and guiding them dynamically. And because of that, you're obviously seeing an economy of scale associated with it. Are we seeing on our side some real-world, tangible results? Absolutely. We're seeing them pretty quickly.
And when I say pretty quickly: where it used to be that you'd implement a certain service or solution and your measurable results would take a quarter or so, we're seeing it a lot more accelerated. It has a much greater velocity of ROI. We've got something called the Passage Retrieval API, our PR API, and in just two weeks, for one major implementer, we saw a seventy-three percent jump in answer accuracy and a boost of twenty-two percent in retrieval precision. That's making a huge difference. And when customers measure that against their current infrastructure and what they have, they're making that decision as to where do we go from here, and how do we continue to accelerate with that. Yeah. With Salesforce now rolling out Agentforce, we understand that we ourselves have Agentforce actions. It helps classify and route and resolve cases automatically while keeping the knowledge sources grounded, and it remembers the context. And in doing so, small things like that add up: what was done in the past by a few team members can be done by a single one. Yeah. So Phil Mannis from Salesforce.com gave a keynote speech at our conference back in the spring, and he put a number up on the board. I'll probably get it wrong, so if there's anybody from Salesforce, don't pay too much particular attention to this data point, but I'm pretty sure it was ninety-six percent case deflection when they implemented this capability. I think that's probably what allows them to get that kind of a reduction in human labor, when you have that kind of a result on the automation side. So can you give me your top two, without naming names, protect the innocent and the guilty, two really great examples? You just gave one there. Where are you seeing some dramatic impact to the financials of your customers?
I think we've shared this recently, but where it used to be you'd see impacts in the hundreds of thousands in terms of cost savings, tied to what you just mentioned, case deflection, we're now seeing customers promoting results in the millions, and we're not just talking a few. We're seeing things in the tens of millions as well. I think that's the biggest wow moment. And once you see that being advertised and advocated, you know there will be an associated ripple effect, and it'll have both some positive and negative consequences. But the whole goal of this is to make things more satisfying for the end users. Yeah. Back to that better experience. So it gives the customer a better experience, and it gives an internal ROI on the investments in those technologies. I think that's a total win-win. So very cool. Another question for you, then: it's like all anybody can talk about is AI, and we see leaders being pressured from the C-suite, from the board of directors, like, you gotta do something in AI. So what's the first practical step that you would encourage for anyone that's considering this or under this pressure? Where would they start? What would they do to start small and demonstrate real returns? I mean, George, if I was gonna give leaders one bit of advice, it would be to start small, but start grounded. I think those two have to be intentional. Also be intentional about defining the role. If you're adopting agentic AI, be clear in defining the role the AI agent is gonna play. Be also clear on what's being augmented versus what's being replaced. Doing so is gonna help ensure buy-in from additional stakeholders.
You'll reduce any kind of resistance you're gonna get, because it'll be aligned with your North Star vision for what you want this agentic AI to do for your organization and what role it's gonna play. Definitely, leaders should be picking one high-impact use case, proving its value quickly, and then scaling from there. Everything needs to be grounded in its own trusted knowledge and permissioning from day one. I think that's another thing not to miss out on. And lastly, that's how you move fast without getting lost in experimentation. You know, I was just gonna ask you another question, and I think you answered it right there at the very end. It was a loaded question: is it better to throw AI at something that's less understood, a process that has never really been implemented? Or would it be better to start with something that is well understood, that's maybe painful and repetitive, maybe prone to human error, but at least the process is well understood and the data available to make decisions in that process is well understood? Yeah. You definitely need a quality baseline so that you can really measure its value, measure the benefit and the enhancement you're seeing. And that's how you win. You gain advocacy. It's something you can then champion within your organization. Yeah, definitely something where you have established quality metrics. And from there, that's where you can be more provocative with exploration and thoughts of, like, how do we go beyond just this use case? I think a lot of people are still confusing parts of agentic AI's capabilities with automation, and it's not automation. It's about, like I said earlier, freeing those workflows, so that you can get to how this was originally envisioned in a blue-sky scenario and then strive towards that. Love that you mentioned grounding it in metrics. You know? Yeah.
Like we mentioned, case deflection rate, time to resolve. Those would be some good ones. Okay. So here's my very last question for you, because we're short on time now. I just asked you about leaders under pressure. But if you had one piece of advice to give to someone coming out of college, someone earlier in their career, how much of the advice you'd give them would be around AI in their career? This is a perfect question. My daughter is at the age where she'll shortly be entering that university stage, and, really, it's about understanding at this stage how to maximize the optimizations of working with AI in all facets. And it's about really understanding that this is about enhancing the way you do your activities and duties, really just cutting to the chase. If there's a task you are weaker on, use it, augment it to strengthen that area so that you can succeed and unlock your potential in other areas. When it comes to people coming out of college and looking to join the workforce, it's the same thing. Understand how this will enhance your potential, and be mindful and aware of where there will be roadblocks and where there will be opportunities for things that can be automated, but take it from a pragmatic perspective and be reasonable with it. I think there are a lot of people that have an interest in, and an aptitude for, everything that's AI related, and I think those that are not trying to swim against the current are gonna really enjoy this ride during this transitional phase over the next decade. I heard this quote, I think it might have even been Bill Gates: everyone always overestimates the impact technology will make in the next two years but underestimates the impact it'll have in the next ten years, and that's an example.
When people are coming out of school, they're fearful or concerned about what it's gonna be like in two years, when you can't even predict how fundamentally things are gonna change in ten years. Wow. Yeah. Words of wisdom. And on that note, it's a great place to leave it. Scott, I wanna thank you very much for participating today. Scott Ferguson, senior product manager at Coveo, that was an awesome conversation. I personally learned a lot in the short time we had together. Thank you to the audience for joining and listening in to this episode of Tech on Deck, and we'll see you next time. Thanks again, Scott. Cheers. Cheers.
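The KPIs that recur throughout the conversation, case deflection rate, CSAT, and time to resolve, reduce to simple arithmetic once the raw counts are available. A minimal sketch; the function and field names are illustrative, not any vendor's reporting schema:

```python
def case_deflection_rate(self_served: int, total_contacts: int) -> float:
    """Share of support contacts resolved without a human-handled case."""
    return self_served / total_contacts if total_contacts else 0.0

def mean_time_to_resolve(hours: list) -> float:
    """Average hours from case open to case close."""
    return sum(hours) / len(hours) if hours else 0.0

# 960 of 1,000 contacts self-served corresponds to the roughly 96%
# deflection figure discussed in the episode.
deflection = case_deflection_rate(960, 1000)
mttr = mean_time_to_resolve([2.0, 4.0, 6.0])
```

Tracking these against a pre-AI baseline is what turns "start small, start grounded" into a measurable before-and-after comparison.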

Making Agentic AI Pay Off – Practical Uses, Real ROI

Agentic AI is moving from concept to competitive advantage at record speed—cutting support case volumes, accelerating resolutions for customers and employees, and transforming commerce experiences. TSIA sees agentic AI as one of the greatest contributors to improved efficiency and enhanced customer experiences in the history of tech. TSIA’s members are already experiencing a 79% increase in operational efficiency with an 8% increase in customer satisfaction. But those gains only happen when AI is grounded in your business realities and data.

Join Coveo’s Scott Ferguson, Senior Product Manager, and TSIA’s host, George Humphrey, Distinguished VP and Managing Director, for a practical, slide-free conversation on how to turn agentic AI ambition into measurable impact.

You’ll learn:

  • The fundamentals of Agentic AI, why businesses are racing to adopt it, and the must-have elements for success
  • How to match the right level of AI complexity to your business priorities, data readiness, and cost‑to‑benefit realities so you can prove value fast
  • Why grounding Agentic AI in trusted, permission-aware knowledge is essential to eliminating hallucinations, reducing risk, and building stakeholder trust

You’ll leave with a practical understanding of Agentic AI and core use cases for reducing cost and increasing revenue, along with concrete decision criteria for scoping projects and the critical capabilities you’ll need to get from proof of concept to production – without getting lost in the technical weeds.

George Humphrey
Distinguished VP and Managing Director, TSIA
Scott Ferguson
Senior Product Manager, Coveo