Thank you, Alex. Tia, we're so happy to have you join us today at Relevance 360, and I don't want to waste a minute. You have had a storied career in AI. I wonder if you could share a little bit about yourself and your role at AWS.

Thank you, Seb, and thank you so much for having me today. My AI journey didn't start in AI. I've been on this applied AI path for probably the past seven years. But prior to that, I spent the vast majority of my career working for financial services institutions in technology, mainly around software and full stack engineering, back when the public cloud became this phase or wave or craze, if you want to call it that, and I focused on tons of transformations. Whether it was going from SQL Server to the latest version of Oracle Exadata, or going from VMs to private cloud to public cloud, or going all in on DevOps. So I guess you can say that this whole applied AI, generative AI phenomenon is just the next chapter or evolution of my career. When I stepped into it, I didn't know it would be what it is today. But I joined AWS a little over three and a half years ago as a general manager of AI for AdTech and MarTech. So think personalization, audience segmentation, all of those things that help companies get to know their customers better, delight their customers proactively, and serve or reach their customers more proficiently, leveraging artificial intelligence. And then about a year ago, I moved into a unique role in the company, where they tap individuals to be a technical advisor for one of the most senior leaders in the company. And believe it or not, my days are still spent on AI, sometimes capacity, sometimes models, sometimes Bedrock, but with a wider, deeper breadth, if you will, across the entire stack for generative AI.

Thank you. There's no shortage of buzz around AI today, especially with agentic AI dominating the headlines.
What are you hearing from your peers and customers about the real opportunities and maybe distractions in this space?

So in the current climate we're in globally, and this is not just a North American opportunity, companies are really trying to figure out, one, how to optimize expenses, so do more with less, and two, how to innovate faster to generate more profit. That's just the meat of it. And I think companies see AI, specifically generative AI, as a means to an end. Now, it's not to be taken lightly. And what I mean by taken lightly: we know that it takes a huge investment, whether that's people, or capital in the form of infrastructure, knowledge, and models. There's an investment that has to be made. I probably spend fifty percent of my time with AWS's customers and partners like yourself, and oftentimes I'm really diving deep: okay, you want to do x, but why do you think agentic AI or generative AI is the solution for that? How are we going to measure success? And then how do we chart the path to achieve that, whether it's deterministic AI, generative AI, or agentic AI, to ultimately reach that goal? When OpenAI first launched ChatGPT a few years ago, there was this craze, as I mentioned earlier, where people just wanted to do something. And now we've figured out how costly it is. And so companies really have to have a strategy for this and for how they're going to implement, measure, and continue to refine. So I hear the desire to do it. I hear companies really cracking down on the how and the what more than they were two years ago. And I hear people really thinking about the trade-offs of building versus buying, and continuing to measure success, ultimately for the customer or even for internal employee productivity.

Okay. Really important to be able to measure success, as you're saying.
But can you share any real examples you've seen of external use cases that were delivering results?

So, global financial services institutions are using agents to help check regulatory reporting. Regulatory reporting is a must-do no matter if you're in the EU or America or wherever, and they are using agents to augment human support to improve quality, accuracy, and throughput, and to reduce the number of humans needed to validate the creation and accuracy of regulatory reporting. They still have to have humans in the loop, right, to validate what the agents are creating, answering, and producing, but it has definitely improved accuracy and saved time. And as you can tell from these examples, it crosses the gamut, right? Whether the creative aspect or the must-do aspect. In these examples, companies are using generative AI, so large language models, as well as agentic AI, to improve throughput, reduce time to market, and overall save expense for the company.

Okay. Really interesting examples, and it shows how these things are really there to improve existing processes and do things more efficiently. And you said it a little bit, but you often emphasize solving real customer problems rather than deploying AI for its own sake, which we saw a lot at the beginning. Everyone wanted to use AI, but they were not sure for what. Are you seeing more organizations shift toward this problem-driven AI adoption? And what do you think is driving that change?

So there was a recent study from MIT. MIT published a study around the cost and the success of generative AI, and what they published was that ninety-five percent of pilots failed. So, to your point, why did ninety-five percent fail? I believe it falls into two buckets, and the article dives a little bit into this. One, they don't choose the right problem from the beginning.
They think that generative AI and agentic AI are a one-size-fits-all thing, which means it can solve any problem. And as with anything in a company, if you don't have the right culture, the right tools, and the right skill set in your employees, then you're likely not going to succeed. And so the difference between the companies that fail and the ones that succeed is just that. They have a strategy, with a culture that supports it, with the right people, and they're measuring success. The other reason why companies most often fail, other than just choosing the wrong problem or opportunity from the beginning, is the lack of ability to fully integrate. Going from pilot to production is not to be taken lightly. Pilot: small data, tinkering with something, proving it out. Yes, it kind of works. Production: the data is different, the scale is different, the impact is different. And companies have not figured out how to truly take generative AI from pilot to production and fully integrate it into their company. And so if you don't choose the right problem and you don't have the right mechanisms, processes, and tools in place, it's hard to succeed.

Yeah. You're right. And we've seen it a lot also: a lot of prototypes that never see the light of day, actually, because the processes were not there to transition them into production. I'm going to change the subject a little bit and talk about the data foundation. Garbage in, garbage out remains true. Structured and unstructured data management remains a major hurdle when it comes to successful AI implementations. Where do you recommend companies start when tackling the data challenge at scale?

So I like to bring up my Capital One journey. And though I work for AWS, I'm a firm believer that had I never worked for Capital One, I probably wouldn't have been the best candidate for the job I was hired into. Capital One's journey is all published, so it's nothing secret.
They started out on a public cloud journey in 2014, 2015, well before certain companies were really going all in on the public cloud. Now, you would have thought, okay, they finished the public cloud journey, now what? They didn't rest. They went immediately into a data transformation. And they chose to do a data transformation because Rich Fairbank, the CEO back then, had a vision that AI was going to fundamentally transform the business as we knew it at the time. And so he understood that in order to really benefit from artificial intelligence, the data had to be right. Now, you can imagine, through acquisitions, the regulatory landscape, and different lines of business, the data was messy and murky. It wasn't cataloged. You didn't have the proper lineage. Something that might represent customer ID in the commercial bank was different in the retail bank, and there was no normalization, so to speak. So we embarked on this journey when I worked there to really go real-time streaming, high-quality data products, public cloud first. What did that mean? We created products that focused on fixing and maintaining our data. And we understood we had to do that because, to your point, Seb, garbage in, garbage out. We needed to know where the data was located, how to access it, and how to keep it clean and high quality, and then cataloging and all these other things were turned into products that enabled us to propel and keep that momentum. Now, you talk about generative AI, and you touched on something. We did that with all of our structured data. The beauty of generative AI, if done well and done right, is that you can tap into unstructured data that it can learn from to produce something for us that didn't exist. Right? And in doing that, people assume the data isn't as important, the cleanliness of it, the high quality of it, when that's actually the total opposite.
While you don't have to structure it into columns and rows, which takes an enormous amount of time, and you can work with conversational data, you still need to know it's trusted. And you still want to have belief in the data, know where to access the data, and make sure that you're using the data appropriately. That only helps you in techniques such as refining and fine-tuning to gain greater quality, if you will, from your generative AI and agentic AI applications.

Thanks. It's so true, and we've seen it. Generative AI sometimes just shines a light on the problems that were already there, so the bad data just resurfaced. AI overload is real, and so is the overwhelm that comes from rushed and failed implementations. We've talked about it a little bit, but just to summarize, what do you think high performing teams are doing well to ensure that their AI projects deliver tangible ROI? And, more importantly, how do they avoid the common pitfalls?

Yeah. So I think high performing companies, and then we'll get to the team aspect: one, they have a strategy, and it starts at the top. They don't look at this as an AI pet project. They are actually making agentic AI and generative AI a strategic pillar in their multiyear strategy. You clearly have it earmarked: you want to do x for y. People clearly understand it. You allocate a budget toward it, and you have KPIs, or key performance indicators, that you are tracking along the way. I want to improve conversion. I want to improve search accuracy. I want to improve engagement and time spent on a page or a site somewhere. I want to improve self-help so that you reduce the number of calls in a call center. All of these are key performance measurements that a company is working backwards from, and they have a strategy and the dollars to support it. From there, people understand roles and responsibilities.
You have a culture that, one, empowers innovation and failing fast, so people aren't fearful to try new things, but then you have ownership. People know what their role is. They have a high degree of ownership to go do, to measure and meet the KPI that's already established. From there, you have the team-level, high performing culture that you mentioned. That's the right skill set and the ability to continuously learn. So what are those resources, where can people find them, and do they have the time? Again, I've already mentioned failing fast, and you have a reward mechanism that allows people creativity. Collaboration is really key, because no one team has all the answers. So how do you collaborate across teams, reward teams when they do something really well, and elevate and learn from it across the company? And you empower your first-line managers. You empower your first-line managers to coach, develop, and implement. So, again, at that point, it's not the CEO or the most senior leaders that are saying yes or no. Those first-line managers are supporting their teams, and they say yes or no to propel and go faster. With that, those companies also are not fearful of knowing when to buy or when to build. If I had a matrix that I could draw on the screen, across the x-axis you would have uniqueness, IP: is it core to your business? Across the y-axis you would have resources. And that's what you kind of use to guide you. If it's strong IP that you want to maintain, that sets you apart from your competitors, maybe you build. But if it's a commodity, you know, multiple people use it, it's no trade secret, and you have the resources, maybe you go acquire, but you integrate fast. So there are these trade-offs you have to make, and companies need to be okay knowing when to buy, when to build, and when to do a hybrid, and empowering their first-line leaders and resources to know how to make those decisions.

Thanks.
And we've seen it also: people build prototypes really easily and fast, but as your answer shows, it requires a real company focus, a real project, and real processes around it. It's not just something you can do in a day and then try to put into production. So thank you, Tia, for your time and those incredible insights. It's clear that success with AI isn't about adopting every new technology, but about building a disciplined, customer-focused foundation that drives measurable outcomes. I hope today's session sparks new ideas for how you can accelerate your AI strategy and next steps with agentic AI. Laurent, back to you.
September 2025

The Foundational Elements to Innovating in Your Business with AI


Discover why most AI pilots fail—and what leading enterprises are doing differently to scale generative and agentic AI with real ROI. Learn how companies are tackling data quality, moving from pilot to production, and using AI to cut costs, boost accuracy, and deliver measurable business outcomes.


Key points:
  • AI is no longer optional: Companies see generative and agentic AI as a way to optimize expenses and accelerate innovation.
  • Why 95% of pilots fail: Most organizations either pick the wrong problems to solve or fail to integrate AI from pilot to production. Success demands the right culture, strategy, people, and processes.
  • Data is the foundation: “Garbage in, garbage out” still applies—high-quality, trusted, and well-managed data is essential to AI success.
  • Real-world proof: Global financial services firms using agents to improve regulatory reporting accuracy, throughput, and efficiency—AI that delivers tangible business value.
  • What high performers do differently: They treat AI as a strategic pillar, set clear KPIs, empower first-line managers, foster collaboration, and know when to build vs. buy.
Tia White
Technical Advisor, AWS Utility Computing
Sébastien Paquet
VP of AI Strategy, Coveo