Welcome to Relevance 360. I'm John Ragsdale, and I'm going to be talking today about learnings from real enterprises about GenAI. I'm here for TSIA, and all of our clients are enterprise hardware and software companies. When GenAI exploded onto the market a couple of years ago, there was so much interest, and I've had so many conversations with companies. And I think it's fair to say that there are some pacesetter companies that are getting amazing results. I know you're hearing from Jeff Harling from Zoom up next; he is an amazing person with a great story. But, unfortunately, there are a lot of companies that are still kind of shopping for use cases and struggling a bit to get value from those initial GenAI investments. So I want to quickly talk through today four things that I am seeing that are really common to the companies who are getting those great results, but also things that I see a lot of companies struggling with. We're going to go through people, process, technology, and data. So let's dive into the people part. I did a survey last year on the roadblocks and obstacles to AI success, and we asked how much companies really trust AI to get involved in creating and executing business strategy. Interestingly, I found that responses to almost all of the other questions in the survey were greatly driven by how much you trust AI. And I know that trust in AI is typically a culture thing, a corporate executive thing. But in general, the companies that have lower trust in AI see a variety of issues. They think that AI is less important to transformation, which I think is going to cause competitive issues. They're less likely to have a cohesive AI strategy and unlikely to have a corporate team driving AI strategy. And they definitely are struggling more to get budget for AI investments.
And I'm talking about this because if you are a manager and you've got an MBO to have a GenAI project up and running and delivering value this year, which I know a lot of you do, and you're dealing with a corporate environment that is mistrustful of AI, you're really going to struggle to get results. So start collecting some great case studies and some ROI stories to help them understand how transformative this technology can be to support organizations. The other angle on people is, of course, employees. And I'm picking on Gartner here, but from a lot of analyst firms, and definitely the media, there have been so many headlines about how GenAI is going to replace jobs. I think this is really a problem, because you're never going to get value from a GenAI investment if employees aren't using it. And if employees are threatened by the technology, it's unlikely they're going to dig in and do everything they can to use it, get value from it, and give feedback on how to improve it. So if you're not doing it already, make sure that you're constantly communicating to your support employees that AI is not going to replace their jobs; it's just going to enhance their intelligence. And we're seeing that technology companies are increasingly selling custom support services like a dedicated account manager or a dedicated support team. This is about people. Customers want to invest in experienced employees who understand the technology and understand the customer environments, and AI is never going to replace that. So make sure that your employees are not threatened by GenAI. It is really about trying to automate and eliminate some of those boring, repetitive tasks. And if you do have some level one employees, maybe there's an opportunity to upskill them and start cross-training them. But definitely, fear of AI as a threat to jobs should not be impacting your ability to get value. So the next topic is process, and this is really a big one.
I mean, knowledge management has always been such a big interest area for support operations. But when GenAI came out two years ago, we started seeing our inquiry volume dropping around KM best practices and KM metrics, and I was hoping that was because companies had it all figured out. But my colleague Val Golovski, our vice president of research for support services, recently published our knowledge management maturity model version three, which now includes GenAI. And what he found in the research leading up to that is that a lot of companies have taken their eye off the ball on some core KM best practices. I don't know whether it's a lack of resources because they're putting those resources on AI projects, or maybe they're mistakenly thinking that AI is going to supplant or eliminate the need for these KM processes. But what we are seeing is that the companies getting the most and fastest return on investment from GenAI have really strong KM programs. And I want to share a couple of examples of that. The first one is linking knowledge articles to cases. This is nothing new; this was something that I implemented for my support team back in the nineties. Yet only eighteen percent of companies are requiring agents to link every closed case to one or more knowledge articles. And this is problematic for machine learning and for AI, because you've got all of this great content in the case: the details of what's going on, error logs, environmental things. If you are linking a knowledge article to that case, it's making a strong association between specific issues and questions and the correct answer to those questions. And having that positive linkage is going to really accelerate the accuracy of your large language model, and of GenAI in general.
And in addition to that, we see that the eighteen percent of companies who are requiring that linkage of knowledge articles to cases are averaging considerably higher customer satisfaction with self-service, and they're getting higher explicit deflection rates. So this is definitely a practice that you should consider introducing if you're not doing it already. Another thing is dedicated knowledge authors. I know that there are some knowledge management methodologies that say you don't need dedicated knowledge authors, that everyone's responsible for creating content. Well, I do agree everyone's responsible for creating content. But let's be honest, not all tech support agents are really that great at authoring content. And the nineteen percent of companies that have dedicated knowledge authors tend to see improved deflection, higher first contact resolution, higher first day resolution, and overall reduced resolution times. In reference to GenAI, these dedicated knowledge authors are not only proactively identifying content gaps and filling them, but they're taking all of that content your support techs are authoring and making sure it's well written, easy to consume, and correct. And the higher quality your content is, the more you're going to accelerate your ability to get great answers and great suggestions from a GenAI tool. The third area, which luckily is a standard practice, is having a knowledge management program manager, and two thirds of companies are doing that. These people's responsibility is making sure that you're adhering to all of the processes and that you're being really timely in getting new content created and published, and publishing rate is really, really critical.
So, again, if you're making sure that content is there and that you're following all of those processes, you've got a much better foundation on which to build an implementation of a large language model and GenAI, and you're definitely going to see faster results. Moving to the technology area, I am not going to do a deep dive on GenAI technology. I think Coveo is doing a great job of that in this event. But the thing I wanted to talk about, which I see creating a lot of confusion in the market, is just the number of options available. In the last year, equity firms and venture capitalists have invested fifty six point seven billion dollars in GenAI companies. It is truly the most revolutionary thing I have seen in my thirty years in Silicon Valley. And truly, anybody in a garage who came up with an idea was able to get funding and bring a product to market. So the good news is there are a lot of companies to pick from. The bad news is there are a lot of companies to pick from. And, unfortunately, there's not always a lot of differentiation or anything special about these products. Now, I don't want to knock startups here. Startups drive innovation, I've got a lot of clients that are startups, and I hope I'm not offending you. But if you are looking at GenAI startups and evaluating their tools, there are three questions I would really encourage you to ask. The first is: are they enterprise class? Are they working with larger companies? Do they have scalable references? Are they selling to companies like yours? Another thing I found is that some of them are only working with employees; they're not really doing anything customer facing. And that's really critical if you want to start using GenAI for self-service. The biggest area that I struggle with is that so many GenAI startups are just taking open source GenAI models, white labeling them, and selling them without adding any special sauce to the product.
And so if there are no unique capabilities, no intellectual property that they're using to extend that, they're not really coming up with a compelling product. It makes you question: should I just be building this myself, because I could do the same thing they did? So I challenge these startups all the time to articulate what is unique and differentiating about their product. And I have to say, a lot of them cannot come up with an answer to that question. The other issue I have is that, again, I'm working with complex hardware and software companies. And if you're using open source GenAI large language models, they are not trained on this. If you're going to take one of those and start training it on your internal content, it is going to take a very long time, maybe twelve to eighteen months, to really start getting complex diagnostic recommendations out of these tools. Now, there are some startups that are focused on very niche industries like industrial equipment or telecommunications equipment, and they're easily finding an audience. That's something you should really push for, because if their LLM is built on content from other companies similar to yours, and I know everybody's stuff is unique, but there is enough commonality, it is going to allow you to train it on your specific content and start getting results in even four to six weeks, instead of maybe a year or more. So keep this in mind. I love that you're talking to startups, but definitely make sure that they're bringing something to the table. And I can tell you that when we survey our members about where they are buying GenAI, the majority, eighty one percent, are looking to their incumbent vendor for intelligent, cognitive, analytics-based search, whatever you want to call it, but companies like Coveo.
Because if you've been using a product like this, their core search platform has so much AI, machine learning, and natural language processing built into it that while you've been using it, it has already learned what your good content is and what your most useful content is. It understands the vocabulary of how customers ask questions. It knows what content to associate with certain questions. So putting a GenAI product on top of all of that learning is going to give you almost immediate results. It's also going to eliminate that long period of training, erroneous answers, and the hallucinations that everyone worries about. So if you're already using one of these intelligent search platforms, I would really recommend looking at their GenAI capabilities. And I promise you, in most cases, that is really going to accelerate your time to value. The final topic I want to cover is data. GenAI is like every other AI wave that's come before it: it has this core issue of garbage in, garbage out. The information generated by a GenAI tool can never be accurate if you're training it on really poor content. You know, this is always a problem with any technology, but I think it is especially front and center for GenAI. We had Doug Schmidt from Dell give a keynote at one of our conferences last year, and Doug's message was that service leadership has to become their own CIO when it comes to AI and GenAI. You can't assume that IT is going to take care of this. You've got to take responsibility for understanding your data, making sure the data is good, and understanding your various data sources and the value of each one. And that is so true for GenAI. Another data point from the AI survey I did last year: when we asked what the data roadblocks or obstacles to implementing AI and GenAI were, the top two were data quality (redundant, duplicate, vintage data) and centralizing data from various systems.
These two things are really at the core of why many companies struggle to get value from GenAI. And to drive that point home, I have a data point from Val Golovski from the work he has recently done. He surveyed our members to ask: when did you last review or audit your knowledge content? Twenty two percent of companies were honest and said never; twelve percent said more than eighteen months ago; eighteen percent said more than a year ago. You think about release cycles in cloud companies becoming monthly, and your content is evolving so quickly. Your knowledge bases are filled with duplicate, outdated, and poorly written content. And if you're not taking the time to clean that up, indexing all of that garbage content is really going to pose problems for getting any sort of value from a GenAI implementation. Obviously, companies that have those dedicated knowledge authors are working on this all the time. But if you're one of those twenty two percent of companies that has never cleaned up your knowledge content, this would be the time to do it. And if you don't have dedicated knowledge authors, take a couple of folks who are really good at authoring knowledge and give them a special project for a couple of months to go start cleaning that up. Another thing you need to think about is that, although it's getting easier and easier to index content wherever it is stored, having dozens of knowledge repositories is problematic. You're going to have so many different formats and voices and so much duplicate content that I would really encourage you to start pushing people toward authoring in fewer environments, because it makes it easier to reinforce specific formats, easier to audit and clean up that content, and helps you avoid having so much content duplicated in so many different places. This is probably the most painful thing. I've been through this myself, I know.
But if you will take the time to really do a thorough cleanup of your knowledge content and put some processes in place to do this regularly moving forward, it's really going to accelerate that time to value. So I would encourage you to do what you can to avoid that garbage in, garbage out problem. I'm ready to close, and I just want to recap some of these points for you. When it comes to people, I think we really have to build a culture where we're asking our employees to be agents of change. We want them to embrace this new technology. We want them to innovate with it as much as they can. And I know that change is hard, but pacesetter companies are always transforming, and constant change is just part of their culture. If you haven't had an outside review of your KM processes and your KM program for a while, it's probably a good time to do that. Obviously, KCS does audits, TSIA does audits, and there are a lot of great knowledge consultants out there. But you should really have someone come in and take a look and see if maybe some of your processes are outdated, or maybe you've been ignoring things that you shouldn't have, and all of that investment is really going to pay off in time to value for GenAI. When we're looking at technology, I would really start with your incumbent vendors. What do they have existing? What is on their roadmap? In most cases, even if a feature you really, really want is still six months away on their roadmap, you're probably better off waiting for it, because there's a lot of risk involved in going with a startup that may say they have that feature. But is it really in production? Are you going to be the beta tester for that feature? So, again, I love startups. I think we need to always find use cases to test startups and give them some real world experience.
But balance your risk with what you're looking for, and make sure you're going through those questions I said to ask them, to make sure it's enterprise class and scalable. And finally, you've got to get to work on those knowledge bases. Twenty two percent of companies have never cleaned up their knowledge content, and I think that is probably the single thing you can do to get more value faster from a GenAI implementation. So I know it's not fun, but it's definitely a project that you need to get to work on. Well, I thank you very much for joining me. I hope that this was useful. I know it was a quick presentation; I've got a lot of data in those slides, so you can download them later and spend a little time on them. I've got my email address here, and I'm certainly available on LinkedIn, so if you have any follow-up questions, feel free to reach out. I'd like to give a shout out to Tracy Carson and Patrick Martin for inviting me to be a part of Relevance 360, and a big thank you to you for taking time to join my session. So, enjoy the rest of the event and have a great day. This is John Ragsdale from TSIA. Thank you very much. Bye bye.
March 2025

The Keys to GenAI Readiness: Learnings from real enterprises

a Relevance 360 (R360 Folder) video
John Ragsdale
Distinguished Researcher and Vice President of Technology Ecosystems