In one short quarter, mainstream news outlets like the New York Times, The Washington Post, NBC News, and Slate covered generative AI applications like ChatGPT, Jasper, and DALL-E extensively. These tools, built on large language models (LLMs) and related deep learning techniques, are being touted as valuable across virtually every use case. The common thread is improving the human experience, particularly customer self-service.
So let’s look at how generative AI can help improve your self-service offering. Through deep learning, generative AI leaps beyond just returning links to actually providing the answer. Generative AI models are trained on vast quantities of existing content, which gives the underlying LLM the foundation it needs to produce new content. From there, a smaller, more specific dataset can fine-tune the model, adapting it to a new use case.
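As a loose analogy (not an actual LLM training loop), the two stages can be sketched as a base model built from a large general corpus, then updated with a small domain-specific dataset so your vocabulary starts to dominate:

```python
from collections import Counter

def train(corpus):
    """'Pretrain' a toy model: just word frequencies over a large corpus."""
    model = Counter()
    for doc in corpus:
        model.update(doc.lower().split())
    return model

def fine_tune(model, domain_corpus, weight=5):
    """Adapt the toy model: domain terms are upweighted so they stand out."""
    for doc in domain_corpus:
        for word in doc.lower().split():
            model[word] += weight
    return model

base = train(["the cat sat on the mat", "a general corpus of text"])
tuned = fine_tune(base, ["reset your router password"], weight=5)

# Domain vocabulary now outranks generic terms in the toy model.
print(tuned["router"] > tuned["cat"])  # True
```

Real fine-tuning adjusts millions of model weights rather than word counts, but the principle is the same: a targeted dataset shifts the model toward your use case.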
Yet it can be difficult to know where to start when choosing a generative AI tool for your organization.
We recently wrote about the importance of preparing your organization for generative AI. You can do this by taking some important steps:
- Deciding on a business use case
- Focusing on a Knowledge Centered Service® (KCS)* strategy
- Investing in foundational technology like intelligent search
When you’re ready to use tools like ChatGPT to empower customer self-service, there are some questions you should ask.
Improving Self-Service Success by Building Customer Confidence
Nearly 80% of self-service resolution is driven by three factors: clarity, credibility, and confirmation. With a generative AI model, it’s not just about the generated information being correct. You also want your customers to trust that it’s correct.
When getting customers comfortable with this technology, you need to consider the following three tenets:
1. Show your work
It’s easy to ask a tool like ChatGPT a question and get an answer that looks extremely accurate. But looks can be deceiving. A generative AI system, using deep learning and LLMs to craft a natural language response, can sometimes provide false information. When AI-generated content that’s false or pulled from bad sources is presented as factual, it’s called a “hallucination.” Avoiding hallucinations takes reliable input data, reinforcement learning, and an accurate language model, all of which should go into training any system you plan to use in an enterprise setting.
Allowing customers to check where an answer came from and the logic behind it can increase their confidence. Do this by providing the attribution and source links wherever possible. Then they can visit the source if they want to understand how an answer was assembled.
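The shape of the idea can be sketched in a few lines (hypothetical data structures, not any vendor’s API): return the generated answer together with the source documents it was grounded in, so the UI can render attribution links:

```python
def answer_with_sources(question, documents):
    """Pick supporting documents by naive keyword overlap and
    return the answer text alongside attribution links."""
    terms = set(question.lower().split())
    sources = [
        doc for doc in documents
        if terms & set(doc["text"].lower().split())
    ]
    return {
        "answer": sources[0]["text"] if sources else "No answer found.",
        "sources": [doc["url"] for doc in sources],  # shown to the customer
    }

docs = [
    {"url": "https://example.com/kb/42",
     "text": "Reset the router by holding the button for 10 seconds."},
    {"url": "https://example.com/kb/7",
     "text": "Billing questions are handled in the account portal."},
]
result = answer_with_sources("How do I reset my router?", docs)
print(result["sources"])  # ['https://example.com/kb/42']
```

A production system would use semantic retrieval rather than keyword overlap, but the contract is the same: every answer carries the links that justify it.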
2. Guide customer journeys
Giving customers guidance on how to use generative AI technology can help improve their confidence in its responses. It can also yield more relevant answers. There’s a learning curve with generative AI tools, which is why prompts can help users have more productive interactions.
Generative AI, and the technology that makes it work, is entirely new to many people. Guidance like visual cues, prefilled forms, or written instructions helps users understand how to interact with the technology. And, in return, this helps the generative AI model refine future answers.
3. C.R.E.A.M. (Content Rules Everything Around Me)
The quality of generated content is a critical part of what makes a generative AI solution successful. It’s not enough to have an answer that looks accurate. It needs to pull from high-quality sources to ensure that it’s factually and contextually correct.
Your company’s knowledge management process — and all the content that forms its foundation — is the baseline for generating answers. Without a solid understanding of your existing content, your generative AI won’t have what it needs to generate reliable information. So, this is what we mean when we say “content rules everything around me.”
If you put an LLM on top of an insufficient knowledge base, it won’t yield helpful results.
8 Questions to Choose the Best Generative AI Solution
When choosing a generative AI tool for your business, there are a lot of questions to ask. Some of these questions focus on the technology itself. Others help you boil down the best way to use these tools within your existing customer service framework.
1. Can generative AI eliminate the need for support agents?
No. Generative AI is about augmenting, not replacing, human agents. For a frictionless customer experience, have a plan to manage the digital self-service to human interaction transition — and vice versa.
This could include:
- Instructions on when a customer should escalate an issue
- What AI tools like chatbots can/should handle
- When an issue is complex enough to need problem-solving by an agent
No language model, no matter how good it is, will be up to date enough to know all the answers. Ask how this technology fits into your existing service model and how it can help your live agents, rather than if (or how) it can replace them.
2. How does a given generative AI tool support KCS?
Generative AI can provide more personalized, relevant answers in a fraction of the time of a traditional search. When reviewing vendors, ask how they can support your customers and employees to find the right information more quickly. For example, Coveo has a feature called Smart Snippets. It displays a relevant snippet of information on the search results page. Users can get an instant response to their search query or dig deeper by clicking on the snippet source.
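To illustrate the general idea of snippet extraction (a naive sketch for illustration only, not how Coveo’s Smart Snippets actually work): score each sentence of a result document against the query and surface the best match directly on the results page:

```python
def best_snippet(query, document):
    """Return the document sentence with the most query-term overlap."""
    terms = set(query.lower().split())
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(terms & set(s.lower().split())))

doc = ("Our warranty covers manufacturing defects for two years. "
       "To file a claim, submit the form in your account portal. "
       "Shipping costs are not refundable.")

print(best_snippet("how do I file a warranty claim", doc))
# -> To file a claim, submit the form in your account portal
```

The user sees the most relevant passage immediately, with the full source document one click away.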
3. What is the timeline for implementation?
Integrating generative AI into an existing service model takes time, from pre-implementation tasks like integrating the system into your existing workflow (e.g., portability) and tech stack, to post-implementation fine-tuning, like customizing models for specific tasks.
The most successful implementations start with understanding the scope of the project and asking the right questions. For example, how do you integrate search and question-answering together? How does it contribute to the user journey? These questions can help you narrow choices for a solution that works for your use case and organizational goals.
4. Is this tool enterprise-ready?
Tools like ChatGPT that use LLMs and generative AI need to mature before they’re enterprise-ready. Find out if there’s an administrative wrapper, like intelligent search, as well as other tools that allow you to train and define an LLM to serve as a conversational agent.
Investigate the scalability and maintenance of the solution. You’ll want to make sure that it can handle day-to-day requests without being interrupted. Ask about security measures and how they align to your IT policies. Not many IT departments will let you download code, throw it on the corporate site, and say “go at it.”
5. Can the system leverage pre-trained models and can they be fine-tuned?
Many generative AI vendors provide pre-trained models to help you get up and running quickly. Pre-trained models can be used in the same way as self-trained or custom-trained ones. However, they are usually more limited in terms of customization and may not always fit into an existing service model. If a tool uses a pre-trained model, ask if you can fine-tune it for existing use cases and domain adaptation (i.e., using your vocabulary).
6. What infrastructure is required?
You need the proper infrastructure to support ingesting and feeding the right context into an LLM. Open source embedding models like Sentence-T5 and MPNet can outperform mainstream GPT models in many real-world retrieval scenarios. It’s important to ask about the performance and operational efficiency of the LLM you choose, and to consider the appropriate neural network architecture given the cost/quality constraints of productizing for hundreds of enterprise clients.
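The retrieval step this infrastructure supports can be sketched with toy embeddings (hand-picked vectors here; a real pipeline would compute them with a sentence-embedding model such as MPNet): rank passages by cosine similarity to the query and hand the top hit to the LLM as context:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical precomputed passage embeddings.
passages = {
    "reset instructions": [0.9, 0.1, 0.0],
    "billing policy":     [0.1, 0.8, 0.2],
    "shipping times":     [0.0, 0.2, 0.9],
}
query_vec = [0.8, 0.2, 0.1]  # embedding of "how do I reset my device?"

ranked = sorted(passages, key=lambda p: cosine(query_vec, passages[p]), reverse=True)
context = ranked[0]  # top passage becomes the LLM's grounding context
print(context)  # reset instructions
```

Ingestion, embedding, and similarity search at enterprise scale are exactly the infrastructure questions to raise with a vendor.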
7. Can you build a test model?
Ask the AI vendor if you can build a test model. This should allow you to evaluate the performance of their technology with your existing data before you commit to it. You’ll get a realistic sense of how long it will take to train a model, as well as generated content accuracy and answer speed.
8. Is the retraining schedule appropriate and easy to change?
Machine learning models need to be retrained to avoid performance degradation. A retraining schedule is typically based on intervals of time (e.g., weekly, monthly, or quarterly). It can also be performance-based, with retraining occurring when performance falls below a predetermined threshold.
Ask how the generative AI vendor approaches retraining – is it done manually or implemented automatically based on preset conditions? It should also be easy to change since the conditions and/or requirements of the tool may need to be updated to align with your internal processes and needs.
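A combined time-and-performance trigger like the one described can be sketched in a few lines (illustrative thresholds, not any vendor’s implementation):

```python
from datetime import date, timedelta

def should_retrain(last_trained, today, accuracy,
                   max_age=timedelta(days=30), min_accuracy=0.85):
    """Retrain on a monthly schedule OR when accuracy drops below threshold."""
    return (today - last_trained) >= max_age or accuracy < min_accuracy

# Model trained 10 days ago, but accuracy has slipped below threshold:
print(should_retrain(date(2023, 3, 1), date(2023, 3, 11), accuracy=0.80))  # True
# Fresh model with healthy accuracy:
print(should_retrain(date(2023, 3, 1), date(2023, 3, 11), accuracy=0.92))  # False
```

Both the interval and the threshold should be easy to adjust as your processes change, which is exactly what to verify with the vendor.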
Choose the Right Generative AI Solution For Your Business
Generative AI is a new and exciting tool that will change how we consume information. Make sure you choose the right platform for your business by weighing your needs against your options.
Curious to see how Coveo is handling generative AI?
How do you decide if generative AI has a place in your customer interactions and contact center? Download a free copy of our white paper, Preparing Your Business for Generative AI.