Generative AI (GenAI) has moved from the realm of science fiction into tech teams and up to the highest levels of executive management.

McKinsey’s latest data reveals that over a quarter of C-suite executives are personally using GenAI for work. AI is on board agendas, and it’s making its way into budgets: according to McKinsey, 40% of those surveyed plan to increase investment in AI specifically because of advances in GenAI.

From where we’re standing, GenAI has a bright (practically blinding) future, but it’s not without risks. In its nascent form, GenAI presents a host of obstacles, including inaccuracy (famously referred to as “hallucinations”): it makes things up, sometimes with spectacularly bad consequences.

If you’re part of a large organization, inaccuracy may be the least of your worries. Plagiarism, bias, data security, and intense pressure to upskill and hire employees with AI experience are also key concerns with AI adoption.

A year after OpenAI’s ChatGPT changed everything about how we do business, the question is not whether businesses will embrace and adopt GenAI, but how enterprises can ground it in reality.

The Challenges of GenAI for Enterprises

CIOs are under pressure to deploy GenAI quickly and effectively, but this technology comes with some significant challenges, and ignoring them is not an option. Before you can pinpoint and invest in an enterprise-level use case for GenAI, it’s important to understand the risks, which include:


Security Risks

Public GenAI tools open businesses up to new security risks because they lack built-in, enterprise-grade security layers. In our recent GenAI Report, we found that 71% of senior IT leaders believe GenAI could introduce new security risks in their organization.

If enterprises don’t have a secure index that accounts for permissions and roles, generated answers can surface content users shouldn’t see.


Fragmented, Siloed Data

Public GenAI platforms have parsed the internet. But they haven’t parsed the unique data your organization stores in popular SaaS platforms like Salesforce, SAP, SharePoint, or Dynamics 365. And the power of a large language model (LLM), the technology that powers GenAI, increases with access to more content. If you want GenAI technology to provide answers about your enterprise’s specific products, services, and organization, that information needs to be indexed securely.

And an LLM is only as good as the content it draws from: it’s the difference between researching a topic from a single book and digging into an entire library. But many companies are ill-prepared to make use of GenAI because their content is spread across multiple, siloed repositories, which impacts the timeliness and relevance of the content GenAI can access.

Building connectors to index every complex system in a unified way is difficult, time-consuming, and expensive. 


Hallucinations

GenAI solutions can confidently produce factual errors that appear accurate, a phenomenon called “hallucinations.”

For businesses, these inaccuracies, biases, and misleading answers can have significant consequences, eroding customer trust and exposing the business to ethical and legal ramifications. Whether they answer to healthcare professionals or government bodies, companies need compliance and ways to verify the accuracy of GenAI output.


Inconsistent Experiences Across Channels

Search and GenAI need to work together to deliver consistent experiences across all channels. If an enterprise provides both chat and search experiences, customers (and employees) should not receive different answers in different channels. This consistency is crucial to offering great experiences.

Why implement a new technology without first making sure the conversation you’re having with your customer is consistent?


Build vs. Buy

Solving all of the above is easy, right? If only. Organizations seeking alternatives to public GenAI tools will need a platform that handles all of the aforementioned challenges. But do you build it yourself, or buy it?

Building such an infrastructure yourself raises several issues. It needs the necessary security features and role-based access permissions; an intelligence layer that can disambiguate and understand user intent; native connectors into popular organizational platforms; and an API layer that works with any front end. The expertise and experimentation needed to build this at scale can take years, impacting both your time to market and the resources you have available to innovate.

You may wonder if there’s a use case for GenAI that’s truly enterprise-ready. The answer is yes, but it requires adopting a grounded GenAI model.

Choosing a Grounded GenAI Tool

We feel grounded when we’re standing on something solid and dependable – when we can count on the laws of gravity and physics to keep us from drifting away. Think of GenAI grounding in similar terms. It’s about choosing a GenAI tool grounded in facts and high-quality content. 

In your business, you shape reality for different audiences, including your employees, customers, shareholders, and partners. This reality is built on a foundation of good content. To achieve accurate grounding, follow a few best practices:

Relevance, in the context of information retrieval, is an integral part of enterprise search technology. A relevant system matches each user query with content that user is permitted to access, and pairs that result with industry-leading semantic search capabilities.

AI comes into play with ML-powered ranking and personalization algorithms that help identify the most appropriate content, which is then fed to the LLM as “grounding context.”

The retrieval layer identifies the chunks of content most relevant to the user and the query. Rather than sending full-length documents to the model, the grounding context (small amounts of information gleaned from larger documents) is limited in size. Retrieving information in small chunks rather than sending entire documents is important because it ensures you won’t exceed the LLM’s context window size.

“Chunking” – the process of splitting documents into smaller, meaningful pieces of text — is how you provide grounding for your generated answers. It’s necessary to do this to keep costs manageable as you scale your GenAI system.  
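As an illustration, one simple chunking strategy splits each document into fixed-size, overlapping pieces so that meaning isn’t lost at chunk boundaries. The sketch below is a minimal, word-based example; the default chunk size and overlap are arbitrary assumptions you would tune to your LLM’s context window and your content:

```python
# Minimal sketch of a chunking strategy: fixed-size, overlapping
# word-based chunks. Default sizes are illustrative assumptions.

def chunk_document(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split `text` into chunks of roughly `chunk_size` words,
    repeating `overlap` words between consecutive chunks."""
    words = text.split()
    if not words:
        return []
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # the last chunk already reaches the end of the text
    return chunks
```

A production system would typically split on semantic boundaries (headings, paragraphs, sentences) and count tokens with the model’s own tokenizer rather than counting words.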

Creating a “Chunking” Strategy

So now you know that a comprehensive document chunking strategy is foundational to deploying GenAI at the enterprise level. It’s what allows you to balance the two components of GenAI-assisted information retrieval:

  1. Retrieving relevant content, based on a user’s query, as grounding context within the limits of the LLM’s processing capabilities.
  2. Ensuring that document chunks are appropriately tied to semantic search – that is, they should make sense to the searcher even though they’re smaller pieces of larger documents.
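To make the first point concrete, here is a hedged sketch of assembling grounding context from chunks already ranked by relevance (for example, by semantic search), stopping before an assumed token budget is exceeded. The word-count proxy for tokens is an assumption; a real system would use the LLM’s own tokenizer:

```python
# Illustrative sketch: select ranked chunks as grounding context
# without exceeding an assumed token budget for the context window.

def build_grounding_context(ranked_chunks: list[str], token_budget: int = 300) -> list[str]:
    """Take chunks in relevance order until adding another chunk
    would exceed the token budget."""
    selected, used = [], 0
    for chunk in ranked_chunks:
        cost = len(chunk.split())  # crude word-count proxy for tokens
        if used + cost > token_budget:
            break
        selected.append(chunk)
        used += cost
    return selected
```

Because chunks arrive in relevance order, stopping at the budget keeps the most useful material and discards only the lowest-ranked context.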

But equally important is a focus on correct prompt engineering. Prompt engineering is the process of creating and optimizing the cues that get GenAI to generate the most appropriate answers. Developers employ prompt engineering when building LLM-powered applications to make them more effective and less prone to errors.

Prompt engineering involves collecting sample data to create an evaluation dataset, building prompts into the system, and then running those prompts against the evaluation dataset to test them.
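That workflow can be sketched as a small evaluation loop. Everything here is hypothetical scaffolding: `call_model` stands in for whatever GenAI endpoint you actually use, and the substring match is a deliberately crude stand-in for real answer scoring:

```python
# Hypothetical evaluation loop: run each prompt in an evaluation
# dataset through the model and report the fraction answered
# correctly. `call_model` is a stand-in for a real GenAI endpoint.

def evaluate_prompts(eval_dataset: list[dict], call_model) -> float:
    """Each example has 'prompt' and 'expected' keys; an answer counts
    as correct if it contains the expected text (case-insensitive)."""
    correct = 0
    for example in eval_dataset:
        answer = call_model(example["prompt"])
        if example["expected"].lower() in answer.lower():
            correct += 1
    return correct / len(eval_dataset) if eval_dataset else 0.0
```

Tracking this score as prompts and input data change is what lets you tell whether a prompt revision actually improved the system.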

Selecting the right words, expressions, and symbols is important because getting it wrong means GenAI might just serve up a plate of gibberish. Several key considerations underpin a good prompt engineering approach, and these include:

  • Accounting for information deficits – Configure your model so that it can’t respond to queries if it doesn’t have the right information available. This prevents hallucinations and ensures output is grounded in real information. 
  • Sourcing all responses – Supply citations or context with the generated response which supports the generated answers. This shows users that responses are reliable and credible. It also allows people to take a deeper dive if they want to explore a topic more thoroughly. 
  • Using appropriate language – Use language that’s appropriate in the sense that it’s contextually relevant, professional, and tied to your specific industry and use case. You should also set boundaries that prevent the model from answering questions about offensive or inappropriate content.
  • Being consistent and clear – Prompts should be articulated clearly, adhering to proper grammar and spelling conventions. They should also maintain a consistent format across all engineers within an organization, ensuring uniformity and ease of understanding.
  • Conducting testing – Implementing functional testing is the only way to know if the model works and how various changes impact GenAI output. The goal of testing is also to monitor model reliability, consistency, and accuracy as prompts are revised and input data is updated. 
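The considerations above can be combined into a single prompt template. The sketch below is hypothetical: the exact wording, the refusal phrase, and the (source_id, text) shape of the grounding context are all assumptions rather than any vendor-specific format:

```python
# Hypothetical prompt template illustrating the practices above:
# answer only from grounding context, refuse on information deficits,
# and cite a source id after every claim.

def build_prompt(question: str, context_chunks: list[tuple[str, str]]) -> str:
    """`context_chunks` is a list of (source_id, text) pairs retrieved
    as grounding context for the user's question."""
    context = "\n".join(f"[{src}] {text}" for src, text in context_chunks)
    return (
        "Answer the question using ONLY the context below.\n"
        "If the context does not contain the answer, reply exactly: "
        "\"I don't have enough information to answer that.\"\n"
        "Cite the [source id] after every claim you make.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

Keeping the template in one shared function is one way to enforce the consistency point above: every engineer sends the model the same structure, grammar, and boundaries.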

From an enterprise-readiness perspective, the importance of prompt engineering for GenAI can’t be overstated. Poorly configured prompts can produce irrelevant outputs, introduce bias, create security breaches, and expose sensitive data to unauthorized users.

The Future of GenAI for Enterprises

In a solidly post-ChatGPT world, creating an enterprise-ready GenAI solution is top of mind for executives, tech leaders, board members, and employees across all types of organizations. Innovation is what puts your company at the forefront of the competition, but you have to act responsibly. Dynamic grounding helps you do that.
Learn more about the different challenges that Coveo Relevance Generative Answering can help you solve — and get questions answered in real time by a search expert by requesting a demo!

Enterprise Tested. Trusted. Ready.
Coveo Relevance Generative Answering