Prompted by the meteoric popularity of ChatGPT, generative AI (GenAI) has captured the attention and imagination of millions from lay people to global corporations. 

Whether it’s surfacing technical information, offering product advice, or providing step-by-step assistance, GenAI has quickly become one of the most talked-about technologies in business. 

But as with many disruptive technologies, it also opens the door to risks that CIOs must address and mitigate. We look at the key CIO-level issues to address when implementing GenAI, so you can fully realize the potential of the tool for your business.

The Hype Around GenAI

According to a Gartner poll of more than 2,500 executives, 45% report that the public attention around ChatGPT led them to increase their AI investments. And 70% stated their enterprises are currently exploring GenAI. 

While organizations race to adopt GenAI, at the same time they’re concerned about their own readiness and the risks of this new technology. About two-fifths of executives in a Google survey felt a high degree of urgency to adopt GenAI, but 62% say their organization lacks critical AI skills to fulfill their strategies. Many also voice worries about the risks of using these tools in their businesses, with top concerns being inaccurate information (70%) and bias (68%).

As with other disruptive technologies, the initial hype must be weighed against the risks and challenges of applying the tech to businesses. 

GenAI Risks and Hurdles for CIOs

CIOs are in the difficult position of figuring out the best ways to take advantage of the quickly developing tech while making sure to mitigate the risks. 

Through interviews with more than 50 CIOs like you, we’ve identified the biggest headaches to overcome:

Headache 1: Privacy of Public Generative Engines

The widespread availability and ease of use of public GenAI have made it easy for anyone to get seemingly accurate responses from these models. But these public engines save and store chat history to further train their models, creating a significant risk for enterprises: proprietary information can be exposed.

Security architecture company Layer found that 15% of employees have pasted company data into GenAI tools such as ChatGPT. Six percent have pasted sensitive data, with 4% doing so weekly and 0.7% multiple times a week. 

A real-world example of this issue occurred at Samsung Electronics, which banned employee use of public-facing generative AI tools after engineers accidentally leaked internal source code into ChatGPT. Several companies, including Apple and Bank of America, have also banned or restricted the use of public-facing models by their employees.

But banning something doesn’t mean employees won’t try to use it. The risk of employees entering private and sensitive information, such as source code, customer meeting notes, or sales details, into public GenAI solutions remains a significant problem for enterprises.

Headache 2: Proprietary Enterprise Data

Public GenAI platforms have parsed the internet. But they haven’t parsed the unique data your organization stores in popular SaaS platforms like Salesforce, SAP, SharePoint, or Dynamics 365.

If you want GenAI technology to provide answers about your enterprise’s specific products, services, and organization, that information needs to be indexed. We’ve already covered why it’s an issue for enterprises to let public GenAI platforms access their data. 

Building connectors to index every complex system in a unified way is difficult, time-consuming, and expensive. 

Headache 3: Security of Generated Content

Public GenAI tools open up businesses to new security risks because they don’t have built-in security layers for enterprises. In our recent GenAI Workplace Survey, we found that 71% of senior IT leaders believe GenAI could introduce new security risks in their organization.

If enterprises don’t have a secure index with permissions and roles, generated answers can surface content users shouldn’t see. For example, say you connect your payroll system to one of these unsecured GenAI platforms and ask, “What is my salary?” The platform could generate a list of every salary in your organization, with no controls protecting private information. 
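The underlying fix is permission trimming: filter retrieved documents by the user's roles *before* any text reaches the generative model. A minimal sketch of the idea, where the toy payroll index, role names, and function names are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Document:
    content: str
    allowed_roles: set  # roles permitted to see this document

def retrieve_for_user(index, query, user_roles):
    """Return only the documents this user is entitled to see.

    Trimming happens before generation, so an answer can never
    draw on content the asking user shouldn't access.
    """
    matches = [doc for doc in index if query.lower() in doc.content.lower()]
    return [doc for doc in matches if doc.allowed_roles & set(user_roles)]

# Hypothetical payroll index: each record carries its own ACL.
index = [
    Document("Salary of J. Doe: 85,000", {"hr", "jdoe"}),
    Document("Salary of A. Smith: 92,000", {"hr", "asmith"}),
]

# An ordinary employee asking about "salary" sees only their own record;
# an HR role would see both.
visible = retrieve_for_user(index, "salary", ["jdoe"])
```

The key design choice is that access control lives in the index, not in the prompt: no instruction to the model can leak a document that was never retrieved.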

Headache 4: Multiple Content Sources

Public GenAI pulls content from different sources when providing an answer. Its power increases with the ability to access more content. 

And a large language model (LLM), the technology that powers GenAI, is only as good as the content from which it draws. It’s the difference between researching a topic from a single book versus digging into an entire library. But many companies are ill-prepared to make use of generative AI because their content is spread across multiple, siloed repositories. 

In this year’s Workplace Relevance Report, we found that 89% of respondents search between 1 and 6 sources to find needed information to do their jobs — an increase of 7 points from last year. If that’s how difficult it is for your employees, imagine what kind of experience you’re offering your customers.

If you want your GenAI tool to leverage that content, you’re forced to build connectivity into a variety of data sources. That’s incredibly complex, time-consuming, and expensive.
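One common pattern for taming that complexity is a shared connector interface: every source implements the same small contract, so the indexing code never cares where content came from. A minimal sketch with hypothetical sources and names:

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """Common interface every source connector implements, so the
    index can ingest content uniformly regardless of origin."""
    @abstractmethod
    def fetch(self):
        """Yield (doc_id, text) pairs from the source."""

class WikiConnector(Connector):
    def __init__(self, pages):
        self.pages = pages
    def fetch(self):
        yield from self.pages.items()

class CrmConnector(Connector):
    def __init__(self, records):
        self.records = records
    def fetch(self):
        for record in self.records:
            yield record["id"], record["notes"]

def build_unified_index(connectors):
    # One flat index keyed by document id, regardless of source.
    return {doc_id: text for c in connectors for doc_id, text in c.fetch()}

index = build_unified_index([
    WikiConnector({"wiki:faq": "How to reset a password..."}),
    CrmConnector([{"id": "crm:42", "notes": "Customer asked about pricing."}]),
])
```

Each new repository then costs one connector class rather than a bespoke integration, which is exactly the engineering burden the paragraph above describes.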

Headache 5: Always Up-to-Date Content

The timeliness and relevance of content is one of the biggest hurdles to overcome in making use of GenAI in enterprises. 

No one would want their tax returns to be prepared based on last year’s tax law. In the enterprise space, no company would want to provide their customers or employees with out-of-date information. 

An AI-powered chatbot can only answer a question or prompt by drawing from the fixed data on which it was trained. But for the sake of the customers looking for answers, businesses must ensure information is current, relevant, and high quality.
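The usual remedy is retrieval-augmented generation: instead of relying on what the model memorized during training, fetch current documents at question time and ground the answer in them. A minimal sketch, where the knowledge base, the fetch function, and the stand-in "model" are all hypothetical:

```python
def answer_with_retrieval(question, fetch_current_docs, llm):
    """Ground the model in content fetched at question time, so answers
    reflect today's documents rather than stale training data."""
    docs = fetch_current_docs(question)  # hit the live index, not the model's memory
    context = "\n".join(docs)
    prompt = (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)

# Stand-ins for a real index and model, for illustration only.
kb = {"returns": "Returns are accepted within 60 days as of 2024."}
fetch = lambda q: [text for key, text in kb.items() if key in q.lower()]
echo_llm = lambda prompt: prompt.splitlines()[2]  # toy "model": echoes the context line

answer = answer_with_retrieval("What is the returns policy?", fetch, echo_llm)
```

Because the content lives in the index rather than in model weights, updating an answer is as simple as updating the document, with no retraining required.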

Headache 6: Accuracy at Scale

GenAI solutions have the ability to confidently produce factual errors that appear accurate, a phenomenon called “hallucinations.” 

Hallucinations are now well documented and stand as one of the greatest obstacles for enterprises that want to adopt GenAI. They may stem from biases the AI picked up or from limitations of its training data. 

For businesses, these inaccuracies, biases, or misleading information can have significant consequences including an erosion in customer trust and opening businesses up to ethical and legal ramifications. 

Headache 7: Verifiability of Accuracy 

Whether they answer to healthcare professionals or government bodies, companies need compliance and ways to verify the accuracy of GenAI output.

But researchers do not yet fully understand how the algorithms in deep-learning models such as LLMs work. As a result, the logic behind how an AI arrived at an answer is often impossible to trace. 

Headache 8: Consistency Across Search and Chat Channels

Search and GenAI need to work together to deliver consistent experiences across all channels. If an enterprise provides both chat and search experiences, customers (and employees) should not receive different answers in these channels. This is crucial to offering great experiences.

Salesforce-owned MuleSoft found that 81% of customers across five industries (banking, insurance, retail, healthcare, and the public sector) believe organizations provide a disconnected experience, failing to recognize preferences across touchpoints and to provide relevant information in a timely manner. 

Why implement a new technology without first making sure the conversation you’re having with your customer is consistent?

Headache 9: High Costs of GenAI 

Solving all of the above is easy, right? If only. Organizations seeking alternatives to public GenAIs will need a platform that handles all of the aforementioned challenges. But do you build it yourself — or buy it?

Building such an infrastructure raises several challenges: it needs security features and role-based access permissions; an intelligence layer that can disambiguate queries and understand user intent; native connectors into popular organizational platforms; and an API layer that works with any front end. The expertise and experimentation required to build this at scale can take years, hurting both time to market and the resources left over to innovate. 

After all, Coveo’s Relevance Cloud has been almost a decade in the making.

Getting Enterprise-Ready

There are some important steps you can take to address the issues of GenAI and get your enterprise ready. We believe CIOs need a platform that securely unifies their most relevant content at scale and generates answers from it. 

That output can be fed into the LLM of your choice that has been trained on your data. The answers can then be delivered coherently in search, navigation, recommendations, and other customer channels. 

To achieve this, we believe enterprises need to act on the following:

  1. Securely unify and enrich content across internal and external content sources (content and data layer)
  2. Generate in-session, personalized search, navigation, content recommendations, chats and conversations (relevance intelligence layer)
  3. Provide these experiences in any app across a digital journey (engagement apps layer)

Ending Thoughts

Mitigating the risks of GenAI will help CIOs like you protect the well-being of your customers and foster greater trust. 

Companies that apply GenAI effectively while maintaining accuracy and implementing this new tech responsibly will find themselves ahead in the quickly developing AI landscape.

Get the Cure
Ebook: GenAI Headaches: The Cure for CIOs

Learn more about Coveo’s work with LLMs and GenAI — or request a demo today to get your unique questions answered. 

The World’s First Relevance AI Platform Offering Enterprise-Ready Generative Answering