Yahoo’s infamous 2006 “Peanut Butter Manifesto” warned us about spreading resources too thin. Nearly twenty years later, enterprise leaders are making the same mistake with AI. Instead of focusing on a handful of strategic AI agent experiments, teams are launching dozens (sometimes hundreds) of AI pilots simultaneously.

According to MIT research, the failure rate for GenAI pilots is 95%: they never make it to production.

By and large, such widespread failure comes down to approach, process, and integration rather than the viability of AI itself. During a recent Coveo webinar on building AI agents, Isaac Sacolick, President and Founder of StarCIO, lamented the propensity of modern enterprises to “peanut butter spread” AI resources across the organization.

Say what you will about peanut butter as a spread; as an AI implementation strategy, it’s a recipe for failure. In this blog post, we explain why, and why Sacolick’s framework for strategic focus makes a lot more sense.

What the Peanut Butter Problem Means for AI

The manifesto defined the peanut-butter problem as “spreading anything—money, energy, time—too far and too thin to be effective.” Describing Yahoo’s then lack of focus and accountability (his words, not ours), its author lamented “a thin layer of investment spread across everything we do and thus we focus on nothing in particular.”

So why is AI so prone to the peanut-butter treatment? 

  • Excitement and FOMO: Every department wants its very own AI project
  • Tool accessibility: Spinning up ChatGPT experiments is easier than ever
  • Leadership pressure: “We need to be doing something with AI” is a common refrain
  • Democratic impulse: Trying to be “fair” by giving every department equal resources

All of a sudden, you’ve got AI experiments everywhere, but none of the connective tissue required to realize the full potential value of any single one.

Who knew this brand of enterprise peanut butter spread could be so expensive… 

The Hidden Costs of Spreading AI Too Thin

“You spend more time coalescing information and figuring out what’s going on, instead of making early decisions about where to prioritize,” says Sacolick. Put differently, companies with a peanut-butter problem shortchange their potential breakout successes by spreading resources too thin.

In his computer science class at Harvard, Charlie Graham also used peanut butter to articulate the problem:

“By the end, there was peanut butter, jelly, and bread everywhere. No sandwich. But we’d learned the point: you have to be super clear in your instructions or it won’t know what you want.”

Addressing Fortune 500 CEOs, Writer CEO May Habib was less forgiving in her assessment. She characterized leadership’s flawed approach to AI as a category-wide issue, one that’s bleeding billions in the form of doomed AI initiatives.

What gives?

The Math Doesn’t Quite Work

Sacolick asks us to picture ourselves selling a $10M investment to a corporate board for something with only a 5% production success rate (according to industry data). “Ninety-five percent of what you’re working on is just a learning exercise,” says Sacolick. If you have 20 AI experiments going, critical resources are diluted to 5% allocation each.

Under those conditions, reaching critical mass becomes nearly impossible. Someone faster beats you to production. Initiatives repeatedly stall, breeding team fatigue and organizational skepticism.

It’s not a formula built for scale.
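
To make the dilution concrete, here’s a back-of-the-envelope sketch. The even split across experiments is our simplifying assumption for illustration, not a claim from the webinar:

```python
# A fixed resource pool (people, budget, attention) split evenly
# across however many experiments are running at once.
def allocation_share(num_experiments: int) -> float:
    return 1.0 / num_experiments

print(f"{allocation_share(20):.0%} of resources per experiment")  # 5% with 20 running
print(f"{allocation_share(5):.0%} of resources per experiment")   # 20% with only five
```

Cutting the portfolio from 20 experiments to five quadruples the resources behind each one, which is the entire case for capping the count.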

[Image: the 1-in-10 reality for most companies, underscoring the need to focus AI agent experiments]

What to Do Instead: Sacolick’s “Rule of 5” Framework

The Rule of 5 Framework is, at its core, a strategy of portfolio limitation: maintain a maximum of five concurrent AI experiments at any given time. As Sacolick explains: “If we think we can run five experiments, and experiment four wasn’t providing results as fast, we’d pause it and let one of the new ideas come in.”

This isn’t about lack of ambition or even technical capacity, necessarily. It’s about achieving critical mass for the experiments that have traction. As for identifying those, Sacolick recommends evaluating each potential experiment from three central vantage points:

  1. Customer impact: Where can we deliver differentiated value?
  2. Investor value: What drives measurable business outcomes?
  3. Employee experience: What makes work more effective and engaging?
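
In code terms, the framework boils down to a bounded portfolio with a backfill rule. Below is a minimal Python sketch of that rotation, our illustration of the idea rather than anything shown in the webinar (the Experiment and Portfolio types are hypothetical):

```python
from dataclasses import dataclass, field

MAX_ACTIVE = 5  # the Rule of 5: a hard cap on concurrent experiments

@dataclass
class Experiment:
    name: str
    showing_traction: bool = True

@dataclass
class Portfolio:
    active: list[Experiment] = field(default_factory=list)
    pipeline: list[Experiment] = field(default_factory=list)

    def monthly_review(self) -> None:
        """Pause experiments that have stalled, then backfill open slots
        from the pipeline without ever exceeding the five-slot cap."""
        self.active = [e for e in self.active if e.showing_traction]
        while len(self.active) < MAX_ACTIVE and self.pipeline:
            self.active.append(self.pipeline.pop(0))
```

The discipline lives in the cap: a new idea enters only when a slot opens, so the limit, not enthusiasm, governs the portfolio.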

The One-Page Value Proposition

Every experiment should also have a one-page value proposition before starting. This is not a vision statement for the project. Instead, your goal is to answer pointed questions about customer and business importance.

Doing so forces clarity before resources are allocated, and monthly reviews sustain that clarity by ensuring the experiment stays relevant.

What to Include:

  • Problem statement: What specific pain point does this address?
  • Target users: Who benefits and how?
  • Success metrics: What does winning look like?
  • Strategic alignment: How does this connect to business priorities?
  • Resource requirements: What’s the realistic ask?
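
To keep those one-pagers uniform across teams, the template is simple enough to capture as a structured record. A minimal sketch, assuming a Python-based tracker (the field names are ours):

```python
from dataclasses import dataclass

@dataclass
class OnePageValueProp:
    """The five questions every experiment answers before resources flow."""
    problem_statement: str      # what specific pain point does this address?
    target_users: str           # who benefits, and how?
    success_metrics: str        # what does winning look like?
    strategic_alignment: str    # how does this connect to business priorities?
    resource_requirements: str  # what's the realistic ask?
```

Requiring every field to be filled in before an experiment claims a slot is the “clarity before resource allocation” step in practice.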

The Anti-Peanut-Butter Prioritization Process

McKinsey’s 2025 State of AI survey found that 88% of organizations now use AI, but only 6% have achieved enterprise-wide impact that moves the needle on revenue. The ones that do have figured out a sound methodology for prioritization, one that likely follows a phased approach:

[Infographic: a prioritization process for AI agent experiments]

Monthly Portfolio Reviews: The Continuous Churn

Deploy experiments; see what’s working. To measure performance, Sacolick recommends a Flywheel Approach: monthly review cycles (not quarterly or annual) that aim to answer one central question for each active experiment:

“Are we getting closer to the vision?”

How teams answer that question—the qualitative and quantitative data points they procure in support—should inform subsequent go/no-go decisions. 

Decision Criteria:

  • Progress against validation rules
  • Team engagement and participation
  • Early indicators of business value
  • Technical feasibility and integration success

These criteria ought to clarify where and when it’s time to press pause on underperformers. Pausing isn’t a failure—it’s learning and redirecting resources. It makes room for new ideas while keeping the pipeline active and the portfolio limited.

The art of stopping, in other words. 
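
As a thought experiment, the four decision criteria above collapse into a simple monthly rule. The thresholds below are our own illustrative assumptions, not numbers from Sacolick:

```python
def monthly_go_no_go(progress: bool, engagement: bool,
                     value_signals: bool, integration: bool) -> str:
    """Map the four decision criteria to a portfolio action.
    The thresholds are illustrative assumptions, not from the webinar."""
    score = sum([progress, engagement, value_signals, integration])
    if score == 4:
        return "continue"  # clearly getting closer to the vision
    if score >= 2:
        return "watch"     # mixed signals; revisit next month
    return "pause"         # free the slot for a pipeline idea
```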

Adjusting Incentives and Culture

That art includes minding the people involved, not just the processes and technology. Breaking the peanut butter habit, for example, means explicitly addressing executive motivation and structuring performance management accordingly.

Sacolick also emphasizes bringing subject matter experts in early—not just for validation, but to ensure the knowledge bases and search capabilities underlying your agents are accurate. “It’s probably the most important place when we start building search in, as a foundation.”

It means adapting change management, as well, for which Sacolick has three recommendations: 

  • Bring the organization along on why focus matters
  • Celebrate the experiments you choose AND the ones you defer
  • Make resource allocation transparent, not political

Think about it: if somebody’s pet AI initiative gets paused—and if that person is effectively asked to take a corporate bullet for the team—they need to know why and what’s in it for them.

Rule of 5 in the Real World: How Focus Leads to AI Success

These three Coveo customers resisted the urge to spread their peanut butter thin—and built real momentum instead. 

BMO Financial

Here the team concentrated its efforts on shoring up advisor support rather than spreading across all functions. This led to instant (and compliant) answers, as well as measurable productivity gains. Success in this one area laid the foundation for future agent deployments across the business.

Xero

The epitome of not-trying-to-do-everything, Xero focused on digital-first, content-led support. Doing so led to 75% of initial messages handled by AI—and a 30% reduction in human support. Further proof that going deep beats going wide.

Healthcare Clinic

Instead of 50 small AI experiments, this healthcare team focused on one critical use case: clinician access to protocols. As a result, 4,000 clinicians came to rely on AI agents built for a single purpose, driving 60% patient self-service resolution. More importantly, success in this area funded expansion into others.

Relevant reading: Coveo for Agentforce 2.0: Unlocking the Next Wave of Service AI Agents

Common Objections to Rule of 5 Methodology (and How to Respond)

This kind of prioritization will create tension—especially around such a “hot” technology. Which is more or less the point: tension is how organizations sort out which projects are actually worth the investment.

Here’s how to handle the pushback without backing down.

“But everyone wants their project funded”

To this, Sacolick says it’s better to have five successful initiatives than 20 mediocre ones. And the numbers certainly reflect the high probability of mediocrity, if not outright failure (see: MIT data on GenAI project failure rates).

→ Bottom line: Transparent prioritization beats perceived fairness.

“We have the capacity for more”

Easy to say, but is it really true? Once you count integration work, change management, and continuous monitoring, “capacity for more” becomes a far less common refrain.

→ Bottom line: Every business head will make a strong argument for why they deserve resources. Here strong leadership and clarity of purpose are essential.

“What if we pick wrong?”

A fair question, but the answer is built into the framework. Monthly reviews and data-backed project rotation will soon reveal what’s actually working and what’s not. The goal is to learn fast and pivot fast. 

→ Bottom line: The wrong pick with full resources is better than the right pick without.

A Practical Implementation Timeline for the Rule of 5 Framework

In a culture where deploying AI pilots is as easy as spreading peanut butter on bread, implementing a new way of thinking won’t happen overnight. Here’s a reasonable timeline for taking a more targeted approach to prioritizing AI initiatives:

[Infographic: an implementation timeline for AI agent experiments]

Sacolick recommends adopting a portfolio management approach. Use a dynamic visual matrix that shows active vs. pipeline AI experiments. A traffic-light system works well here for progress indicators: green (proceeding), yellow (at risk), red (pause decision needed). Consider simple voting methods to secure leadership alignment on what the priorities really are.
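
In the same hypothetical vein as the earlier sketches, the matrix-plus-traffic-light view can start as something as simple as a text dashboard (the experiment names below are invented):

```python
STATUS = {"green": "proceeding", "yellow": "at risk", "red": "pause decision needed"}

def render_matrix(active: dict[str, str], pipeline: list[str]) -> None:
    """Print a one-screen view of active experiments vs. the waiting pipeline."""
    for name, light in active.items():
        print(f"{light.upper():7} {name} ({STATUS[light]})")
    print("Pipeline:", ", ".join(pipeline) if pipeline else "(empty)")

render_matrix(
    {"Advisor support agent": "green", "Ticket triage agent": "yellow"},
    ["Protocol lookup agent", "Onboarding copilot"],
)
```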

Finally, focus on the metrics that really matter: not number of experiments, but number of experiments reaching production; not ideas generated, but measurable business value delivered. 

[Image: a resource concentration rule of thumb for AI agent experiments]

If It Feels Safe, It’s a Recipe for Mediocrity

Don’t forget what happened during Charlie Graham’s demonstration at Harvard: peanut butter and jelly ended up everywhere, bread included. That’s no big deal in a controlled learning environment, but the fundamental result is the same: resource waste. And waste becomes a much bigger deal at the enterprise level, as May Habib rightly warns.

The peanut-butter-spreading approach to AI agent experimentation may feel safe, but it virtually guarantees mediocrity. Isaac Sacolick’s Rule of 5 provides the discipline needed to make sure the peanut butter goes where it’s supposed to. It speaks to a hard truth about AI strategy: scoring one big success matters more than eking out small gains across 10 scattered initiatives.

It’s a hard pill to swallow in the age of rapid AI adoption. But it’s the same reality check as any hype cycle: eventually, the cost of peanut-butter spreading your resources becomes too high to ignore.