This article is part one of a four-part series exploring how to extend Coveo’s powerful search platform with vector embeddings from Amazon Bedrock to enable advanced visual search experiences for e-commerce.

The Visual Search Opportunity

Text-based search has powered e-commerce for decades, but consumer behavior is shifting. Google reports that Google Lens now processes over 20 billion visual searches each month, about 20% of which are shopping-related. Users perform over 600 million visual searches per month with Pinterest Lens and related tools. Industry research also suggests strong consumer demand: 60% of Gen Z say they want visual search capabilities when shopping online, and retail brands implementing visual search report conversion improvements of roughly 20-30%.

The question isn’t whether to implement visual search—it’s how to do it without replacing your existing search infrastructure.

Why Combine Coveo with Vector Search?

This blog series describes customer-managed implementation patterns for teams that need to experiment now. These are not the default or recommended Coveo architectures. The examples here are intended to explain design tradeoffs and governance, not to imply product commitments or turnkey support.

Coveo excels at relevance through its machine learning capabilities, which already incorporate vector search for both semantic search (via the Semantic Encoder) and real-time personalization. The Coveo Machine Learning (Coveo ML) platform includes:

  • Automatic Relevance Tuning (ART): Dynamically adjusts ranking based on user interactions and query behavior. ART learns from click and search events to automatically boost the most relevant content. Note: real-time personalization models, which rely on vector search, are a key component of the overall relevance strategy in commerce.
  • Query Suggestions (QS): Recommends relevant queries as users type, based on characters typed and historical search patterns that resulted in clicks.
  • Product Recommendations: Provides personalized product suggestions across the e-commerce journey. This model uses vector search for similarity and recommendations.
  • Listing Page Optimizer: Uses ML to dynamically organize product listings for maximum conversion.
  • Dynamic Navigation Experience (DNE): Image searches can sometimes bring up a broad set of results. This model automatically reorganizes the sidebar filters (like size, color, or material) to show the most useful options for narrowing down that specific set of visual matches.

These powerful capabilities are difficult to replicate, so rather than replace Coveo, we extend it with dedicated vector search infrastructure for image similarity. For clients with highly specialized, proprietary visual search needs who want to “Bring Your Own Model” (BYOM), Coveo’s composable architecture makes it straightforward to integrate external services such as Amazon Bedrock, letting you keep your existing Coveo investment while adding state-of-the-art image-similarity search alongside it.

Architecture Overview

This pattern is useful when a team has an immediate requirement and is prepared to own the surrounding orchestration, validation, and operations.

Our hybrid architecture leverages each service’s strengths:

| Component | Role | Why This Choice |
| --- | --- | --- |
| Coveo | Text search, facets, ML-powered relevance, analytics | Industry-leading relevance tuning |
| OpenSearch | Vector similarity search (k-NN) | Purpose-built for high-dimensional vectors |
| Bedrock Titan | Embedding generation | Managed multimodal embeddings |

Understanding Vector Embeddings

Vector embeddings convert content into numerical representations that capture semantic meaning. Two images of blue denim shirts will have similar embeddings, even if their product titles and descriptions differ.

Amazon Bedrock Titan Multimodal Embeddings generates 1024-dimensional vectors. Similarity is calculated using cosine similarity: a score of 1.0 means the embeddings point in the same direction (effectively identical content), while scores near 0.0 mean the content is unrelated.

Key specifications:

  • Model ID: amazon.titan-embed-image-v1
  • Dimension: 1024
  • Max image size: 2048 x 2048 pixels
  • Cost: ~$0.0001 per image
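To make the similarity measure concrete, here is a minimal cosine-similarity sketch in pure Python (no dependencies); the short vectors stand in for the 1024-dimensional Titan embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors.
    Near 1.0 for vectors pointing the same way, 0.0 for orthogonal ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Vectors with the same direction score ~1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # ≈ 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))            # ≈ 0.0
```

In production this computation happens inside OpenSearch’s k-NN plugin; the sketch only illustrates what the score means.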

Implementation

The complete implementation is available in our GitHub repository.

Generating Embeddings

See backend/embedding_generator/generate_embeddings.py for the complete batch processing implementation.


OpenSearch k-NN Configuration

OpenSearch’s k-NN plugin uses the HNSW (Hierarchical Navigable Small World) algorithm for efficient approximate similarity search.

Full index setup is in backend/embedding_generator/generate_embeddings.py.
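As an indication of what that setup involves, here is a sketch of a k-NN index body; the index name, field names, and HNSW parameters are illustrative choices rather than values taken from the repository:

```python
# Index body for an OpenSearch k-NN index using HNSW with cosine similarity.
# Create it with: client.indices.create(index="products", body=INDEX_BODY)
INDEX_BODY = {
    "settings": {
        "index": {
            "knn": True,                    # enable the k-NN plugin
            "knn.algo_param.ef_search": 100,
        }
    },
    "mappings": {
        "properties": {
            "image_embedding": {
                "type": "knn_vector",
                "dimension": 1024,          # matches Titan's output length
                "method": {
                    "name": "hnsw",
                    "space_type": "cosinesimil",
                    "engine": "nmslib",
                    "parameters": {"ef_construction": 128, "m": 16},
                },
            },
            "product_id": {"type": "keyword"},  # join key back to Coveo
        }
    },
}
```

Higher `ef_construction` and `m` values improve recall at the cost of index size and build time; the values above are common starting points.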

Search Flow

The Lambda handler orchestrates the search:

  1. Decode uploaded image
  2. Generate embedding via Bedrock
  3. Query OpenSearch for similar vectors
  4. Filter by similarity threshold (60%)
  5. Enrich results with Coveo metadata

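The five steps above can be sketched as a handler like the one below. The three callables (Bedrock embedding, OpenSearch k-NN query, Coveo enrichment) are injected as assumptions so the flow stays self-contained, and hits are assumed to carry a score already normalized to [0, 1]; the 0.6 threshold corresponds to step 4:

```python
import base64
import json

SIMILARITY_THRESHOLD = 0.6  # step 4: drop matches below 60% similarity

def filter_hits(hits: list[dict], threshold: float = SIMILARITY_THRESHOLD) -> list[dict]:
    """Keep only hits whose normalized score clears the threshold."""
    return [h for h in hits if h["_score"] >= threshold]

def handler(event, embed_fn, knn_search_fn, coveo_enrich_fn):
    """Orchestrates the search flow. The three callables wrap Bedrock,
    OpenSearch, and Coveo respectively."""
    image_bytes = base64.b64decode(event["body"])   # 1. decode uploaded image
    embedding = embed_fn(image_bytes)               # 2. Bedrock embedding
    hits = knn_search_fn(embedding)                 # 3. k-NN query
    matches = filter_hits(hits)                     # 4. threshold filter
    product_ids = [h["_source"]["product_id"] for h in matches]
    products = coveo_enrich_fn(product_ids)         # 5. Coveo enrichment
    return {"statusCode": 200, "body": json.dumps({"results": products})}
```

Injecting the clients also makes the orchestration easy to unit-test with stubs, without touching AWS or Coveo.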

Why Enrich with Coveo?

OpenSearch stores minimal metadata for similarity search. Coveo provides:

  • Rich product data: Full descriptions, availability, pricing
  • Faceted navigation: Filter by category, color, material
  • Personalization: Results tailored to user behavior via ART
  • Analytics: Track searches and clicks for continuous improvement
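One way to fetch that metadata is Coveo’s Search API, filtering to the matched product IDs with an advanced query expression. The `@permanentid` join field and the platform endpoint used here are assumptions to adapt to your own index schema and organization:

```python
import json
import urllib.request

SEARCH_URL = "https://platform.cloud.coveo.com/rest/search/v2"  # assumed endpoint

def build_enrich_query(product_ids: list[str]) -> dict:
    """Advanced query restricting results to the visually matched products."""
    id_list = ",".join(product_ids)
    return {
        "q": "",                              # no keyword query; filter only
        "aq": f"@permanentid==({id_list})",   # assumed join field
        "numberOfResults": len(product_ids),
    }

def enrich_with_coveo(product_ids: list[str], api_key: str) -> list[dict]:
    """POST the filter query to Coveo and return the enriched results."""
    request = urllib.request.Request(
        SEARCH_URL,
        data=json.dumps(build_enrich_query(product_ids)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["results"]
```

Routing the enrichment through Coveo (rather than duplicating product data in OpenSearch) also keeps search analytics and ART personalization in the loop.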

Cost Analysis

For a 10,000 product catalog:

| Service | Monthly Cost |
| --- | --- |
| Bedrock Titan (one-time indexing) | ~$1.00 |
| OpenSearch t3.small.search | ~$26.00 |
| Lambda (100K invocations) | ~$2.00 |
| S3 (10 GB images) | ~$0.23 |
| Total | ~$30/month |
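The one-time indexing figure follows directly from the per-image price, and the line items can be sanity-checked in a few lines (figures are the article’s estimates):

```python
# Back-of-envelope check of the cost table above.
catalog_size = 10_000
embed_cost = catalog_size * 0.0001          # one-time Titan indexing: ~$1.00
monthly = {"opensearch": 26.00, "lambda": 2.00, "s3": 0.23}
total_first_month = embed_cost + sum(monthly.values())
print(f"${total_first_month:.2f}")  # just under the table's ~$30 total
```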

Next Steps

In the next article, we’ll automate embedding generation using Coveo’s Index Pipeline Extensions (IPE), ensuring every new product is automatically indexed for visual search.

Resources