AI-powered search uses embedding models to retrieve search results based on the meaning and context of a query, not just matching keywords. Unlike LLMs, embedding models are lightweight, fast, and inexpensive to run. This tutorial uses OpenAI as the embedding provider because it is the simplest to set up. Meilisearch supports many other providers including Cohere, Mistral, Gemini, Cloudflare, Voyage, AWS Bedrock, and more.
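Conceptually, an embedding model maps each document and query to a vector, so "similar meaning" becomes "similar direction" between vectors. A minimal illustration of that comparison, using made-up three-dimensional vectors (real embedding models produce vectors with hundreds or thousands of dimensions):

```python
# Illustration only: how embedding-based search compares meaning.
# These 3-dimensional vectors are made up; real embedding models
# produce much larger ones.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: "wooden spoon" and "rolling pin" point in similar
# directions (both kitchen utensils); "laptop" does not.
wooden_spoon = [0.9, 0.1, 0.0]
rolling_pin = [0.8, 0.2, 0.1]
laptop = [0.1, 0.0, 0.9]

print(cosine_similarity(wooden_spoon, rolling_pin) > cosine_similarity(wooden_spoon, laptop))
# → True
```

This is why a query like "kitchen utensils made of wood" can retrieve a wooden spoon even when no keyword matches exactly.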
This tutorial requires an OpenAI API key.

Create a new index

First, create a new Meilisearch project. If this is your first time using Meilisearch, follow the quick start then come back to this tutorial. Next, create a kitchenware index and add this kitchenware products dataset to it. It will take Meilisearch a few moments to process your request, but you can continue to the next step while your data is indexing.
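If you prefer scripting these steps, they amount to two HTTP calls: one to create the index and one to add documents. A sketch using Python's standard library (the base URL and the sample document are placeholders; the tutorial dataset has more documents and fields; the requests are built but not sent):

```python
# Hypothetical sketch: the two requests that create the index and add
# documents. They are built but not sent; pass them to
# urllib.request.urlopen() against your own Meilisearch address to run them.
import json
import urllib.request

BASE_URL = "http://localhost:7700"  # placeholder for your Meilisearch address

def json_request(url, payload):
    """Build a POST request with a JSON body."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# 1. Create the kitchenware index.
create_index = json_request(f"{BASE_URL}/indexes", {"uid": "kitchenware"})

# 2. Add documents (one sample shown; use the full dataset in practice).
add_documents = json_request(
    f"{BASE_URL}/indexes/kitchenware/documents",
    [{"id": 1, "name": "wooden spoon"}],
)

print(create_index.full_url, add_documents.full_url)
```

Both operations are asynchronous: Meilisearch answers immediately with a task you can poll while indexing continues in the background.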

Configure an embedder

In this step, you will configure an OpenAI embedder. Meilisearch uses embedders to convert documents and queries into embeddings, numerical vectors that capture their semantic meaning. Once configured, Meilisearch generates and caches all embeddings automatically. Open a blank file in your text editor. You will build your embedder configuration one step at a time.

Choose an embedder name

In your blank file, create your embedder object:
{
  "products-openai": {}
}
products-openai is the name of your embedder for this tutorial. You can name embedders anything you like, but keep names short, simple, and easy to remember.

Choose an embedder source

Meilisearch relies on third-party embedding models to generate embeddings. These services are referred to as the embedder source. Add a new source field to your embedder object:
{
  "products-openai": {
    "source": "openAi"
  }
}

Choose an embedder model

Embedding models vary in size, cost, and quality. Add a new model field to your embedder object:
{
  "products-openai": {
    "source": "openAi",
    "model": "text-embedding-3-small"
  }
}
text-embedding-3-small is a cost-effective model for general usage. OpenAI also offers text-embedding-3-large for higher accuracy at a higher cost.

Create your API key

Log into OpenAI, or create an account if this is your first time using it. Generate a new API key using OpenAI’s web interface. Add the apiKey field to your embedder:
{
  "products-openai": {
    "source": "openAi",
    "model": "text-embedding-3-small",
    "apiKey": "OPEN_AI_API_KEY"
  }
}
Replace OPEN_AI_API_KEY with your own API key.
You may use any key tier for this tutorial. Use at least Tier 2 keys in production environments.

Design a document template

Documents can be complex objects with many fields. A document template, written in Liquid syntax, tells Meilisearch which fields to include when generating the embedding. A good template should be short and only include the most relevant information. Add the following documentTemplate to your embedder:
{
  "products-openai": {
    "source": "openAi",
    "model": "text-embedding-3-small",
    "apiKey": "OPEN_AI_API_KEY",
    "documentTemplate": "An object used in a kitchen named '{{doc.name}}'"
  }
}
This template gives general context (An object used in a kitchen) and adds the information specific to each document (doc.name, with values like wooden spoon or rolling pin). For more advanced templates, see document template best practices.
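To see what Meilisearch will actually embed for each document, here is a plain-Python stand-in for the Liquid rendering step (Meilisearch uses a real Liquid engine; this simple substitution only illustrates the idea):

```python
# Simplified stand-in for Liquid's {{doc.field}} substitution.
def render_template(template, doc):
    out = template
    for field, value in doc.items():
        out = out.replace("{{doc." + field + "}}", str(value))
    return out

template = "An object used in a kitchen named '{{doc.name}}'"
print(render_template(template, {"name": "wooden spoon"}))
# → An object used in a kitchen named 'wooden spoon'
```

The rendered string, not the raw document, is what gets sent to the embedding model.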

Send the configuration to Meilisearch

Your embedder object is ready. Send it to Meilisearch by updating your index settings:
curl \
  -X PATCH 'MEILISEARCH_URL/indexes/kitchenware/settings/embedders' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "products-openai": {
      "source": "openAi",
      "model": "text-embedding-3-small",
      "apiKey": "OPEN_AI_API_KEY",
      "documentTemplate": "An object used in a kitchen named '\''{{doc.name}}'\''"
    }
  }'
Replace MEILISEARCH_URL with the address of your Meilisearch project, and OPEN_AI_API_KEY with your OpenAI API key. Meilisearch automatically batches your documents and sends them to OpenAI for embedding generation. Embeddings are cached, so only new or modified documents are processed during subsequent indexing operations.

Perform a hybrid search

Hybrid searches are very similar to basic text searches. Query the /search endpoint with a request containing both the q and the hybrid parameters:
curl \
  -X POST 'MEILISEARCH_URL/indexes/kitchenware/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "q": "kitchen utensils made of wood",
    "hybrid": {
      "embedder": "products-openai"
    }
  }'
Meilisearch runs both keyword and semantic search, then merges the results using its smart scoring system. The most relevant results appear first, whether they matched by exact keywords or by meaning.
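As a rough mental model (not Meilisearch's exact internal algorithm), you can think of the merge as a weighted blend controlled by the semanticRatio parameter, which defaults to 0.5:

```python
# Hypothetical illustration of score blending; Meilisearch's real fusion
# of keyword and semantic results happens internally.
def blend(keyword_score, semantic_score, semantic_ratio=0.5):
    """semantic_ratio = 0 is pure keyword search, 1 is pure semantic."""
    return (1 - semantic_ratio) * keyword_score + semantic_ratio * semantic_score

even = blend(0.9, 0.7)                        # default: equal weighting
semantic_heavy = blend(0.9, 0.7, semantic_ratio=0.9)  # leans semantic
print(even > semantic_heavy)
# → True
```

Raising semanticRatio favors matches by meaning; lowering it favors exact keyword matches.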

Next steps

Choose an embedder: compare providers and pick the right one for your use case.

Custom hybrid ranking: tune semanticRatio to balance keyword and semantic results.

Guides for other embedding providers

This tutorial used OpenAI, but Meilisearch works with many providers. Each guide below walks you through the full configuration:

Cohere: cloud-hosted multilingual embeddings
Mistral: Mistral's embedding API
Google Gemini: Google's embedding models
Cloudflare: Cloudflare Workers AI embeddings
Voyage AI: specialized embedding models
Jina: multilingual embedding models
AWS Bedrock: Amazon's embedding service
HuggingFace: HuggingFace Inference Endpoints
REST embedder: connect any embedding API