System prompt strategies
The system prompt (set through workspace settings) defines the LLM’s overall behavior. Beyond basic guardrails, you can shape response quality with specific instructions.
Be specific about the domain

Generic prompts produce generic answers. Tell the LLM exactly what it is and what data it works with:
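A domain-specific system prompt might look like the sketch below, sent as part of the workspace settings payload. The store name, product types, and the `prompts.system` field shape are illustrative assumptions; check your Meilisearch version's workspace settings reference for the exact payload:

```json
{
  "prompts": {
    "system": "You are a product assistant for TrailGear, an online outdoor equipment store. You answer questions about tents, backpacks, and hiking gear using only the product catalog returned by search. If a question is unrelated to outdoor gear, politely decline to answer."
  }
}
```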
Define answer structure

Tell the LLM how to format responses. This improves consistency and readability:
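For example, formatting instructions could be appended to the system prompt like this (the wording is illustrative, and the `prompts.system` field shape is an assumption about the workspace settings payload):

```json
{
  "prompts": {
    "system": "Format every answer as follows: start with a one-sentence direct answer, then list relevant products as bullet points with the product name first, and end with a short recommendation."
  }
}
```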
Control response length

Without guidance, LLMs tend to produce long responses. Set explicit length expectations:
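A length constraint can be stated directly in the system prompt. The limits below are illustrative, not recommendations from Meilisearch:

```json
{
  "prompts": {
    "system": "Keep answers under 150 words. Mention at most three products per answer. Only give longer answers when the user explicitly asks for more detail."
  }
}
```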
Configure index chat settings

Each index has chat-specific settings that control how documents are prepared for the LLM. Configure these through the index chat settings endpoint:
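A sketch of an index chat settings payload, assuming the settings live under a `chat` object in the index settings (as in recent Meilisearch versions; the exact route and field names may differ by version, and the attribute names are from a hypothetical product index):

```json
{
  "chat": {
    "description": "Product catalog for an outdoor gear store, including names, prices, weights, and descriptions.",
    "documentTemplate": "{{ doc.name }}: {{ doc.description }}",
    "searchParameters": {
      "limit": 5
    }
  }
}
```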
Write a good index description

The description field tells the LLM what kind of data the index contains. The LLM uses this to decide whether to search the index and how to interpret results:
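A useful description names the domain, the document types, and the fields the LLM will see. The example below is illustrative:

```json
{
  "chat": {
    "description": "Camping and hiking products: tents, sleeping bags, backpacks, and accessories. Each document has a name, a price in USD, a weight in grams, and a short description."
  }
}
```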
Limit retrieved attributes
By default, Meilisearch sends all document attributes to the LLM. This can include irrelevant data that confuses the model or wastes tokens. Use attributesToRetrieve to send only what matters:
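For instance, a product index might expose only the fields the LLM needs to answer questions, assuming `attributesToRetrieve` is set inside the chat `searchParameters` (attribute names below are hypothetical):

```json
{
  "chat": {
    "searchParameters": {
      "attributesToRetrieve": ["name", "description", "price", "weight"]
    }
  }
}
```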
Tune search parameters for chat
Conversational queries are often longer and more natural than keyword searches. Adjust search parameters to match:

- Higher semanticRatio (0.6-0.8): natural language questions benefit from semantic search more than keyword matching
- Lower limit (3-5): the LLM processes fewer, more relevant documents better than many loosely related ones
- Broader matching strategy: use "matchingStrategy": "last" (the default) to match as many terms as possible
Optimize document templates for chat
If your index uses an embedder, the documentTemplate affects both embedding quality and the text the LLM sees during conversational search. A good template for chat should be readable as natural language:
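For example, a template that renders each document as a readable sentence, using the Liquid `{{ doc.field }}` syntax Meilisearch templates use (the field names are from a hypothetical product index):

```json
{
  "chat": {
    "documentTemplate": "{{ doc.name }} is a {{ doc.category }} that costs {{ doc.price }} USD and weighs {{ doc.weight }} g. {{ doc.description }}"
  }
}
```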
Test and iterate
After configuring prompts and settings, test with realistic queries to evaluate quality:

- Factual questions: “What is the lightest 2-person tent you carry?” (should cite specific products with weights)
- Comparison questions: “Should I get the TrailRunner Pro or the SpeedHike 3?” (should compare features)
- Vague questions: “I need something for a weekend trip” (should ask clarifying questions or give broad recommendations)
- Out-of-scope questions: “What is the weather forecast?” (should decline politely)
For each response, check:

- Is the answer grounded in the search results?
- Is the response length appropriate?
- Does the formatting match your instructions?
- Are the recommended documents relevant to the question?
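To run these test queries, you can send them as a chat completions request body. The sketch below assumes the OpenAI-compatible request shape Meilisearch's chat route accepts; the model name is an illustrative assumption, and streaming may be required depending on your version:

```json
{
  "model": "gpt-4o-mini",
  "stream": true,
  "messages": [
    {
      "role": "user",
      "content": "What is the lightest 2-person tent you carry?"
    }
  ]
}
```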
Next steps
- Configure guardrails: restrict responses to indexed data and defined topics
- Configure index chat settings: full reference for index-level chat configuration
- Document template best practices: write effective templates for embedding and chat