Common error scenarios
| Scenario | HTTP status | Cause |
|---|---|---|
| LLM provider unreachable | 502 or 504 | Network issue or provider outage |
| LLM rate limited | 429 | Too many requests to the LLM provider |
| No search results | 200 (empty) | Query does not match any documents |
| Invalid workspace | 404 | Workspace name does not exist |
| Invalid model | 400 | Model identifier not recognized by the provider |
| Context too long | 400 | Conversation history exceeds the model’s context window |
Handle LLM provider errors
When the LLM provider is unavailable or returns an error, the chat completions endpoint forwards the error. Wrap your requests in error handling to provide a fallback.

Fall back to regular search
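As a sketch, a single wrapper can catch provider failures (the 502/504 and network cases from the table above) and fall back to a plain keyword search. The host, workspace, `articles` index, model name, and key placeholder are all assumptions — adjust them to your setup:

```javascript
// Sketch: catch provider-side failures on the conversational endpoint and
// fall back to a standard Meilisearch keyword search.
// MEILI_HOST, WORKSPACE, the `articles` index, and the model are assumptions.
const MEILI_HOST = "http://localhost:7700";
const WORKSPACE = "my-workspace";
const HEADERS = {
  "Content-Type": "application/json",
  Authorization: "Bearer MEILISEARCH_API_KEY", // replace with your own key
};

async function askWithFallback(query, messages) {
  try {
    const res = await fetch(`${MEILI_HOST}/chats/${WORKSPACE}/chat/completions`, {
      method: "POST",
      headers: HEADERS,
      body: JSON.stringify({ model: "gpt-4o-mini", messages, stream: false }),
    });
    if (res.ok) return { mode: "conversational", data: await res.json() };
    // Non-2xx response (502/504 outage, 429 rate limit, ...): fall through
  } catch (err) {
    // fetch() rejects on network-level failure: also fall through
  }
  // Standard keyword search as the fallback path
  const search = await fetch(`${MEILI_HOST}/indexes/articles/search`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({ q: query, limit: 10 }),
  });
  return { mode: "keyword", data: await search.json() };
}
```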
When conversational search fails, falling back to a standard keyword or hybrid search ensures users still get results.

Handle empty search results
When the LLM cannot find relevant documents, it may hallucinate an answer or give a vague response. Use guardrails in your system prompt to handle this. You can also inspect the `_meiliSearchSources` tool call: if the sources array is empty, display a helpful message.
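A minimal sketch of that check, assuming an OpenAI-style tool-call shape for the `_meiliSearchSources` call (field names may differ in your client):

```javascript
// Sketch: pull the sources reported by the _meiliSearchSources tool call out
// of an assistant message, and show a helpful message when none were found.
// The exact response shape is an assumption (OpenAI-style tool calls).
function extractSources(message) {
  const call = (message.tool_calls ?? []).find(
    (c) => c.function?.name === "_meiliSearchSources"
  );
  if (!call) return [];
  try {
    return JSON.parse(call.function.arguments).sources ?? [];
  } catch {
    return []; // malformed arguments: treat as no sources
  }
}

function renderAnswer(message) {
  const sources = extractSources(message);
  if (sources.length === 0) {
    return "No results found. Try different keywords or a broader question.";
  }
  return message.content;
}
```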
Handle rate limiting
LLM providers enforce rate limits based on requests per minute or tokens per minute. When you hit these limits, implement backoff. You can also:

- Use a higher-tier API key with your LLM provider
- Implement client-side debouncing to avoid sending requests on every keystroke
- Cache responses for repeated questions
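A rough sketch of exponential backoff with jitter for 429 responses; the retry count and base delay are illustrative, not Meilisearch or provider defaults:

```javascript
// Sketch: retry 429 responses with exponential backoff plus jitter.
// Base delay and retry count are illustrative choices.
function backoffDelayMs(attempt, baseMs = 500) {
  const exponential = baseMs * 2 ** attempt; // 500, 1000, 2000, ...
  return exponential + Math.floor(Math.random() * baseMs); // add jitter
}

async function fetchWithBackoff(url, options, maxRetries = 3) {
  let res;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    res = await fetch(url, options);
    if (res.status !== 429 || attempt === maxRetries) return res;
    // Honor the provider's Retry-After header (seconds) when present
    const retryAfter = Number(res.headers?.get?.("Retry-After"));
    const delayMs = retryAfter > 0 ? retryAfter * 1000 : backoffDelayMs(attempt);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return res;
}
```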
Manage context window limits
Long conversations can exceed the LLM’s context window. When this happens, the provider returns an error. Trim older messages from the conversation history to stay within limits.

Display errors in your UI
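One way to avoid surfacing context-window errors at all is to trim the history before each request. A rough sketch that keeps the system prompt plus the most recent turns, using message count as a crude stand-in for real token counting (the limit of 10 is arbitrary):

```javascript
// Sketch: keep the system prompt plus the most recent messages so the
// request stays under the model's context window. Counting messages is a
// crude stand-in for counting tokens; the limit of 10 is arbitrary.
function trimHistory(messages, maxMessages = 10) {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest.slice(-maxMessages)];
}
```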
When an error occurs, give users clear feedback and actionable next steps. Avoid exposing raw error messages or stack traces:

| Error type | User-facing message |
|---|---|
| Provider down | “AI search is temporarily unavailable. Showing regular search results.” |
| Rate limited | “High demand right now. Please wait a moment and try again.” |
| No results | “No results found. Try different keywords or a broader question.” |
| Network error | “Connection issue. Check your internet and try again.” |
| Context too long | “This conversation is getting long. Start a new conversation for best results.” |
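A minimal lookup that applies this copy in code; the internal reason strings are hypothetical application-level codes, not values returned by Meilisearch:

```javascript
// Sketch: map internal error reasons to the user-facing copy in the table.
// The reason codes are hypothetical; use whatever your app produces.
const USER_MESSAGES = {
  "provider-down": "AI search is temporarily unavailable. Showing regular search results.",
  "rate-limited": "High demand right now. Please wait a moment and try again.",
  "no-results": "No results found. Try different keywords or a broader question.",
  "network-error": "Connection issue. Check your internet and try again.",
  "context-too-long": "This conversation is getting long. Start a new conversation for best results.",
};

function userMessageFor(reason) {
  // Never surface raw errors; fall back to a generic message
  return USER_MESSAGES[reason] ?? "Something went wrong. Please try again.";
}
```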
Next steps
Configure guardrails
Reduce hallucination with system prompts
Stream chat responses
Implement real-time streaming for chat responses