MistralLLMConfig

Configuration for the Mistral LLM provider.

Defined in: src/providers/llm/mistral/MistralLLM.ts:60

Remarks

Extends OpenAICompatibleLLMConfig with the convenience alias mistralApiKey. Provide either mistralApiKey/apiKey (direct API access) or proxyUrl (server-side proxy). At least one must be set.

Peer dependency: None (uses native fetch with the OpenAI chat completions format).
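
Because the provider speaks the OpenAI chat completions format over native fetch, the request it ultimately issues looks roughly like the following. This is a sketch for orientation only; the endpoint URL and payload fields are assumptions based on Mistral's public chat completions API, not something this reference specifies.

// Approximate wire format (illustrative sketch, not the provider's actual code)
const apiKey = 'mis-...';
const response = await fetch('https://api.mistral.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${apiKey}`,
  },
  body: JSON.stringify({
    model: 'mistral-small-latest',
    messages: [{ role: 'user', content: 'Hello' }],
    stream: false,
  }),
});
const data = await response.json();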

Example

// Direct API access
const config: MistralLLMConfig = {
  mistralApiKey: 'mis-...',
  model: 'mistral-small-latest',
  stream: true,
  temperature: 0.7,
};

// Via server-side proxy
const proxyConfig: MistralLLMConfig = {
  proxyUrl: 'http://localhost:3000/api/proxy/mistral',
  model: 'mistral-large-latest',
};

See

OpenAICompatibleLLMConfig for inherited properties (apiKey, proxyUrl, endpoint, etc.).

Extends

OpenAICompatibleLLMConfig

Properties

apiKey? : string (default: undefined)
API key or authentication token for the provider. For client-side usage, consider using a proxy server to keep API keys secure; the SDK provides Express, Next.js, and Node adapters for this purpose.
Inherited from OpenAICompatibleLLMConfig.apiKey. Defined in src/core/types/providers.ts:67

authType? : "token" | "bearer" (default: provider-specific; typically 'token' for Deepgram, ignored for REST providers)
Authentication type for providers that support multiple auth mechanisms. Controls how the apiKey is sent to the provider:
- 'token': WebSocket subprotocol ['token', apiKey] or header Authorization: Token <key>. This is the default for Deepgram providers.
- 'bearer': WebSocket subprotocol ['bearer', token] or header Authorization: Bearer <token>. Use this for OAuth tokens or providers that expect Bearer auth.
REST/SDK providers (Anthropic, OpenAI) handle auth through their SDK constructors and ignore this field.
Inherited from OpenAICompatibleLLMConfig.authType. Defined in src/core/types/providers.ts:111

debug? : boolean (default: false)
Whether to enable debug logging for this provider. When true, the provider emits detailed internal logs. This is separate from the SDK-level LoggingConfig.
Inherited from OpenAICompatibleLLMConfig.debug. Defined in src/core/types/providers.ts:122

endpoint? : string (default: undefined)
Custom endpoint URL to override the provider's default API endpoint. Useful for self-hosted instances, proxy servers, or development environments.
Inherited from OpenAICompatibleLLMConfig.endpoint. Defined in src/core/types/providers.ts:75

maxRetries? : number (default: 3)
Maximum number of retries for failed API requests.
Inherited from OpenAICompatibleLLMConfig.maxRetries. Defined in src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:79

maxTokens? : number (default: undefined)
Maximum number of tokens to generate in the response. For voice applications, lower values (100-300) help keep responses concise and reduce TTS latency.
Inherited from OpenAICompatibleLLMConfig.maxTokens. Defined in src/core/types/providers.ts:677

mistralApiKey? : string (default: undefined)
Mistral API key. Convenience alias for apiKey. If both mistralApiKey and apiKey are set, mistralApiKey takes precedence. Obtain a key from the Mistral console.
Defined in src/providers/llm/mistral/MistralLLM.ts:70

model : string
Model identifier for the provider. Examples: 'gpt-4', 'llama-3.3-70b-versatile', 'gemini-2.0-flash'.
Inherited from OpenAICompatibleLLMConfig.model. Defined in src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:73

proxyUrl? : string (default: undefined)
URL of a CompositeVoice proxy server endpoint for this provider. When set, requests are routed through the proxy, which injects the real API key server-side; this keeps API keys out of the browser. For WebSocket providers the HTTP URL is automatically converted to ws(s)://. At least one of apiKey or proxyUrl must be set for providers that require authentication (all except NativeSTT, NativeTTS, and WebLLM). Example: proxyUrl: 'http://localhost:3000/api/proxy/deepgram'. See the proxy-route sketch after this property list.
Inherited from OpenAICompatibleLLMConfig.proxyUrl. Defined in src/core/types/providers.ts:93

stopSequences? : string[] (default: undefined)
Sequences that cause the LLM to stop generating. When the model generates any of these sequences, generation halts. Useful for controlling response boundaries.
Inherited from OpenAICompatibleLLMConfig.stopSequences. Defined in src/core/types/providers.ts:715

stream? : boolean (default: undefined)
Whether to stream the LLM response token by token. When true, the provider yields tokens incrementally via an async iterable. Streaming is essential for low-latency voice applications, as it allows TTS to begin synthesizing before the full response is generated.
Inherited from OpenAICompatibleLLMConfig.stream. Defined in src/core/types/providers.ts:706

systemPrompt? : string (default: undefined)
System prompt providing instructions and context to the LLM. Sets the behavior and persona of the assistant. For voice applications, include instructions to keep responses brief and conversational.
Inherited from OpenAICompatibleLLMConfig.systemPrompt. Defined in src/core/types/providers.ts:696

temperature? : number (default: undefined)
Temperature for controlling generation randomness. Values range from 0 (deterministic) to 2 (highly creative); lower values produce more focused responses, higher values increase variety.
Inherited from OpenAICompatibleLLMConfig.temperature. Defined in src/core/types/providers.ts:668

timeout? : number (default: undefined)
Request timeout in milliseconds. Applies to HTTP requests (REST providers) and connection establishment (WebSocket providers). Set to 0 for no timeout.
Inherited from OpenAICompatibleLLMConfig.timeout. Defined in src/core/types/providers.ts:131

topP? : number (default: undefined)
Top-P (nucleus) sampling parameter. Limits token selection to the smallest set whose cumulative probability exceeds this value. Values range from 0 to 1. Often used as an alternative to temperature.
Inherited from OpenAICompatibleLLMConfig.topP. Defined in src/core/types/providers.ts:687
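
To see the generation parameters above in combination, here is a fuller configuration tuned for a voice use case. This is a sketch; the values are illustrative, not recommended defaults.

// Fuller configuration combining the generation parameters above
// (illustrative values, not recommended defaults)
const voiceConfig: MistralLLMConfig = {
  mistralApiKey: 'mis-...',
  model: 'mistral-small-latest',
  systemPrompt: 'You are a concise voice assistant. Keep replies under two sentences.',
  stream: true,        // let TTS start before the full reply is generated
  maxTokens: 200,      // keep spoken responses short
  temperature: 0.7,
  topP: 0.9,
  stopSequences: ['\n\n'],
  timeout: 30_000,     // 30 s request timeout
  maxRetries: 3,
  debug: false,
};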

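As promised in the proxyUrl entry, here is the proxy flow in practice: the browser config omits the key, and a server route injects it before forwarding the request. The SDK ships Express, Next.js, and Node adapters for this (per the apiKey remarks above); the hand-rolled Express route below is only a sketch of the general idea, assuming Mistral's public chat completions endpoint, and is not the SDK adapter's actual implementation.

import express from 'express';

const app = express();
app.use(express.json());

// Hypothetical proxy route matching proxyUrl: 'http://localhost:3000/api/proxy/mistral'.
// The real key lives only on the server; the browser never sees it.
app.post('/api/proxy/mistral', async (req, res) => {
  const upstream = await fetch('https://api.mistral.ai/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.MISTRAL_API_KEY}`, // injected server-side
    },
    body: JSON.stringify(req.body),
  });
  res.status(upstream.status).send(await upstream.text());
});

app.listen(3000);

Note that this sketch buffers the full response; a proxy serving stream: true configs would need to forward the streamed body incrementally instead.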