
Mistral

Use Mistral models as the LLM provider in a CompositeVoice pipeline.

Use MistralLLM when you need strong multilingual support, especially for European languages, or want a cost-effective alternative to larger models.

Prerequisites

  • A Mistral API key or a CompositeVoice proxy server
  • No additional dependencies required. MistralLLM uses native fetch internally.
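The proxy server mentioned above can be any endpoint that forwards chat requests to Mistral with your key attached server-side, so the key never reaches the browser. A minimal sketch of the forwarding logic (the route shape and `buildMistralRequest` helper are illustrative assumptions; the upstream URL and Authorization header follow Mistral's public REST API):

```typescript
// Build the upstream request for Mistral's chat completions endpoint.
// The incoming body is forwarded as-is; the API key stays server-side.
const MISTRAL_URL = 'https://api.mistral.ai/v1/chat/completions';

export function buildMistralRequest(body: unknown, apiKey: string) {
  return {
    url: MISTRAL_URL,
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify(body),
    },
  };
}

// Example route handler (framework-agnostic sketch):
export async function proxyHandler(req: Request): Promise<Response> {
  const apiKey = process.env.MISTRAL_API_KEY ?? '';
  const { url, init } = buildMistralRequest(await req.json(), apiKey);
  return fetch(url, init); // streams the upstream response back to the browser
}
```

Mount the handler at the path you pass as `proxyUrl` (e.g. `/api/proxy/mistral`).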

Basic setup

import { CompositeVoice, MistralLLM, NativeSTT, NativeTTS } from '@lukeocodes/composite-voice';

const agent = new CompositeVoice({
  providers: [
    new NativeSTT({ language: 'en-US' }),
    new MistralLLM({
      proxyUrl: '/api/proxy/mistral',
      model: 'mistral-small-latest',
      systemPrompt: 'You are a concise voice assistant. Keep answers under two sentences.',
    }),
    new NativeTTS(),
  ],
});

await agent.initialize();
await agent.startListening();

Configuration options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | 'mistral-small-latest' | Model identifier. See model variants below. |
| systemPrompt | string | | System-level instructions for the assistant. |
| temperature | number | | Randomness (0 = deterministic, 2 = creative). |
| maxTokens | number | | Maximum tokens per response. |
| topP | number | | Nucleus sampling threshold (0 to 1). |
| stream | boolean | true | Stream tokens incrementally. |
| proxyUrl | string | | CompositeVoice proxy endpoint. Recommended for browsers. |
| mistralApiKey | string | | Mistral API key. Convenience alias for apiKey. |
| apiKey | string | | Direct API key. mistralApiKey takes precedence if both are set. |
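Several of these options combined in one instantiation, here using a direct API key for server-side use (all option names come from the table above; the specific values are illustrative):

```typescript
import { MistralLLM } from '@lukeocodes/composite-voice';

// Direct-key configuration for server-side use.
// In the browser, prefer proxyUrl so the key is never exposed.
const llm = new MistralLLM({
  apiKey: process.env.MISTRAL_API_KEY,
  model: 'mistral-large-latest',
  temperature: 0,   // deterministic output
  maxTokens: 128,   // keep spoken answers short
  topP: 0.9,
  stream: true,     // default; emit tokens as they arrive
  systemPrompt: 'You are a concise voice assistant.',
});
```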

Model variants

| Model | Speed | Notes |
| --- | --- | --- |
| mistral-small-latest | Fast | Default. Good speed-to-quality ratio for voice. |
| mistral-medium-latest | Moderate | Balanced capability. |
| mistral-large-latest | Slower | Most capable Mistral model. |
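If you want to trade latency for capability at runtime, a small helper can map a desired tier to a model name. The tier names here are illustrative; the model identifiers are the ones from the table:

```typescript
// Map a latency/capability tier to a Mistral model identifier.
type Tier = 'fast' | 'balanced' | 'capable';

const MODEL_BY_TIER: Record<Tier, string> = {
  fast: 'mistral-small-latest',      // lowest latency, best for voice
  balanced: 'mistral-medium-latest', // middle ground
  capable: 'mistral-large-latest',   // highest quality, slower
};

export function modelForTier(tier: Tier = 'fast'): string {
  return MODEL_BY_TIER[tier];
}
```

The result can be passed straight to the `model` option of MistralLLM.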

Complete example

import {
  CompositeVoice,
  MicrophoneInput,
  MistralLLM,
  DeepgramSTT,
  DeepgramTTS,
  BrowserAudioOutput,
} from '@lukeocodes/composite-voice';

const agent = new CompositeVoice({
  providers: [
    new MicrophoneInput(),
    new DeepgramSTT({
      proxyUrl: '/api/proxy/deepgram',
      language: 'en',
      options: { model: 'nova-3', smartFormat: true },
    }),
    new MistralLLM({
      proxyUrl: '/api/proxy/mistral',
      model: 'mistral-small-latest',
      temperature: 0.7,
      maxTokens: 256,
      systemPrompt: 'Tu es un assistant vocal amical. Réponds brièvement.', // "You are a friendly voice assistant. Answer briefly."
    }),
    new DeepgramTTS({
      proxyUrl: '/api/proxy/deepgram',
      voice: 'aura-2-thalia-en',
    }),
    new BrowserAudioOutput(),
  ],
  conversationHistory: { enabled: true, maxTurns: 10 },
});

await agent.initialize();
await agent.startListening();

Tips

  • Mistral excels at multilingual tasks. French and other European languages produce especially good results.
  • mistral-small-latest is best for voice. It provides the fastest responses while maintaining quality for conversational use cases.
  • MistralLLM uses native fetch — no @mistralai/mistralai or openai package needed.
  • Model names use the -latest suffix. This always points to the most recent version of that model tier.
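To illustrate what `stream: true` means over native fetch: Mistral's streaming chat completions API sends server-sent events, where each `data:` line carries a JSON chunk and the stream ends with a `data: [DONE]` sentinel. A minimal sketch of extracting text deltas from one buffered chunk of that format (the event shape follows Mistral's documented OpenAI-compatible streaming responses; `extractDeltas` is an illustrative helper, not part of the library, and error handling is omitted):

```typescript
// Extract text deltas from a buffer of SSE lines as sent by Mistral's
// streaming chat completions API ("data:" framing, [DONE] sentinel).
export function extractDeltas(buffer: string): string[] {
  const tokens: string[] = [];
  for (const line of buffer.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue; // skip blank lines and comments
    const payload = trimmed.slice(5).trim();
    if (payload === '[DONE]') break;            // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (typeof delta === 'string') tokens.push(delta);
  }
  return tokens;
}
```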

© 2026 CompositeVoice. All rights reserved.
