OpenAICompatibleLLMConfig

Configuration for any OpenAI-compatible LLM provider.

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:67

Remarks

Provide either apiKey (direct API access) or proxyUrl (server-side proxy). At least one must be set; if both are provided, proxyUrl takes precedence and requests are sent through the proxy, which injects the real API key server-side.

Example

// Direct API access
const config: OpenAICompatibleLLMConfig = {
  apiKey: 'sk-...',
  model: 'gpt-4',
  baseURL: 'https://api.openai.com/v1',
  stream: true,
};

// Via server-side proxy
const proxyConfig: OpenAICompatibleLLMConfig = {
  proxyUrl: 'http://localhost:3000/api/proxy/openai',
  model: 'gpt-4',
};
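The precedence rule from the Remarks section can be sketched as a small helper. This is an illustrative sketch only, not SDK code: `resolveTransport` and `AuthConfig` are hypothetical names, and the real SDK performs this resolution internally.

```typescript
// Hypothetical sketch of the documented precedence rule:
// proxyUrl wins when both proxyUrl and apiKey are set.
interface AuthConfig {
  apiKey?: string;
  proxyUrl?: string;
}

function resolveTransport(config: AuthConfig): { kind: 'proxy' | 'direct'; target: string } {
  if (config.proxyUrl) {
    // Proxy takes precedence; the real API key is injected server-side.
    return { kind: 'proxy', target: config.proxyUrl };
  }
  if (config.apiKey) {
    return { kind: 'direct', target: config.apiKey };
  }
  // Required for all providers except NativeSTT, NativeTTS, and WebLLM.
  throw new Error('Provide at least one of apiKey or proxyUrl');
}
```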

See

LLMProviderConfig for inherited base properties (temperature, maxTokens, systemPrompt, etc.).

Extends

LLMProviderConfig

Properties

apiKey?
Type: string
Default: undefined
API key or authentication token for the provider.
Remarks: For client-side usage, consider using a proxy server to keep API keys secure. The SDK provides Express, Next.js, and Node adapters for this purpose.
Inherited from: LLMProviderConfig.apiKey
Defined in: src/core/types/providers.ts:67

authType?
Type: "token" | "bearer"
Default: provider-specific (typically 'token' for Deepgram; ignored for REST providers)
Authentication type for providers that support multiple auth mechanisms.
Remarks: Controls how the apiKey is sent to the provider:
- 'token' — WebSocket subprotocol ['token', apiKey] or header Authorization: Token <key>. This is the default for Deepgram providers.
- 'bearer' — WebSocket subprotocol ['bearer', token] or header Authorization: Bearer <token>. Use this for OAuth tokens or providers that expect Bearer auth.
REST/SDK providers (Anthropic, OpenAI) handle auth through their SDK constructors and ignore this field.
Inherited from: LLMProviderConfig.authType
Defined in: src/core/types/providers.ts:111

debug?
Type: boolean
Default: false
Whether to enable debug logging for this provider.
Remarks: When true, the provider emits detailed internal logs. This is separate from the SDK-level LoggingConfig.
Inherited from: LLMProviderConfig.debug
Defined in: src/core/types/providers.ts:122

endpoint?
Type: string
Default: undefined
Custom endpoint URL to override the provider's default API endpoint.
Remarks: Useful for self-hosted instances, proxy servers, or development environments.
Inherited from: LLMProviderConfig.endpoint
Defined in: src/core/types/providers.ts:75

maxRetries?
Type: number
Default: 3
Maximum number of retries for failed API requests.
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:79

maxTokens?
Type: number
Default: undefined
Maximum number of tokens to generate in the response.
Remarks: For voice applications, lower values (100-300) help keep responses concise and reduce TTS latency.
Inherited from: LLMProviderConfig.maxTokens
Defined in: src/core/types/providers.ts:677

model
Type: string
Default: undefined
Model identifier for the provider.
Example: 'gpt-4', 'llama-3.3-70b-versatile', 'gemini-2.0-flash'
Overrides: LLMProviderConfig.model
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:73

proxyUrl?
Type: string
Default: undefined
URL of a CompositeVoice proxy server endpoint for this provider.
Remarks: When set, requests are routed through the proxy, which injects the real API key server-side. This keeps API keys out of the browser. For WebSocket providers the HTTP URL is automatically converted to ws(s)://. At least one of apiKey or proxyUrl must be set for providers that require authentication (all except NativeSTT, NativeTTS, and WebLLM).
Example: proxyUrl: 'http://localhost:3000/api/proxy/deepgram'
Inherited from: LLMProviderConfig.proxyUrl
Defined in: src/core/types/providers.ts:93

stopSequences?
Type: string[]
Default: undefined
Sequences that cause the LLM to stop generating.
Remarks: When the model generates any of these sequences, generation halts. Useful for controlling response boundaries.
Inherited from: LLMProviderConfig.stopSequences
Defined in: src/core/types/providers.ts:715

stream?
Type: boolean
Default: undefined
Whether to stream the LLM response token by token.
Remarks: When true, the provider yields tokens incrementally via an async iterable. Streaming is essential for low-latency voice applications, as it allows TTS to begin synthesizing before the full response is generated.
Inherited from: LLMProviderConfig.stream
Defined in: src/core/types/providers.ts:706

systemPrompt?
Type: string
Default: undefined
System prompt providing instructions and context to the LLM.
Remarks: Sets the behavior and persona of the assistant. For voice applications, include instructions to keep responses brief and conversational.
Inherited from: LLMProviderConfig.systemPrompt
Defined in: src/core/types/providers.ts:696

temperature?
Type: number
Default: undefined
Temperature for controlling generation randomness.
Remarks: Values range from 0 (deterministic) to 2 (highly creative). Lower values produce more focused responses; higher values increase variety.
Inherited from: LLMProviderConfig.temperature
Defined in: src/core/types/providers.ts:668

timeout?
Type: number
Default: undefined
Request timeout in milliseconds.
Remarks: Applies to HTTP requests (REST providers) and connection establishment (WebSocket providers). Set to 0 for no timeout.
Inherited from: LLMProviderConfig.timeout
Defined in: src/core/types/providers.ts:131

topP?
Type: number
Default: undefined
Top-P (nucleus) sampling parameter.
Remarks: Limits token selection to the smallest set whose cumulative probability exceeds this value. Values range from 0 to 1. Often used as an alternative to temperature.
Inherited from: LLMProviderConfig.topP
Defined in: src/core/types/providers.ts:687
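The properties above can be combined into a voice-oriented configuration. The sketch below is illustrative: the local interface is a simplified stand-in for the real `OpenAICompatibleLLMConfig` (which extends LLMProviderConfig), listing only the fields documented here, and the specific values are examples, not recommendations from the SDK.

```typescript
// Simplified stand-in for the documented interface (illustrative only).
interface OpenAICompatibleLLMConfig {
  apiKey?: string;
  proxyUrl?: string;
  model: string;
  stream?: boolean;
  temperature?: number;
  maxTokens?: number;
  systemPrompt?: string;
  stopSequences?: string[];
  timeout?: number;
  maxRetries?: number;
}

// A proxy-based configuration following the table's voice guidance:
// streaming enabled for low latency, maxTokens kept low (100-300)
// so spoken responses stay concise.
const voiceConfig: OpenAICompatibleLLMConfig = {
  proxyUrl: 'http://localhost:3000/api/proxy/openai',
  model: 'gpt-4',
  stream: true,
  temperature: 0.7,
  maxTokens: 200,
  systemPrompt: 'You are a helpful voice assistant. Keep answers brief and conversational.',
  maxRetries: 3,
};
```

Because proxyUrl is set, no apiKey is needed in the browser; the proxy injects the real key server-side.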

© 2026 CompositeVoice. All rights reserved.
