# OpenAILLMConfig

Defined in: src/providers/llm/openai/OpenAILLM.ts:53

Configuration for the OpenAI LLM provider.
## Remarks

Extends `OpenAICompatibleLLMConfig` with the optional `organizationId` field for multi-organization OpenAI accounts.
## Example

```ts
// Direct API access
const config: OpenAILLMConfig = {
  apiKey: 'sk-...',
  model: 'gpt-4',
  organizationId: 'org-...',
  stream: true,
  systemPrompt: 'You are a helpful voice assistant.',
};

// Via a server-side proxy (recommended for browser apps)
const proxyConfig: OpenAILLMConfig = {
  proxyUrl: 'http://localhost:3000/api/proxy/openai',
  model: 'gpt-4o-mini',
};
```
## See

`OpenAICompatibleLLMConfig` for inherited properties (`apiKey`, `proxyUrl`, `endpoint`, etc.).

## Extends

- `OpenAICompatibleLLMConfig`

## Properties
| Property | Type | Default value | Description | Inherited from | Defined in |
|---|---|---|---|---|---|
| `apiKey?` | `string` | `undefined` | API key or authentication token for the provider. **Remarks:** For client-side usage, consider using a proxy server to keep API keys secure. The SDK provides Express, Next.js, and Node adapters for this purpose. | `OpenAICompatibleLLMConfig.apiKey` | src/core/types/providers.ts:67 |
| `authType?` | `"token"` \| `"bearer"` | Provider-specific (typically `'token'` for Deepgram; ignored for REST providers) | Authentication type for providers that support multiple auth mechanisms. **Remarks:** Controls how the `apiKey` is sent to the provider. `'token'` uses the WebSocket subprotocol `['token', apiKey]` or the header `Authorization: Token <key>`; this is the default for Deepgram providers. `'bearer'` uses the WebSocket subprotocol `['bearer', token]` or the header `Authorization: Bearer <token>`; use this for OAuth tokens or providers that expect Bearer auth. REST/SDK providers (Anthropic, OpenAI) handle auth through their SDK constructors and ignore this field. | `OpenAICompatibleLLMConfig.authType` | src/core/types/providers.ts:111 |
| `debug?` | `boolean` | `false` | Whether to enable debug logging for this provider. **Remarks:** When `true`, the provider emits detailed internal logs. This is separate from the SDK-level `LoggingConfig`. | `OpenAICompatibleLLMConfig.debug` | src/core/types/providers.ts:122 |
| `endpoint?` | `string` | `undefined` | Custom endpoint URL to override the provider's default API endpoint. **Remarks:** Useful for self-hosted instances, proxy servers, or development environments. | `OpenAICompatibleLLMConfig.endpoint` | src/core/types/providers.ts:75 |
| `maxRetries?` | `number` | `3` | Maximum number of retries for failed API requests. | `OpenAICompatibleLLMConfig.maxRetries` | src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:79 |
| `maxTokens?` | `number` | `undefined` | Maximum number of tokens to generate in the response. **Remarks:** For voice applications, lower values (100-300) help keep responses concise and reduce TTS latency. | `OpenAICompatibleLLMConfig.maxTokens` | src/core/types/providers.ts:677 |
| `model` | `string` | `undefined` | Model identifier for the provider. **Example:** `'gpt-4'`, `'llama-3.3-70b-versatile'`, `'gemini-2.0-flash'` | `OpenAICompatibleLLMConfig.model` | src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:73 |
| `organizationId?` | `string` | `undefined` | OpenAI organization ID for multi-organization accounts. **Remarks:** If your OpenAI API key belongs to multiple organizations, set this to route requests to a specific organization. Passed as the `organization` option to the OpenAI SDK constructor. | - | src/providers/llm/openai/OpenAILLM.ts:64 |
| `proxyUrl?` | `string` | `undefined` | URL of a CompositeVoice proxy server endpoint for this provider. **Remarks:** When set, requests are routed through the proxy, which injects the real API key server-side. This keeps API keys out of the browser. For WebSocket providers the HTTP URL is automatically converted to `ws(s)://`. At least one of `apiKey` or `proxyUrl` must be set for providers that require authentication (all except NativeSTT, NativeTTS, and WebLLM). **Example:** `proxyUrl: 'http://localhost:3000/api/proxy/deepgram'` | `OpenAICompatibleLLMConfig.proxyUrl` | src/core/types/providers.ts:93 |
| `stopSequences?` | `string[]` | `undefined` | Sequences that cause the LLM to stop generating. **Remarks:** When the model generates any of these sequences, generation halts. Useful for controlling response boundaries. | `OpenAICompatibleLLMConfig.stopSequences` | src/core/types/providers.ts:715 |
| `stream?` | `boolean` | `undefined` | Whether to stream the LLM response token by token. **Remarks:** When `true`, the provider yields tokens incrementally via an async iterable. Streaming is essential for low-latency voice applications, as it lets TTS begin synthesizing before the full response is generated. | `OpenAICompatibleLLMConfig.stream` | src/core/types/providers.ts:706 |
| `systemPrompt?` | `string` | `undefined` | System prompt providing instructions and context to the LLM. **Remarks:** Sets the behavior and persona of the assistant. For voice applications, include instructions to keep responses brief and conversational. | `OpenAICompatibleLLMConfig.systemPrompt` | src/core/types/providers.ts:696 |
| `temperature?` | `number` | `undefined` | Temperature for controlling generation randomness. **Remarks:** Values range from 0 (deterministic) to 2 (highly creative). Lower values produce more focused responses; higher values increase variety. | `OpenAICompatibleLLMConfig.temperature` | src/core/types/providers.ts:668 |
| `timeout?` | `number` | `undefined` | Request timeout in milliseconds. **Remarks:** Applies to HTTP requests (REST providers) and connection establishment (WebSocket providers). Set to `0` for no timeout. | `OpenAICompatibleLLMConfig.timeout` | src/core/types/providers.ts:131 |
| `topP?` | `number` | `undefined` | Top-P (nucleus) sampling parameter. **Remarks:** Limits token selection to the smallest set whose cumulative probability exceeds this value. Values range from 0 to 1. Often used as an alternative to temperature. | `OpenAICompatibleLLMConfig.topP` | src/core/types/providers.ts:687 |
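Several of the properties above (`stream`, `maxTokens`, `temperature`, `systemPrompt`) interact when tuning for voice latency. The sketch below shows a voice-oriented configuration; it uses a hypothetical local interface mirroring the documented fields rather than importing the real `OpenAILLMConfig` type, so the shape compiles standalone. Field names and semantics come from the table; the specific values are illustrative assumptions, not SDK defaults.

```ts
// Hypothetical local mirror of the documented OpenAILLMConfig fields.
// In a real app you would import OpenAILLMConfig from the SDK instead.
interface OpenAILLMConfigSketch {
  apiKey?: string;
  proxyUrl?: string;
  model: string;
  organizationId?: string;
  stream?: boolean;
  systemPrompt?: string;
  temperature?: number;
  maxTokens?: number;
  stopSequences?: string[];
}

// Voice-tuned example: proxyUrl keeps the API key server-side,
// stream: true lets TTS start before generation finishes, and a
// small maxTokens cap keeps spoken responses short.
const voiceConfig: OpenAILLMConfigSketch = {
  proxyUrl: 'http://localhost:3000/api/proxy/openai',
  model: 'gpt-4o-mini',
  stream: true,
  systemPrompt: 'Answer in one or two short, conversational sentences.',
  temperature: 0.4,   // focused, low-variance answers
  maxTokens: 200,     // bounds TTS synthesis time
};

console.log(voiceConfig.model);
```

Note that `voiceConfig` deliberately omits `apiKey`: per the `proxyUrl` row, one of the two must be set, and the proxy route is the safer choice in a browser.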