GeminiLLM
Google Gemini LLM provider.
Defined in: src/providers/llm/gemini/GeminiLLM.ts:112
Remarks
A thin subclass of OpenAICompatibleLLM that configures defaults for Google’s Gemini API. All generation logic (streaming, non-streaming, abort handling, proxy support) is inherited from the base class.
Gemini models offer strong multimodal capabilities and competitive performance. The gemini-2.0-flash model (default) provides fast inference with good quality for voice assistant use cases.
Example
```ts
import { GeminiLLM } from 'composite-voice';

const llm = new GeminiLLM({
  geminiApiKey: process.env.GEMINI_API_KEY,
  model: 'gemini-2.0-flash',
  systemPrompt: 'You are a helpful voice assistant.',
});

await llm.initialize();
const stream = await llm.generate('What is the tallest mountain?');
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
await llm.dispose();
```
See
- GeminiLLMConfig for configuration options.
- OpenAICompatibleLLM for the base class.
- OpenAILLM for the OpenAI alternative.
Extends
- OpenAICompatibleLLM
Constructors
Constructor
new GeminiLLM(config, logger?): GeminiLLM;
Defined in: src/providers/llm/gemini/GeminiLLM.ts:127
Creates a new Gemini LLM provider instance.
Parameters
| Parameter | Type | Description |
|---|---|---|
| config | GeminiLLMConfig | Gemini provider configuration. Must include at least geminiApiKey/apiKey or proxyUrl. |
| logger? | Logger | Optional custom logger instance. |
Returns
GeminiLLM
Remarks
The constructor resolves the API key (preferring geminiApiKey over apiKey) and applies Gemini-specific defaults for baseURL and model.
Overrides
OpenAICompatibleLLM.constructor
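The key-resolution order described in the remarks can be sketched as a small standalone helper. The config shape here is illustrative, not the library's actual GeminiLLMConfig:

```typescript
// Illustrative sketch of the constructor's key resolution: the
// provider-specific geminiApiKey wins over the generic apiKey.
interface GeminiKeyConfig {
  geminiApiKey?: string;
  apiKey?: string;
  proxyUrl?: string;
}

function resolveGeminiApiKey(config: GeminiKeyConfig): string | undefined {
  // Prefer the provider-specific key; fall back to the generic one.
  return config.geminiApiKey ?? config.apiKey;
}
```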
Properties
| Property | Modifier | Type | Default value | Description | Overrides | Inherited from | Defined in |
|---|---|---|---|---|---|---|---|
| config | public | GeminiLLMConfig | undefined | LLM-specific provider configuration. | OpenAICompatibleLLM.config | - | src/providers/llm/gemini/GeminiLLM.ts:113 |
| initialized | protected | boolean | false | Tracks whether initialize has completed successfully. | - | OpenAICompatibleLLM.initialized | src/providers/base/BaseProvider.ts:97 |
| logger | protected | Logger | undefined | Scoped logger instance for this provider. | - | OpenAICompatibleLLM.logger | src/providers/base/BaseProvider.ts:94 |
| providerName | readonly | "GeminiLLM" | 'OpenAICompatibleLLM' | Display name used in log messages and errors. | OpenAICompatibleLLM.providerName | - | src/providers/llm/gemini/GeminiLLM.ts:114 |
| roles | readonly | readonly ProviderRole[] | undefined | LLM providers cover the 'llm' pipeline role by default. | - | OpenAICompatibleLLM.roles | src/providers/base/BaseLLMProvider.ts:77 |
| type | readonly | ProviderType | undefined | Communication transport this provider uses ('rest' or 'websocket'). | - | OpenAICompatibleLLM.type | src/providers/base/BaseProvider.ts:74 |
Accessors
isProxyMode
Get Signature
get protected isProxyMode(): boolean;
Defined in: src/providers/base/BaseProvider.ts:286
Whether the provider is in proxy mode.
Returns
boolean
true when proxyUrl is set.
Inherited from
OpenAICompatibleLLM.isProxyMode
Methods
assertAuth()
protected assertAuth(): void;
Defined in: src/providers/base/BaseProvider.ts:272
Validate that auth is configured (either apiKey or proxyUrl).
Returns
void
Remarks
Call this in onInitialize() for any provider that requires external authentication. Native providers (NativeSTT, NativeTTS) and in-browser providers (WebLLM) should NOT call this method.
Throws
ProviderInitializationError Thrown when neither apiKey nor proxyUrl is set.
Inherited from
OpenAICompatibleLLM.assertAuth
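The guard described above amounts to a simple either/or check. The following is a standalone sketch; the error class name mirrors the documented ProviderInitializationError but is a local stand-in here:

```typescript
// Sketch of the assertAuth guard: require either an API key or a
// proxy URL before the provider may talk to an external service.
class InitializationError extends Error {}

function assertAuth(
  config: { apiKey?: string; proxyUrl?: string },
  providerName: string
): void {
  if (!config.apiKey && !config.proxyUrl) {
    throw new InitializationError(
      `${providerName}: either apiKey or proxyUrl must be configured`
    );
  }
}
```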
assertReady()
protected assertReady(): void;
Defined in: src/providers/base/BaseProvider.ts:255
Guard that throws if the provider has not been initialized.
Returns
void
Remarks
Call at the start of any method that requires the provider to be ready.
Throws
Error Thrown with a descriptive message when initialized is false.
Inherited from
OpenAICompatibleLLM.assertReady
buildHeaders()
protected buildHeaders(): Record<string, string>;
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:184
Build provider-specific headers merged into every request.
Returns
Record<string, string>
An object of additional headers.
Remarks
Override in subclasses to inject provider-specific headers. The returned headers are merged on top of the base headers (Authorization, Content-Type).
Example
```ts
// In OpenAILLM:
protected override buildHeaders(): Record<string, string> {
  if (this.config.organizationId) {
    return { 'OpenAI-Organization': this.config.organizationId };
  }
  return {};
}
```
Inherited from
OpenAICompatibleLLM.buildHeaders
dispose()
dispose(): Promise<void>;
Defined in: src/providers/base/BaseProvider.ts:154
Clean up resources and dispose of the provider.
Returns
Promise<void>
Remarks
Delegates to the subclass hook onDispose and resets the initialized flag. If the provider is not initialized, the call is a no-op.
Throws
Re-throws any error raised by onDispose.
Inherited from
OpenAICompatibleLLM.dispose
generate()
generate(prompt, options?): Promise<AsyncIterable<string, any, any>>;
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:232
Generate a response from a single user prompt.
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | The user’s text input. |
| options? | LLMGenerationOptions | Optional generation overrides. |
Returns
Promise<AsyncIterable<string, any, any>>
An async iterable of text chunks.
Remarks
Required by the LLMProvider interface. Subclasses must implement this.
Inherited from
OpenAICompatibleLLM.generate
generateFromMessages()
generateFromMessages(messages, options?): Promise<AsyncIterable<string, any, any>>;
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:251
Generate an LLM response from a multi-turn conversation.
Parameters
| Parameter | Type |
|---|---|
| messages | LLMMessage[] |
| options? | LLMGenerationOptions |
Returns
Promise<AsyncIterable<string, any, any>>
Remarks
Converts messages to OpenAI’s ChatCompletionMessageParam format and dispatches to either the streaming or non-streaming code path.
Inherited from
OpenAICompatibleLLM.generateFromMessages
generateWithTools()
generateWithTools(messages, options?): Promise<AsyncIterable<LLMStreamChunk, any, any>>;
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:303
Generate with tool support using the OpenAI-compatible function calling format.
Parameters
| Parameter | Type |
|---|---|
| messages | LLMMessage[] |
| options? | LLMGenerationOptions & { tools?: LLMToolDefinition[]; } |
Returns
Promise<AsyncIterable<LLMStreamChunk, any, any>>
Remarks
Converts LLMToolDefinition to OpenAI’s { type: "function", function: {...} } format. Streaming responses yield LLMStreamChunk discriminated unions that separate text from tool invocations. All OpenAI-compatible providers (OpenAI, Groq, Gemini, Mistral) support this format.
Inherited from
OpenAICompatibleLLM.generateWithTools
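The conversion described above can be sketched as a standalone transformation. The LLMToolDefinition shape shown here is an assumption for illustration, not the library's exact type:

```typescript
// Illustrative conversion from a generic tool definition to OpenAI's
// { type: "function", function: {...} } function-calling format.
interface ToolDefinition {
  name: string;
  description?: string;
  parameters?: Record<string, unknown>; // JSON Schema for the arguments
}

function toOpenAITool(tool: ToolDefinition) {
  return {
    type: "function" as const,
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.parameters,
    },
  };
}
```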
getConfig()
getConfig(): LLMProviderConfig;
Defined in: src/providers/base/BaseLLMProvider.ts:247
Get a shallow copy of the current LLM configuration.
Returns
LLMProviderConfig
A new LLMProviderConfig object.
Inherited from
OpenAICompatibleLLM.getConfig
initialize()
initialize(): Promise<void>;
Defined in: src/providers/base/BaseProvider.ts:127
Initialize the provider, making it ready for use.
Returns
Promise<void>
Remarks
Calls the subclass hook onInitialize. If the provider has already been initialized the call is a no-op.
Throws
ProviderInitializationError Thrown when onInitialize rejects. The original error is wrapped with the provider class name for diagnostics.
Inherited from
OpenAICompatibleLLM.initialize
isReady()
isReady(): boolean;
Defined in: src/providers/base/BaseProvider.ts:178
Check whether the provider has been initialized and is ready.
Returns
boolean
true when initialize has completed successfully and dispose has not yet been called.
Inherited from
OpenAICompatibleLLM.isReady
isToolCall()
isToolCall(_chunk): boolean;
Defined in: src/providers/base/BaseLLMProvider.ts:179
Check whether a response chunk represents a tool call.
Parameters
| Parameter | Type | Description |
|---|---|---|
| _chunk | unknown | A response chunk to inspect. |
Returns
boolean
true when the chunk represents a tool call.
Remarks
The default implementation returns false. Tool-aware providers override this to detect tool invocations in the response stream.
Inherited from
OpenAICompatibleLLM.isToolCall
mergeOptions()
protected mergeOptions(options?): LLMGenerationOptions;
Defined in: src/providers/base/BaseLLMProvider.ts:224
Merge per-call generation options with the provider’s config defaults.
Parameters
| Parameter | Type | Description |
|---|---|---|
| options? | LLMGenerationOptions | Optional per-call overrides. |
Returns
LLMGenerationOptions
A merged LLMGenerationOptions object.
Remarks
Values supplied in options take precedence over values in config. Only defined values are included in the result, allowing providers to distinguish “not set” from explicit values.
Inherited from
OpenAICompatibleLLM.mergeOptions
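The "only defined values" rule described above can be sketched as follows; the options shape is a simplified stand-in for LLMGenerationOptions:

```typescript
// Sketch of the merge rule: per-call options override config defaults,
// and keys whose value is undefined are dropped, so "not set" stays
// distinguishable from an explicit value.
interface GenerationOptions {
  temperature?: number;
  maxTokens?: number;
}

function mergeOptions(
  defaults: GenerationOptions,
  options?: GenerationOptions
): GenerationOptions {
  const merged: GenerationOptions = {};
  // Later sources win; undefined values never overwrite earlier ones.
  for (const source of [defaults, options ?? {}]) {
    for (const [key, value] of Object.entries(source)) {
      if (value !== undefined) {
        (merged as Record<string, unknown>)[key] = value;
      }
    }
  }
  return merged;
}
```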
onConfigUpdate()
protected onConfigUpdate(_config): void;
Defined in: src/providers/base/BaseProvider.ts:242
Hook called after updateConfig merges new values.
Parameters
| Parameter | Type | Description |
|---|---|---|
| _config | Partial<BaseProviderConfig> | The partial configuration that was merged. |
Returns
void
Remarks
The default implementation is a no-op. Override in subclasses to react to runtime configuration changes (e.g. reconnect with a new API key).
Inherited from
OpenAICompatibleLLM.onConfigUpdate
onDispose()
protected onDispose(): Promise<void>;
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:227
Dispose of the HTTP client.
Returns
Promise<void>
Inherited from
OpenAICompatibleLLM.onDispose
onInitialize()
protected onInitialize(): Promise<void>;
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:194
Initialize the HTTP client for the OpenAI-compatible API.
Returns
Promise<void>
Throws
ProviderInitializationError Thrown if neither apiKey nor proxyUrl is configured.
Inherited from
OpenAICompatibleLLM.onInitialize
processMessages()
processMessages(messages, options?): Promise<AsyncIterable<string, any, any>>;
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:237
Process a conversation and generate a response.
Parameters
| Parameter | Type | Description |
|---|---|---|
| messages | LLMMessage[] | Ordered array of LLMMessage objects representing the conversation history. |
| options? | LLMGenerationOptions | Optional generation overrides. |
Returns
Promise<AsyncIterable<string, any, any>>
An AsyncIterable that yields text chunks as they arrive.
Remarks
Interface: Receive Text -> Send Text. The primary handler method. Returns an AsyncIterable that yields text chunks. When streaming is enabled, multiple chunks are yielded as tokens arrive. When streaming is disabled, a single chunk containing the full response is yielded.
Inherited from
OpenAICompatibleLLM.processMessages
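From the caller's side, the streaming and non-streaming cases look identical: the same `for await` loop consumes one chunk or many. A minimal self-contained sketch (the demo generator stands in for the provider's stream):

```typescript
// A stand-in for the AsyncIterable a provider returns. A streaming
// provider yields many small chunks like this; a non-streaming one
// would yield a single chunk with the full response.
async function* demoStream(): AsyncIterable<string> {
  yield "Hello, ";
  yield "world!";
}

// Collect all chunks into the final response text.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk;
  }
  return text;
}
```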
processText()
processText(prompt, options?): Promise<AsyncIterable<string, any, any>>;
Defined in: src/providers/base/BaseLLMProvider.ts:160
Process a single text prompt (convenience wrapper).
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | The user’s input text. |
| options? | LLMGenerationOptions | Optional generation overrides. |
Returns
Promise<AsyncIterable<string, any, any>>
An AsyncIterable that yields text chunks as they arrive.
Remarks
Converts the prompt to a messages array (optionally prepending a system message from config) and delegates to processMessages.
Inherited from
OpenAICompatibleLLM.processText
promptToMessages()
protected promptToMessages(prompt): LLMMessage[];
Defined in: src/providers/base/BaseLLMProvider.ts:195
Convert a plain-text prompt into an LLMMessage array.
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | The user’s input text. |
Returns
LLMMessage[]
A messages array suitable for processMessages.
Remarks
If the provider’s config includes a systemPrompt, it is prepended as a system message. The prompt itself becomes a user message.
Inherited from
OpenAICompatibleLLM.promptToMessages
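The conversion described above can be sketched as a standalone helper; the message shape is an assumption matching typical chat APIs, not the library's exact LLMMessage type:

```typescript
// Sketch of promptToMessages: prepend the configured system prompt
// (if any) and wrap the user's prompt as a user message.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function promptToMessages(prompt: string, systemPrompt?: string): ChatMessage[] {
  const messages: ChatMessage[] = [];
  if (systemPrompt) {
    messages.push({ role: "system", content: systemPrompt });
  }
  messages.push({ role: "user", content: prompt });
  return messages;
}
```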
resolveApiKey()
protected resolveApiKey(): string;
Defined in: src/providers/base/BaseProvider.ts:325
Resolve the API key for this provider.
Returns
string
The configured API key, or 'proxy' in proxy mode.
Remarks
Returns 'proxy' in proxy mode so that SDK clients (which require a non-empty API key string) can be instantiated without the real key.
Inherited from
OpenAICompatibleLLM.resolveApiKey
resolveAuthHeader()
protected resolveAuthHeader(defaultAuthType?): string | undefined;
Defined in: src/providers/base/BaseProvider.ts:366
Resolve Authorization header value for the configured auth type.
Parameters
| Parameter | Type | Default value | Description |
|---|---|---|---|
| defaultAuthType | "token" \| "bearer" | 'token' | The default auth type for this provider. |
Returns
string | undefined
The Authorization header value, or undefined in proxy mode.
Remarks
Returns the header value for REST or server-side WebSocket connections:
- 'token' → 'Token <apiKey>'
- 'bearer' → 'Bearer <apiKey>'
Returns undefined in proxy mode.
Inherited from
OpenAICompatibleLLM.resolveAuthHeader
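The mapping above can be sketched as a standalone function under the same assumptions (proxy mode means no client-side credentials):

```typescript
// Sketch of resolveAuthHeader: format the Authorization value for the
// configured auth type, or return undefined in proxy mode, where the
// proxy injects credentials server-side.
type AuthType = "token" | "bearer";

function resolveAuthHeader(
  config: { apiKey?: string; proxyUrl?: string },
  authType: AuthType = "token"
): string | undefined {
  if (config.proxyUrl || !config.apiKey) return undefined;
  return authType === "bearer"
    ? `Bearer ${config.apiKey}`
    : `Token ${config.apiKey}`;
}
```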
resolveBaseUrl()
protected resolveBaseUrl(defaultUrl?): string | undefined;
Defined in: src/providers/base/BaseProvider.ts:307
Resolve the base URL for this provider.
Parameters
| Parameter | Type | Description |
|---|---|---|
| defaultUrl? | string | The provider’s default API URL. Pass undefined to let the underlying SDK use its own default. |
Returns
string | undefined
The resolved URL, or undefined when all sources are unset.
Remarks
Priority: proxyUrl > endpoint > defaultUrl.
For WebSocket providers (this.type === 'websocket'), the proxy URL’s http(s) scheme is automatically converted to ws(s).
When no URL is configured and defaultUrl is undefined, the return value is undefined — this lets SDK-based providers (Anthropic, OpenAI) fall back to their own built-in defaults.
Inherited from
OpenAICompatibleLLM.resolveBaseUrl
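The priority chain and WebSocket scheme rewrite described above can be sketched as follows; the config shape and parameter names are illustrative:

```typescript
// Sketch of resolveBaseUrl: priority is proxyUrl > endpoint > defaultUrl.
// For WebSocket providers, the proxy URL's http(s) scheme is rewritten
// to ws(s). Returns undefined when every source is unset, so SDK-based
// providers can fall back to their own built-in defaults.
function resolveBaseUrl(
  config: { proxyUrl?: string; endpoint?: string },
  defaultUrl?: string,
  type: "rest" | "websocket" = "rest"
): string | undefined {
  const url = config.proxyUrl ?? config.endpoint ?? defaultUrl;
  if (url !== undefined && type === "websocket" && config.proxyUrl) {
    return url.replace(/^http/, "ws"); // https:// -> wss://, http:// -> ws://
  }
  return url;
}
```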
resolveWsProtocols()
protected resolveWsProtocols(defaultAuthType?): string[] | undefined;
Defined in: src/providers/base/BaseProvider.ts:343
Resolve WebSocket subprotocol for authentication.
Parameters
| Parameter | Type | Default value | Description |
|---|---|---|---|
| defaultAuthType | "token" \| "bearer" | 'token' | The default auth type for this provider. |
Returns
string[] | undefined
Subprotocol array for new WebSocket(url, protocols), or undefined.
Remarks
Returns the subprotocol array for direct mode based on authType:
- 'token' → ['token', apiKey] (Deepgram default)
- 'bearer' → ['bearer', apiKey] (OAuth/Bearer tokens)
Returns undefined in proxy mode (no client-side auth needed).
Inherited from
OpenAICompatibleLLM.resolveWsProtocols
updateConfig()
updateConfig(config): void;
Defined in: src/providers/base/BaseProvider.ts:201
Merge partial configuration updates into the current config.
Parameters
| Parameter | Type | Description |
|---|---|---|
| config | Partial<BaseProviderConfig> | A partial configuration object whose keys will overwrite existing values. |
Returns
void
Remarks
After merging, the subclass hook onConfigUpdate is called so providers can react to changed values at runtime.