GroqLLM

Groq LLM provider for ultra-fast inference.

Defined in: src/providers/llm/groq/GroqLLM.ts:108

Remarks

A thin subclass of OpenAICompatibleLLM that configures defaults for Groq’s API. All generation logic (streaming, non-streaming, abort handling, proxy support) is inherited from the base class.

Groq supports a wide range of open-source models including LLaMA, Mixtral, and Gemma, all served through their custom LPU hardware for extremely fast token generation.

Example

import { GroqLLM } from 'composite-voice';

const llm = new GroqLLM({
  groqApiKey: process.env.GROQ_API_KEY,
  model: 'llama-3.3-70b-versatile',
  systemPrompt: 'You are a fast and helpful voice assistant.',
});
await llm.initialize();

const stream = await llm.generate('Explain photosynthesis in one sentence.');
for await (const chunk of stream) {
  process.stdout.write(chunk);
}

await llm.dispose();

Extends

OpenAICompatibleLLM
Constructors

Constructor

new GroqLLM(config, logger?): GroqLLM;

Defined in: src/providers/llm/groq/GroqLLM.ts:123

Creates a new Groq LLM provider instance.

Parameters

  • config (GroqLLMConfig): Groq provider configuration. Must include at least groqApiKey/apiKey or proxyUrl.
  • logger? (Logger): Optional custom logger instance.

Returns

GroqLLM

Remarks

The constructor resolves the API key (preferring groqApiKey over apiKey) and applies Groq-specific defaults for baseURL and model.
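The resolution order can be sketched as follows. Note that resolveGroqKey is a hypothetical helper written for illustration, not part of the public API:

```typescript
// Hypothetical sketch of the key resolution described above:
// groqApiKey takes precedence over the generic apiKey.
interface GroqKeyConfig {
  groqApiKey?: string;
  apiKey?: string;
  proxyUrl?: string;
}

function resolveGroqKey(config: GroqKeyConfig): string | undefined {
  return config.groqApiKey ?? config.apiKey;
}
```

When neither key is set, the result is undefined and authentication must come from proxyUrl instead.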

Overrides

OpenAICompatibleLLM.constructor

Properties

  • config (public, GroqLLMConfig): LLM-specific provider configuration. Overrides OpenAICompatibleLLM.config. Defined in src/providers/llm/groq/GroqLLM.ts:109
  • initialized (protected, boolean, default false): Tracks whether initialize has completed successfully. Inherited from OpenAICompatibleLLM.initialized. Defined in src/providers/base/BaseProvider.ts:97
  • logger (protected, Logger): Scoped logger instance for this provider. Inherited from OpenAICompatibleLLM.logger. Defined in src/providers/base/BaseProvider.ts:94
  • providerName (readonly, "GroqLLM", default 'OpenAICompatibleLLM'): Display name used in log messages and errors. Overrides OpenAICompatibleLLM.providerName. Defined in src/providers/llm/groq/GroqLLM.ts:110
  • roles (readonly, readonly ProviderRole[]): LLM providers cover the 'llm' pipeline role by default. Inherited from OpenAICompatibleLLM.roles. Defined in src/providers/base/BaseLLMProvider.ts:77
  • type (readonly, ProviderType): Communication transport this provider uses ('rest' or 'websocket'). Inherited from OpenAICompatibleLLM.type. Defined in src/providers/base/BaseProvider.ts:74

Accessors

isProxyMode

Get Signature

protected get isProxyMode(): boolean;

Defined in: src/providers/base/BaseProvider.ts:286

Whether the provider is in proxy mode.

Returns

boolean

true when proxyUrl is set.

Inherited from

OpenAICompatibleLLM.isProxyMode

Methods

assertAuth()

protected assertAuth(): void;

Defined in: src/providers/base/BaseProvider.ts:272

Validate that auth is configured (either apiKey or proxyUrl).

Returns

void

Remarks

Call this in onInitialize() for any provider that requires external authentication. Native providers (NativeSTT, NativeTTS) and in-browser providers (WebLLM) should NOT call this method.

Throws

ProviderInitializationError Thrown when neither apiKey nor proxyUrl is set.

Inherited from

OpenAICompatibleLLM.assertAuth


assertReady()

protected assertReady(): void;

Defined in: src/providers/base/BaseProvider.ts:255

Guard that throws if the provider has not been initialized.

Returns

void

Remarks

Call at the start of any method that requires the provider to be ready.

Throws

Error Thrown with a descriptive message when initialized is false.

Inherited from

OpenAICompatibleLLM.assertReady


buildHeaders()

protected buildHeaders(): Record<string, string>;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:184

Build provider-specific headers merged into every request.

Returns

Record<string, string>

An object of additional headers.

Remarks

Override in subclasses to inject provider-specific headers. The returned headers are merged on top of the base headers (Authorization, Content-Type).

Example

// In OpenAILLM:
protected override buildHeaders(): Record<string, string> {
  if (this.config.organizationId) {
    return { 'OpenAI-Organization': this.config.organizationId };
  }
  return {};
}

Inherited from

OpenAICompatibleLLM.buildHeaders


dispose()

dispose(): Promise<void>;

Defined in: src/providers/base/BaseProvider.ts:154

Clean up resources and dispose of the provider.

Returns

Promise<void>

Remarks

Delegates to the subclass hook onDispose and resets the initialized flag. If the provider is not initialized, the call is a no-op.

Throws

Re-throws any error raised by onDispose.

Inherited from

OpenAICompatibleLLM.dispose


generate()

generate(prompt, options?): Promise<AsyncIterable<string, any, any>>;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:232

Generate a response from a single user prompt.

Parameters

  • prompt (string): The user’s text input.
  • options? (LLMGenerationOptions): Optional generation overrides.

Returns

Promise<AsyncIterable<string, any, any>>

An async iterable of text chunks.

Remarks

Required by the LLMProvider interface. Subclasses must implement this.

Inherited from

OpenAICompatibleLLM.generate


generateFromMessages()

generateFromMessages(messages, options?): Promise<AsyncIterable<string, any, any>>;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:251

Generate an LLM response from a multi-turn conversation.

Parameters

  • messages (LLMMessage[])
  • options? (LLMGenerationOptions)

Returns

Promise<AsyncIterable<string, any, any>>

Remarks

Converts messages to OpenAI’s ChatCompletionMessageParam format and dispatches to either the streaming or non-streaming code path.
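The message shape and stream consumption can be sketched without a live call. The LLMMessage fields and role names below are assumptions based on the usual chat-completions convention, and fakeStream stands in for the provider's real response stream:

```typescript
// Assumed shape of LLMMessage (chat-completions convention).
type LLMMessage = { role: "system" | "user" | "assistant"; content: string };

// Illustrative multi-turn history in the shape generateFromMessages expects.
const messages: LLMMessage[] = [
  { role: "system", content: "You are a concise voice assistant." },
  { role: "user", content: "What is the capital of France?" },
  { role: "assistant", content: "Paris." },
  { role: "user", content: "And of Spain?" },
];

// Stand-in for the AsyncIterable<string> the method resolves to.
async function* fakeStream(): AsyncIterable<string> {
  yield "Madrid";
  yield ".";
}

// Collect streamed chunks into the full response text.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) text += chunk;
  return text;
}
```

The same collect pattern works for the iterable returned by generate, processText, and processMessages.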

Inherited from

OpenAICompatibleLLM.generateFromMessages


generateWithTools()

generateWithTools(messages, options?): Promise<AsyncIterable<LLMStreamChunk, any, any>>;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:303

Generate with tool support using the OpenAI-compatible function calling format.

Parameters

  • messages (LLMMessage[])
  • options? (LLMGenerationOptions & { tools?: LLMToolDefinition[]; })

Returns

Promise<AsyncIterable<LLMStreamChunk, any, any>>

Remarks

Converts LLMToolDefinition to OpenAI’s { type: "function", function: {...} } format. Streaming responses yield LLMStreamChunk discriminated unions that separate text from tool invocations. All OpenAI-compatible providers (OpenAI, Groq, Gemini, Mistral) support this format.
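The chunk-handling pattern can be illustrated with a self-contained sketch. The exact field names of LLMStreamChunk used here (type, text, name, arguments) are assumptions for illustration, not the library's confirmed shape:

```typescript
// Assumed discriminated union separating text from tool invocations.
type LLMStreamChunk =
  | { type: "text"; text: string }
  | { type: "tool_call"; name: string; arguments: string };

// Route each chunk by its discriminant: accumulate text, record tool names.
function splitChunks(chunks: LLMStreamChunk[]): { text: string; toolCalls: string[] } {
  let text = "";
  const toolCalls: string[] = [];
  for (const chunk of chunks) {
    if (chunk.type === "text") text += chunk.text;
    else toolCalls.push(chunk.name);
  }
  return { text, toolCalls };
}
```

In real use the same branching would live inside a for await loop over the stream returned by generateWithTools.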

Inherited from

OpenAICompatibleLLM.generateWithTools


getConfig()

getConfig(): LLMProviderConfig;

Defined in: src/providers/base/BaseLLMProvider.ts:247

Get a shallow copy of the current LLM configuration.

Returns

LLMProviderConfig

A new LLMProviderConfig object.

Inherited from

OpenAICompatibleLLM.getConfig


initialize()

initialize(): Promise<void>;

Defined in: src/providers/base/BaseProvider.ts:127

Initialize the provider, making it ready for use.

Returns

Promise<void>

Remarks

Calls the subclass hook onInitialize. If the provider has already been initialized the call is a no-op.

Throws

ProviderInitializationError Thrown when onInitialize rejects. The original error is wrapped with the provider class name for diagnostics.

Inherited from

OpenAICompatibleLLM.initialize


isReady()

isReady(): boolean;

Defined in: src/providers/base/BaseProvider.ts:178

Check whether the provider has been initialized and is ready.

Returns

boolean

true when initialize has completed successfully and dispose has not yet been called.

Inherited from

OpenAICompatibleLLM.isReady


isToolCall()

isToolCall(_chunk): boolean;

Defined in: src/providers/base/BaseLLMProvider.ts:179

Check whether a response chunk represents a tool call.

Parameters

  • _chunk (unknown): A response chunk to inspect.

Returns

boolean

true when the chunk represents a tool call.

Remarks

The default implementation returns false. Tool-aware providers override this to detect tool invocations in the response stream.

Inherited from

OpenAICompatibleLLM.isToolCall


mergeOptions()

protected mergeOptions(options?): LLMGenerationOptions;

Defined in: src/providers/base/BaseLLMProvider.ts:224

Merge per-call generation options with the provider’s config defaults.

Parameters

  • options? (LLMGenerationOptions): Optional per-call overrides.

Returns

LLMGenerationOptions

A merged LLMGenerationOptions object.

Remarks

Values supplied in options take precedence over values in config. Only defined values are included in the result, allowing providers to distinguish “not set” from explicit values.
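The precedence rule can be sketched with a simplified stand-in (not the library's actual implementation):

```typescript
interface GenOptions {
  temperature?: number;
  maxTokens?: number;
}

// Per-call options win over config defaults, and only defined values
// are copied, so "not set" stays distinguishable from explicit values.
function mergeOptions(defaults: GenOptions, options: GenOptions = {}): GenOptions {
  const merged: GenOptions = {};
  for (const source of [defaults, options]) {
    for (const [key, value] of Object.entries(source)) {
      if (value !== undefined) (merged as Record<string, unknown>)[key] = value;
    }
  }
  return merged;
}
```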

Inherited from

OpenAICompatibleLLM.mergeOptions


onConfigUpdate()

protected onConfigUpdate(_config): void;

Defined in: src/providers/base/BaseProvider.ts:242

Hook called after updateConfig merges new values.

Parameters

  • _config (Partial<BaseProviderConfig>): The partial configuration that was merged.

Returns

void

Remarks

The default implementation is a no-op. Override in subclasses to react to runtime configuration changes (e.g. reconnect with a new API key).

Inherited from

OpenAICompatibleLLM.onConfigUpdate


onDispose()

protected onDispose(): Promise<void>;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:227

Dispose of the HTTP client.

Returns

Promise<void>

Inherited from

OpenAICompatibleLLM.onDispose


onInitialize()

protected onInitialize(): Promise<void>;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:194

Initialize the HTTP client for the OpenAI-compatible API.

Returns

Promise<void>

Throws

ProviderInitializationError Thrown if neither apiKey nor proxyUrl is configured.

Inherited from

OpenAICompatibleLLM.onInitialize


processMessages()

processMessages(messages, options?): Promise<AsyncIterable<string, any, any>>;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:237

Process a conversation and generate a response.

Parameters

  • messages (LLMMessage[]): Ordered array of LLMMessage objects representing the conversation history.
  • options? (LLMGenerationOptions): Optional generation overrides.

Returns

Promise<AsyncIterable<string, any, any>>

An AsyncIterable that yields text chunks as they arrive.

Remarks

Interface: Receive Text -> Send Text. The primary handler method. Returns an AsyncIterable that yields text chunks. When streaming is enabled, multiple chunks are yielded as tokens arrive. When streaming is disabled, a single chunk containing the full response is yielded.

Inherited from

OpenAICompatibleLLM.processMessages


processText()

processText(prompt, options?): Promise<AsyncIterable<string, any, any>>;

Defined in: src/providers/base/BaseLLMProvider.ts:160

Process a single text prompt (convenience wrapper).

Parameters

  • prompt (string): The user’s input text.
  • options? (LLMGenerationOptions): Optional generation overrides.

Returns

Promise<AsyncIterable<string, any, any>>

An AsyncIterable that yields text chunks as they arrive.

Remarks

Converts the prompt to a messages array (optionally prepending a system message from config) and delegates to processMessages.

Inherited from

OpenAICompatibleLLM.processText


promptToMessages()

protected promptToMessages(prompt): LLMMessage[];

Defined in: src/providers/base/BaseLLMProvider.ts:195

Convert a plain-text prompt into an LLMMessage array.

Parameters

  • prompt (string): The user’s input text.

Returns

LLMMessage[]

A messages array suitable for processMessages.

Remarks

If the provider’s config includes a systemPrompt, it is prepended as a system message. The prompt itself becomes a user message.
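The conversion can be sketched as follows; the LLMMessage field names are assumed from the chat-completions convention:

```typescript
type LLMMessage = { role: "system" | "user" | "assistant"; content: string };

// Prepend the configured system prompt (if any), then the user prompt.
function promptToMessages(prompt: string, systemPrompt?: string): LLMMessage[] {
  const messages: LLMMessage[] = [];
  if (systemPrompt) messages.push({ role: "system", content: systemPrompt });
  messages.push({ role: "user", content: prompt });
  return messages;
}
```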

Inherited from

OpenAICompatibleLLM.promptToMessages


resolveApiKey()

protected resolveApiKey(): string;

Defined in: src/providers/base/BaseProvider.ts:325

Resolve the API key for this provider.

Returns

string

The configured API key, or 'proxy' in proxy mode.

Remarks

Returns 'proxy' in proxy mode so that SDK clients (which require a non-empty API key string) can be instantiated without the real key.

Inherited from

OpenAICompatibleLLM.resolveApiKey


resolveAuthHeader()

protected resolveAuthHeader(defaultAuthType?): string | undefined;

Defined in: src/providers/base/BaseProvider.ts:366

Resolve Authorization header value for the configured auth type.

Parameters

  • defaultAuthType ("token" | "bearer", default 'token'): The default auth type for this provider.

Returns

string | undefined

The Authorization header value, or undefined in proxy mode.

Remarks

Returns the header value for REST or server-side WebSocket connections:

  • 'token' → 'Token <apiKey>'
  • 'bearer' → 'Bearer <apiKey>'

Returns undefined in proxy mode.
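The mapping can be sketched with a simplified stand-in for the real method:

```typescript
// Proxy mode (or a missing key) yields no Authorization header;
// otherwise the auth type selects the 'Token'/'Bearer' scheme prefix.
function resolveAuthHeader(
  apiKey: string | undefined,
  authType: "token" | "bearer" = "token",
  proxyMode = false
): string | undefined {
  if (proxyMode || !apiKey) return undefined;
  return authType === "bearer" ? `Bearer ${apiKey}` : `Token ${apiKey}`;
}
```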

Inherited from

OpenAICompatibleLLM.resolveAuthHeader


resolveBaseUrl()

protected resolveBaseUrl(defaultUrl?): string | undefined;

Defined in: src/providers/base/BaseProvider.ts:307

Resolve the base URL for this provider.

Parameters

  • defaultUrl? (string): The provider’s default API URL. Pass undefined to let the underlying SDK use its own default.

Returns

string | undefined

The resolved URL, or undefined when all sources are unset.

Remarks

Priority: proxyUrl > endpoint > defaultUrl.

For WebSocket providers (this.type === 'websocket'), the proxy URL’s http(s) scheme is automatically converted to ws(s).

When no URL is configured and defaultUrl is undefined, the return value is undefined — this lets SDK-based providers (Anthropic, OpenAI) fall back to their own built-in defaults.
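The priority chain and ws(s) conversion described above can be sketched with a simplified stand-in (the real method lives on the base provider and reads its own config):

```typescript
interface UrlConfig {
  proxyUrl?: string;
  endpoint?: string;
}

// Priority: proxyUrl > endpoint > defaultUrl. For WebSocket providers,
// a proxy URL's http(s) scheme is converted to ws(s).
function resolveBaseUrl(
  config: UrlConfig,
  type: "rest" | "websocket",
  defaultUrl?: string
): string | undefined {
  const url = config.proxyUrl ?? config.endpoint ?? defaultUrl;
  if (url && type === "websocket" && config.proxyUrl) {
    return url.replace(/^http/, "ws"); // https:// becomes wss://
  }
  return url;
}
```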

Inherited from

OpenAICompatibleLLM.resolveBaseUrl


resolveWsProtocols()

protected resolveWsProtocols(defaultAuthType?): string[] | undefined;

Defined in: src/providers/base/BaseProvider.ts:343

Resolve WebSocket subprotocol for authentication.

Parameters

  • defaultAuthType ("token" | "bearer", default 'token'): The default auth type for this provider.

Returns

string[] | undefined

Subprotocol array for new WebSocket(url, protocols), or undefined.

Remarks

Returns the subprotocol array for direct mode based on authType:

  • 'token' → ['token', apiKey] (Deepgram default)
  • 'bearer' → ['bearer', apiKey] (OAuth/Bearer tokens)

Returns undefined in proxy mode (no client-side auth needed).

Inherited from

OpenAICompatibleLLM.resolveWsProtocols


updateConfig()

updateConfig(config): void;

Defined in: src/providers/base/BaseProvider.ts:201

Merge partial configuration updates into the current config.

Parameters

  • config (Partial<BaseProviderConfig>): A partial configuration object whose keys will overwrite existing values.

Returns

void

Remarks

After merging, the subclass hook onConfigUpdate is called so providers can react to changed values at runtime.
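The merge-then-notify behavior can be sketched as a stand-alone function (not the actual base-class code, which mutates the provider's own config):

```typescript
interface ProviderConfig {
  apiKey?: string;
  model?: string;
}

// Shallow-merge the update over the current config, then invoke the
// onConfigUpdate hook with the patch so subclasses can react.
function updateConfig(
  current: ProviderConfig,
  update: Partial<ProviderConfig>,
  onConfigUpdate: (patch: Partial<ProviderConfig>) => void
): ProviderConfig {
  const next = { ...current, ...update };
  onConfigUpdate(update);
  return next;
}
```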

Inherited from

OpenAICompatibleLLM.updateConfig

© 2026 CompositeVoice. All rights reserved.
