# LLMetrics SDK

Track LLM token usage in a few lines of code. Install the package, initialize it once, and report token usage after each OpenAI, Anthropic, or other provider call. The SDK batches events automatically and keeps the integration lightweight.
1. **Install the package.** Add the SDK to the service where your LLM calls run.
2. **Initialize once.** Call `llmetrics.init` at process startup with your API key.
3. **Track after each model call.** Send the feature, provider, model, and token counts after every request.
4. **Flush before shutdown.** Call `llmetrics.flush` in short-lived jobs or before your process exits.
## Quickstart
Track a request right after the model call returns. The main flow is: initialize once, then call `llmetrics.track` with the token counts returned by your provider SDK.
```ts
import OpenAI from 'openai';
import { llmetrics } from '@llmetrics/sdk';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

llmetrics.init({
  apiKey: process.env.LLMETRICS_API_KEY!,
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Summarize this lesson.' }],
});

llmetrics.track({
  feature: 'lesson-generation',
  provider: 'openai',
  model: response.model,
  inputTokens: response.usage.prompt_tokens,
  outputTokens: response.usage.completion_tokens,
  userId: 'user_123',
  meta: { promptVersion: 2 },
});
```

## Installation
Add the SDK with your package manager of choice.
```sh
npm install @llmetrics/sdk
```

```sh
yarn add @llmetrics/sdk
```

```sh
pnpm add @llmetrics/sdk
```

## Initialization
Initialize once with a dashboard API key.
Call `llmetrics.init` during startup. If the API key is missing, initialization throws immediately so the integration fails loudly in development.
| Field | Type | Default / Required | Description |
|---|---|---|---|
| `apiKey` | string | Required | Your LLMetrics API key from the dashboard. |
| `flushIntervalMs` | number | `1500` | How often queued events are flushed automatically, in milliseconds. |
| `maxQueueSize` | number | `50` | Flush immediately once the queue reaches this many events. |
| `timeoutMs` | number | `2000` | Timeout for each ingest request, in milliseconds. |
| `debug` | boolean | `false` | Log flush failures to the console. |
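For example, a batch job that tolerates slower reporting might tune these options like so (the values shown are illustrative, not recommendations):

```ts
import { llmetrics } from '@llmetrics/sdk';

llmetrics.init({
  apiKey: process.env.LLMETRICS_API_KEY!,
  flushIntervalMs: 5000, // flush queued events every 5 s instead of every 1.5 s
  maxQueueSize: 100,     // or as soon as 100 events are queued
  timeoutMs: 3000,       // allow a slower ingest endpoint
  debug: true,           // log flush failures while integrating
});
```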
## Tracking
Use queued tracking by default, async tracking in short-lived runtimes.
### `llmetrics.track(event)`
Fire-and-forget tracking. Events are added to an in-memory queue and flushed automatically based on queue size or interval. This call never throws.
### `llmetrics.trackAsync(event)`
Sends immediately and throws on failure. This is the safer choice in serverless handlers, cron jobs, and other runtimes that may exit before the queue flushes.
```ts
await llmetrics.trackAsync({
  feature: 'summarize',
  provider: 'anthropic',
  model: response.model,
  inputTokens: response.usage.input_tokens,
  outputTokens: response.usage.output_tokens,
});

await llmetrics.flush();
```

## Reference
### Event payload fields
The dashboard groups by feature, provider, and model, so those fields should be stable identifiers from your application and provider response objects.
| Field | Type | Required | Description |
|---|---|---|---|
| `feature` | string | Yes | Logical feature name used for dashboard grouping. |
| `provider` | string | Yes | Provider slug such as `openai` or `anthropic`. |
| `model` | string | Yes | Model identifier returned by your provider. |
| `inputTokens` | number | Yes | Prompt or input token count. |
| `outputTokens` | number | Yes | Completion or output token count. |
| `userId` | string | No | Your internal user identifier for per-user cost analysis. |
| `ts` | number | No | Unix timestamp in milliseconds. Defaults to `Date.now()`. |
| `meta` | object | No | Extra metadata stored alongside the event. |
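For reference, a fully populated event might look like this (the feature name, user ID, and metadata values are illustrative):

```ts
llmetrics.track({
  feature: 'chat-support',    // stable identifier from your application
  provider: 'openai',
  model: 'gpt-4o-mini',       // as returned by the provider response
  inputTokens: 812,
  outputTokens: 164,
  userId: 'user_42',          // optional: enables per-user cost analysis
  ts: Date.now(),             // optional: defaults to Date.now()
  meta: { promptVersion: 3 }, // optional: stored alongside the event
});
```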