See exactly what your LLM usage costs.

Track token spend, detect spikes, and optimize model usage across your AI features. SDK in 60 seconds.

Total spend
$142.38
Events
84,291
Input tokens
12.4M
Output tokens
3.1M
Daily spend, Mar 1 – Mar 14

The problem

LLM costs grow silently.

Token usage spikes

A single bad prompt or runaway loop can spike your bill overnight with no warning.

Model choices drift

Engineers swap models without realizing the cost difference. Expensive models sneak into cheap feature paths.

Your bill surprises you

By the time you see the invoice, the damage is done. There's no way to trace which feature caused it.

The solution

Full visibility into every token.

Real-time cost tracking

See spend update as events come in.

Cost by feature

Know exactly which AI feature costs what.

Cost by model

Compare GPT-4o vs Claude vs Gemini side by side.

Trend visualization

Daily spend charts with full history.

Spike alerts

Email and Slack alerts when spend exceeds thresholds.

Multi-org support

Separate workspaces for every product or team.

SDK in 60 seconds

One init call, one track call. Done.

Usage limits

Set per-feature thresholds before you overspend.

How it works

Up and running in minutes.

01

Install the SDK

One npm install and you're set. Works with any Node.js or edge runtime.

02

Send usage events

Call llmetrics.track() after each LLM response. Fire-and-forget: it won't slow your app.
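The fire-and-forget behavior can be pictured as a small in-memory buffer: the track call is a synchronous append, and a separate flush ships events in batches. This is an illustrative sketch only, assuming a buffer-and-flush design, not the actual @llmetrics/sdk internals:

```typescript
// Sketch of fire-and-forget batching (illustration, not the real SDK internals).
type UsageEvent = {
  feature: string;
  inputTokens: number;
  outputTokens: number;
};

class EventBuffer {
  private queue: UsageEvent[] = [];

  // track() only appends to an in-memory queue, so the caller
  // returns immediately and is never blocked on network I/O.
  track(event: UsageEvent): void {
    this.queue.push(event);
  }

  // A background timer (or shutdown hook) drains the queue and
  // ships everything in one batched request. Returns the batch size.
  flush(send: (batch: UsageEvent[]) => void): number {
    const batch = this.queue.splice(0);
    if (batch.length > 0) send(batch);
    return batch.length;
  }
}
```

The key point is that the hot path is a synchronous array push; any network work happens later, off the request path.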

03

See live dashboard

Open your dashboard and watch spend, tokens, and feature breakdowns update in real time.

Developer-first

Two calls. That's it.

No agents, no wrappers. Drop it into your existing LLM calls.

lesson.ts
import { llmetrics } from "@llmetrics/sdk";

llmetrics.init({
  apiKey: process.env.LLMETRICS_API_KEY,
});

// Call your LLM as normal...
const response = await openai.chat.completions.create({ ... });

// Then track it. Fire-and-forget.
llmetrics.track({
  feature: "lesson-generation",
  provider: "openai",
  model: "gpt-4o-mini",
  inputTokens: response.usage.prompt_tokens,
  outputTokens: response.usage.completion_tokens,
});

Supports 100+ models across OpenAI, Anthropic, and more — pricing synced daily.
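As a rough illustration of how token counts map to dollars, here is a sketch using hypothetical per-million-token rates. The rates below are placeholder numbers for the example, not the daily-synced pricing data the product uses:

```typescript
// Hypothetical per-million-token rates, for illustration only
// (not llmetrics' synced pricing data).
const RATES: Record<string, { inputPerM: number; outputPerM: number }> = {
  "gpt-4o-mini": { inputPerM: 0.15, outputPerM: 0.6 },
  "gpt-4o": { inputPerM: 2.5, outputPerM: 10.0 },
};

function estimateCost(
  model: string,
  inputTokens: number,
  outputTokens: number,
): number {
  const r = RATES[model];
  if (!r) throw new Error(`no pricing for model: ${model}`);
  return (
    (inputTokens / 1_000_000) * r.inputPerM +
    (outputTokens / 1_000_000) * r.outputPerM
  );
}

// 12,000 input + 3,000 output tokens on gpt-4o-mini comes to about $0.0036
const example = estimateCost("gpt-4o-mini", 12_000, 3_000);
```

Because output tokens typically cost several times more than input tokens, tracking the two counts separately (as the SDK does) matters for accurate attribution.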

Pricing

Simple, transparent pricing.

Start free. Upgrade when you need more.

Free

$0/forever
  • 10,000 events/mo
  • 7-day retention
  • 2 API keys
  • No alert rules
  • No team members
Get started

Pro

Popular
$49/month
  • 500,000 events/mo
  • 30-day retention
  • 10 API keys
  • Alert rules
  • No team members
Start with Pro

Team

$199/month
  • 5,000,000 events/mo
  • 180-day retention
  • 50 API keys
  • Alert rules
  • Team member invites
Start with Team

Start tracking your LLM costs today.

Free to start. No credit card required. SDK setup takes under 5 minutes.

Get started for free →