
AIWatch

AI cost monitoring proxy. Route your LLM API calls through Luxkern to track spend, latency, and token usage across every model and provider in real time.

Quickstart

Point your existing OpenAI or Anthropic SDK at the Luxkern proxy. One line changes, and the rest of your code stays as-is:

```python
# Python: just change base_url
from openai import OpenAI

client = OpenAI(
    base_url="https://proxy.luxkern.com/v1",  # <-- swap this line
    api_key="sk-...",
)
```

```js
// Node.js: same idea
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://proxy.luxkern.com/v1", // <-- swap this line
  apiKey: "sk-...",
});
```

What you get

  • Per-request cost and token breakdown (see the sketch after this list)
  • Latency percentiles (p50 / p95 / p99)
  • Budget alerts and spend caps
  • Model and provider comparison charts
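
Once a client points at the proxy, tracking is transparent: you call the provider exactly as before, and the per-request cost, token, and latency data listed above lands in the dashboard. A minimal sketch using the proxied OpenAI Python client from the quickstart (the model name is just an example):

```python
from openai import OpenAI

# Proxied client from the quickstart; requests behave exactly as before.
client = OpenAI(base_url="https://proxy.luxkern.com/v1", api_key="sk-...")

# Luxkern forwards the call to the provider and records cost, tokens,
# and latency for this single request; the breakdown appears in AIWatch.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; any supported model works
    messages=[{"role": "user", "content": "Summarize our Q3 spend."}],
)
print(response.choices[0].message.content)
```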

Open AIWatch Dashboard · Full API Reference