
How to Centralize Your Logs in 5 Minutes



Your logs are in 4 different places. Your API writes to stdout and gets captured by Docker. Your background worker writes to /var/log/worker.log. Your cron job pipes to a file that gets rotated weekly. And your frontend error tracker dumps to a third-party dashboard you last checked in March. When something breaks at 2 AM, you are opening 4 terminals, running grep with different timestamp formats, and trying to mentally reconstruct the sequence of events across services. Last Tuesday, a payment webhook failure took 53 minutes to diagnose -- not because the fix was hard, but because the relevant log line was in the worker log while you spent 40 minutes grepping the API output.

Centralized logging means every log line from every service lands in one searchable place. The diagnosis that took 53 minutes becomes a 3-minute search query. The setup takes less time than reading this sentence out loud five times.

What You Need



Two things:

  • A Luxkern account (free, no credit card) and your API key from the dashboard
  • An HTTP client -- curl, fetch, or requests

The LogDrain ingest endpoint accepts JSON over HTTPS. Each request carries a logs array with 1 to 1,000 entries. Three fields are required per entry: timestamp, level, and message. Everything else -- service, userId, requestId, custom fields -- is optional metadata that becomes searchable in the dashboard.

Export your API key as an environment variable so the examples below can read it:

    export LOGDRAIN_API_KEY="your-api-key-here"


    Start with curl to Verify Your Setup



    Before writing any application code, confirm your API key works with a single curl command:

    curl -X POST https://logdrain.luxkern.com/ingest \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $LOGDRAIN_API_KEY" \
      -d '{
        "logs": [
          {
            "timestamp": "'$(date -u +"%Y-%m-%dT%H:%M:%S.000Z")'",
            "level": "info",
            "message": "Test log from curl -- setup verified",
            "service": "manual-test",
            "environment": "development"
          }
        ]
      }'


    You will get back {"accepted": 1, "rejected": 0}. Open the LogDrain dashboard at app.luxkern.com/logdrain and you will see your test entry within 2 seconds. If you get a 401, double-check your API key. If you get a 400, your JSON is malformed.

    Now send a batch of 3 entries in one request -- batching reduces HTTP overhead by 65% compared to sending entries individually at high volume:

    curl -X POST https://logdrain.luxkern.com/ingest \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $LOGDRAIN_API_KEY" \
      -d '{
        "logs": [
          {
            "timestamp": "2026-07-01T14:30:00.000Z",
            "level": "info",
            "message": "Application started",
            "service": "api",
            "version": "2.1.0"
          },
          {
            "timestamp": "2026-07-01T14:30:01.000Z",
            "level": "info",
            "message": "Database connected in 45ms",
            "service": "api",
            "host": "db-primary.internal"
          },
          {
            "timestamp": "2026-07-01T14:30:05.000Z",
            "level": "warn",
            "message": "Redis connection slow: 2300ms (threshold: 1000ms)",
            "service": "api"
          }
        ]
      }'


    Three entries ingested in one round-trip. The dashboard shows them immediately, filterable by service, level, and any metadata field you included.

    Integrate with Node.js



    For JavaScript and TypeScript services, use the built-in fetch API. Zero dependencies to install.

    // lib/logdrain.js
    const LOGDRAIN_URL = "https://logdrain.luxkern.com/ingest";
    const API_KEY = process.env.LOGDRAIN_API_KEY;

    function log(level, message, meta = {}) {
      if (!API_KEY) return;

      const entry = {
        timestamp: new Date().toISOString(),
        level,
        message,
        service: process.env.SERVICE_NAME || "unknown",
        environment: process.env.NODE_ENV || "development",
        ...meta,
      };

      // Fire-and-forget: do not await in the request path
      fetch(LOGDRAIN_URL, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${API_KEY}`,
        },
        body: JSON.stringify({ logs: [entry] }),
      }).catch((err) => {
        if (process.env.NODE_ENV !== "production") {
          console.error("LogDrain send failed:", err.message);
        }
      });
    }

    const info = (msg, meta) => log("info", msg, meta);
    const warn = (msg, meta) => log("warn", msg, meta);
    const error = (msg, meta) => log("error", msg, meta);
    const debug = (msg, meta) => log("debug", msg, meta);

    module.exports = { log, info, warn, error, debug };


    Use it in your routes:

    const { info, warn, error } = require("./lib/logdrain");

    app.get("/api/orders/:id", async (req, res) => {
      const { id } = req.params;
      info("Order lookup", { orderId: id, ip: req.ip });

      try {
        const order = await db.orders.findById(id);
        if (!order) {
          warn("Order not found", { orderId: id });
          return res.status(404).json({ error: "Not found" });
        }
        res.json(order);
      } catch (err) {
        error("Order lookup failed", { orderId: id, error: err.message, stack: err.stack });
        res.status(500).json({ error: "Internal error" });
      }
    });


    The fire-and-forget pattern is deliberate. The fetch call runs in the background. It does not add latency to your API response. If it fails (network blip, LogDrain briefly unavailable), your application continues serving requests normally. The log is lost, but your user gets their response. This is the correct tradeoff for 99.9% of log entries.
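    One refinement worth considering: bound the background send in time as well. The sketch below is a hypothetical standalone helper, not part of the client above; it uses AbortSignal.timeout (available in Node 17.3+), and the 5-second value is an assumption, not a LogDrain requirement:

```javascript
// Hypothetical helper: bound the background send so a hung connection
// cannot hold a socket open indefinitely. AbortSignal.timeout (Node 17.3+)
// aborts the fetch after the given number of milliseconds.
function sendWithTimeout(url, payload, apiKey, timeoutMs = 5000) {
  return fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(payload),
    signal: AbortSignal.timeout(timeoutMs),
  }).catch((err) => {
    // Swallow the failure: a lost log entry must never crash the app.
    if (process.env.NODE_ENV !== "production") {
      console.error("LogDrain send failed:", err.message);
    }
  });
}
```

    The catch still swallows errors, so a timeout behaves exactly like any other transient failure: the entry is dropped and the request path is unaffected.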

    For high-volume services processing more than 1,000 requests per minute, batch logs in memory and flush periodically:

    // lib/logdrain-batched.js
    class LogDrain {
      #buffer = [];
      #timer;
      #maxBatch;
      #endpoint;
      #apiKey;

      constructor({ flushIntervalMs = 3000, maxBatchSize = 100 } = {}) {
        this.#endpoint = "https://logdrain.luxkern.com/ingest";
        this.#apiKey = process.env.LOGDRAIN_API_KEY;
        this.#maxBatch = maxBatchSize;
        this.#timer = setInterval(() => this.flush(), flushIntervalMs);

        // Flush whatever is buffered before the process exits
        process.on("SIGTERM", async () => {
          await this.flush();
          process.exit(0);
        });
      }

      log(level, message, meta = {}) {
        this.#buffer.push({
          timestamp: new Date().toISOString(),
          level,
          message,
          service: process.env.SERVICE_NAME || "unknown",
          ...meta,
        });

        if (this.#buffer.length >= this.#maxBatch) this.flush();
      }

      info(msg, meta) { this.log("info", msg, meta); }
      warn(msg, meta) { this.log("warn", msg, meta); }
      error(msg, meta) { this.log("error", msg, meta); }

      async flush() {
        if (this.#buffer.length === 0) return;
        const batch = this.#buffer.splice(0);

        try {
          const res = await fetch(this.#endpoint, {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              Authorization: `Bearer ${this.#apiKey}`,
            },
            body: JSON.stringify({ logs: batch }),
          });
          if (!res.ok) console.error(`LogDrain error: ${res.status}`);
        } catch (err) {
          console.error("LogDrain flush failed:", err.message);
          // Re-queue the failed batch, capped to avoid unbounded memory growth
          if (this.#buffer.length < 10_000) this.#buffer.unshift(...batch);
        }
      }
    }

    module.exports = { logger: new LogDrain() };


    This batching client accumulates entries and flushes every 3 seconds or when the buffer hits 100 entries, whichever comes first. It re-queues failed batches up to a 10,000-entry safety cap to prevent memory leaks during extended outages.

    Integrate with Python



    For Python services, FastAPI workers, Django management commands, and cron scripts:

    # lib/logdrain.py
    import os

    import requests
    from datetime import datetime, timezone

    LOGDRAIN_URL = "https://logdrain.luxkern.com/ingest"
    API_KEY = os.environ.get("LOGDRAIN_API_KEY", "")
    SERVICE = os.environ.get("SERVICE_NAME", "python-worker")


    def log(level: str, message: str, **meta):
        if not API_KEY:
            return

        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": level,
            "message": message,
            "service": SERVICE,
            **meta,
        }

        try:
            resp = requests.post(
                LOGDRAIN_URL,
                json={"logs": [entry]},
                headers={
                    "Authorization": f"Bearer {API_KEY}",
                    "Content-Type": "application/json",
                },
                timeout=5,
            )
            resp.raise_for_status()
        except requests.RequestException as e:
            print(f"LogDrain error: {e}")


    def info(msg, **kw): log("info", msg, **kw)
    def warn(msg, **kw): log("warn", msg, **kw)
    def error(msg, **kw): log("error", msg, **kw)
    def debug(msg, **kw): log("debug", msg, **kw)


    Usage in a FastAPI route:

    from lib.logdrain import info, error

    @app.post("/api/process")
    async def process_job(job_id: str):
        info("Job started", job_id=job_id, queue="default")

        try:
            result = await heavy_computation(job_id)
            info("Job completed", job_id=job_id, duration_ms=result.duration)
            return {"status": "done"}
        except Exception as e:
            error("Job failed", job_id=job_id, error=str(e))
            raise


    The timeout=5 parameter is critical. Without it, a hanging connection to LogDrain blocks your Python process indefinitely. Five seconds is generous -- the ingest endpoint typically responds in under 200ms.

    Add Request Correlation Across Services



    The single highest-value metadata field you can add is a request ID. When every log line from a single request shares the same ID, you can trace the complete lifecycle of an operation across services:

    const { randomUUID } = require("crypto");
    const { info } = require("./lib/logdrain");

    // Middleware: generate or propagate request IDs
    app.use((req, res, next) => {
      req.requestId = req.headers["x-request-id"] || randomUUID();
      res.setHeader("x-request-id", req.requestId);
      next();
    });

    app.post("/api/checkout", async (req, res) => {
      const rid = req.requestId;
      info("Checkout started", { requestId: rid, userId: req.user.id, items: req.body.items.length });

      const payment = await processPayment(req.body);
      info("Payment processed", { requestId: rid, paymentId: payment.id, amount: payment.amount });

      const order = await createOrder(req.body, payment);
      info("Order created", { requestId: rid, orderId: order.id });

      res.json({ orderId: order.id });
    });


    In the LogDrain dashboard, search for requestId:abc-123 and you see every log line from that checkout flow in chronological order, regardless of which service produced it. A checkout that touches your API, payment service, and notification worker shows up as one continuous trace. The 53-minute debugging session from the introduction becomes a 3-minute search.

    Set Up Error Alerts



    Centralized logs are half the value. Automated alerts are the other half. In the LogDrain dashboard, navigate to Alerts and create a rule:

  • Filter: service:api AND level:error
  • Threshold: more than 10 matches in 5 minutes
  • Channel: Slack webhook (or email, or PagerDuty)


    This triggers when your API error rate spikes, so you get notified in Slack before users start filing support tickets. A typical SaaS serving 50,000 requests per day sees about 15-20 errors per day in normal operation; a spike to 10 errors in 5 minutes (a 120/hour pace) indicates a real problem, not noise.

    You can also set up alerts on specific patterns. For example, message:"payment failed" AND level:error with a threshold of 3 in 10 minutes catches payment processing issues early, before they snowball into revenue loss.

    Common Mistakes That Cost You Time



    Logging secrets. Never put API keys, passwords, or tokens in log entries. Add a sanitization step or use an allowlist of fields. One leaked credential in a log entry creates a security incident that is far worse than whatever you were debugging.
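    One way to enforce the allowlist approach is a small filter applied to metadata before it leaves the process. This is a sketch, not a LogDrain feature; the field names in SAFE_FIELDS are illustrative placeholders you would replace with your own:

```javascript
// Only fields on the allowlist survive; everything else is dropped before
// the entry is serialized, so a stray `password` or `apiKey` field in the
// metadata object never reaches the wire.
const SAFE_FIELDS = new Set([
  "requestId", "userId", "orderId", "service", "durationMs", "statusCode",
]);

function sanitizeMeta(meta = {}) {
  const clean = {};
  for (const [key, value] of Object.entries(meta)) {
    if (SAFE_FIELDS.has(key)) {
      clean[key] = value;
    }
  }
  return clean;
}
```

    Calling sanitizeMeta(meta) at the top of a shared log() helper means every code path gets the protection, including the one a teammate writes at 2 AM.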

    Awaiting log calls in the hot path. Your log call should never add latency to API responses. Use fire-and-forget for individual logs, or buffer-and-flush for batched clients. If LogDrain is unreachable, your app keeps serving users.

    Skipping the service name. Without a service field, all your logs blend into a single undifferentiated stream. Always tag with the service name. When you are searching for why the payment webhook failed, you do not want to sift through 10,000 API gateway access logs.

    Logging too much in production. Set your production log level to info or warn. Debug-level logging at scale wastes ingestion quota and drowns signal in noise. Reserve debug for development and staging.
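    A numeric level guard in the shared logger makes the cutoff automatic. A minimal sketch; the LOG_LEVEL environment variable and the rank values here are assumptions for illustration, not LogDrain settings:

```javascript
// Numeric ranks let a single comparison decide whether an entry ships.
const LEVELS = { debug: 10, info: 20, warn: 30, error: 40 };

// Default to "info" so debug entries never leave the process unless
// LOG_LEVEL explicitly opts in (e.g. LOG_LEVEL=debug in staging).
const MIN_LEVEL = LEVELS[process.env.LOG_LEVEL] ?? LEVELS.info;

function shouldSend(level) {
  // Unknown levels are treated as "info" rather than silently dropped.
  return (LEVELS[level] ?? LEVELS.info) >= MIN_LEVEL;
}
```

    Adding `if (!shouldSend(level)) return;` as the first line of a log() helper keeps debug chatter out of production without touching any call sites.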

    For deeper background on structured logging patterns and transport architecture, read What Is Log Draining in Node.js. If you are evaluating whether Luxkern LogDrain or Datadog is the right fit for your team size, our comparison for small teams has a detailed pricing breakdown -- Datadog starts at $0.10 per ingested GB with a 15-day retention, while LogDrain's free tier includes 500MB/day with 30-day retention.

    Your logs are now in one place. The next 2 AM incident will take 3 minutes to diagnose instead of 53. That is not a productivity improvement -- it is sleep you get back.

    Try Luxkern LogDrain free -- no credit card required.