
Is Anthropic Down? How to Know in Real-Time (Not From Twitter)

When Anthropic has an incident, most developers find out from users. Here's how to know 14 minutes before the official status page updates.

March 15, 2026. 2:14 PM UTC. Anthropic's Claude API started returning 503 errors on roughly 30% of requests. At 2:18 PM, error rates climbed to 60%. The first developer noticed something was wrong at 2:21 PM when their chatbot stopped responding. They checked status.anthropic.com — it showed "All Systems Operational."

At 2:38 PM, Anthropic acknowledged the incident on their status page. At 3:01 PM, service was restored. Total incident duration: 47 minutes. Time between first errors and official acknowledgment: 24 minutes. Time developers spent wondering "is it Anthropic or is it my code?": too long.

This pattern repeats every few weeks. The provider has an issue. Your users notice first. You spend 15-30 minutes debugging your own code before realizing it's not you.

There's a faster way to know.

Why status pages are always late



Every major AI provider — Anthropic, OpenAI, Google — runs a status page. They all have the same problem: the status page updates *after* a human at the provider confirms the incident. That confirmation process takes 10-25 minutes on average.

Here's the typical timeline:

| Time | What happens |
|------|-------------|
| T+0min | Provider API starts failing |
| T+2min | Your error rates spike |
| T+5min | Your users start complaining |
| T+8min | You start debugging your code |
| T+15min | You suspect it might be the provider |
| T+20min | Provider's internal monitoring triggers |
| T+25min | Provider engineer acknowledges |
| T+30min | Status page updated |

By the time the status page says "Investigating," you've already wasted 25 minutes debugging code that was never broken.

The fundamental problem: status pages are provider-operated. They update on the provider's timeline, not yours. You need detection that works on *your* timeline.

Community detection: know in 8 minutes



Radar works differently from a status page. Instead of waiting for the provider to acknowledge an issue, Radar detects incidents by aggregating real-time error signals from the Luxkern developer community.

The detection mechanism:

  • Every Luxkern user's AIWatch proxy reports error rates to Radar (anonymized — no prompt data, just error codes and latency)
  • When error rates spike across multiple independent accounts simultaneously, Radar triggers a community detection
  • If 15+ accounts report elevated errors from the same provider within a 5-minute window, Radar publishes a provider incident
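In rough terms, that detection rule can be sketched like this. This is a simplified illustration with an assumed report shape, not Radar's actual implementation:

```javascript
// Simplified sketch of the community-detection rule. The report shape
// ({ provider, accountId, elevated, timestamp }) is assumed for illustration.
const WINDOW_MS = 5 * 60 * 1000; // 5-minute window
const MIN_ACCOUNTS = 15;         // distinct accounts required

function shouldPublishIncident(reports, provider, now) {
  // Keep only recent, elevated-error reports for this provider
  const recent = reports.filter(
    (r) => r.provider === provider && r.elevated && now - r.timestamp <= WINDOW_MS
  );
  // Count distinct accounts, not raw reports, so one noisy client can't trigger it
  const accounts = new Set(recent.map((r) => r.accountId));
  return accounts.size >= MIN_ACCOUNTS;
}
```

Counting distinct accounts rather than raw error volume is what makes the signal trustworthy: a single misconfigured client can produce thousands of errors, but it can't impersonate fifteen independent accounts.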


This typically fires 8-14 minutes before the provider's own status page updates. During the March 15 incident, Radar detected the issue at 2:22 PM — 16 minutes before Anthropic acknowledged it.

The key insight: your errors are never just *your* errors. When a provider is down, hundreds of developers see the same thing at the same time. Radar uses this correlation to detect incidents that no single developer could confirm alone.

Setting up instant notifications

You can check radar.luxkern.com manually, but the real value is getting notified the moment an incident is detected.

Slack notification



    curl -X POST https://app.luxkern.com/api/radar/settings \
      -H "Authorization: Bearer lxk_live_xxx" \
      -H "Content-Type: application/json" \
      -d '{
        "providers": ["anthropic", "openai"],
        "notify_slack": true,
        "notify_email": true,
        "min_severity": "medium"
      }'
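If you would rather configure this from code than from the shell, the same call can be built for Node's built-in fetch. The endpoint and fields come from the curl example above; the API key is a placeholder:

```javascript
// Build the same settings request shown in the curl example above.
function buildRadarSettingsRequest(apiKey, settings) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(settings),
  };
}

const request = buildRadarSettingsRequest("lxk_live_xxx", {
  providers: ["anthropic", "openai"],
  notify_slack: true,
  notify_email: true,
  min_severity: "medium",
});

// To actually send it (requires network access and a real key):
// await fetch("https://app.luxkern.com/api/radar/settings", request);
```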


When Radar detects an Anthropic incident, you get a Slack message within 60 seconds:

    🔴 Radar: Anthropic API — Elevated error rates detected
       Error rate: 34% (normal: 0.3%)
       Affected: 23 developers reporting
       Detected: 2026-03-15 14:22 UTC
       Official status: Not yet acknowledged
       → View on radar.luxkern.com


Webhook integration

For programmatic response, set up a webhook:

    curl -X POST https://app.luxkern.com/api/radar/settings \
      -H "Authorization: Bearer lxk_live_xxx" \
      -H "Content-Type: application/json" \
      -d '{
        "webhook_url": "https://myapp.com/hooks/provider-incidents",
        "providers": ["anthropic", "openai"]
      }'


Your webhook receives structured data you can act on:

    {
      "provider": "anthropic",
      "severity": "high",
      "error_rate_pct": 34.2,
      "affected_developers": 23,
      "detected_at": "2026-03-15T14:22:00Z",
      "official_acknowledged": false,
      "recommended_action": "enable_fallback"
    }


Automating your response

Knowing about an incident in 8 minutes is good. Automatically responding to it is better.

Here's a Node.js webhook handler that automatically activates a fallback when Radar detects a provider outage:

    // Assumes an Express `app`, a Redis client (`redis`), and a Slack client (`slack`) are already configured.
    app.post("/hooks/provider-incidents", async (req, res) => {
      const incident = req.body;

      if (incident.provider === "anthropic" && incident.severity === "high") {
        // Switch to fallback model
        await redis.set("ai:model", "gpt-4o-mini");
        await redis.set("ai:fallback_active", "true");
        await redis.set("ai:fallback_reason", `Anthropic incident: ${incident.detected_at}`);

        // Notify the team
        await slack.send({
          channel: "#ops",
          text: "⚠️ Anthropic incident detected by Radar. Switched to GPT-4o-mini fallback.",
        });

        // Log for audit
        console.log(`[radar] Fallback activated: ${JSON.stringify(incident)}`);
      }

      res.sendStatus(200);
    });

When the incident resolves (Radar sends a resolution webhook), you switch back:

    app.post("/hooks/provider-resolved", async (req, res) => {
      const resolution = req.body;

      if (resolution.provider === "anthropic") {
        await redis.del("ai:fallback_active");
        await redis.set("ai:model", "claude-sonnet-4-6");

        await slack.send({
          channel: "#ops",
          text: "✅ Anthropic incident resolved. Switched back to Claude Sonnet.",
        });
      }

      res.sendStatus(200);
    });


With this setup, your users never notice the outage. The fallback activates within seconds of detection and switches back automatically when the incident resolves.

Cross-referencing with your own monitoring



Radar tells you about *provider* incidents. But sometimes the error isn't the provider — it's your code. The way to tell the difference:

If Radar shows an active incident for the same provider at the same time → it's the provider. Stop debugging your code. Wait for resolution or activate your fallback.

If Radar shows no incidents but your error rates are spiking → it's your code. Start debugging. Check your recent deployments, API keys, and rate limits.

This cross-reference eliminates the most expensive question in incident response: "Is it us or is it them?" With Sentinel, this correlation happens automatically. Sentinel checks Radar before generating a diagnosis, so when it tells you "root cause: Anthropic API degradation," you can trust it.
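That decision rule is small enough to write down as a pure function you could wire into your own alerting. The inputs here are assumptions: `ownErrorRate` comes from your own monitoring, and `radarIncidentActive` from a Radar notification or webhook for the same provider:

```javascript
// Illustrative "us or them?" check. The 10x-over-baseline spike threshold is
// arbitrary; tune it to your own traffic. Inputs are assumed to come from your
// monitoring and from a Radar notification for the same provider.
function whoseFault(ownErrorRate, radarIncidentActive, normalRate = 0.003) {
  const spiking = ownErrorRate > normalRate * 10;
  if (!spiking) return "all-clear";        // no local problem to explain
  return radarIncidentActive
    ? "provider"   // correlated with the community signal: stop debugging, fail over
    : "your-code"; // spiking alone: check recent deploys, keys, rate limits
}
```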

Historical incident patterns

Tracking provider incidents over time reveals patterns that help you prepare. Based on the Anthropic API status history, the most common incident types are:

  • Rate limiting storms (40% of incidents) — average duration 12 minutes, triggered by capacity constraints during peak hours
  • Elevated error rates (30%) — average duration 23 minutes, often caused by internal deployments
  • Complete outages (15%) — average duration 41 minutes, rare but severe
  • Latency degradation (15%) — average duration 18 minutes, response times 3-5x normal
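As a quick sanity check on those figures, the share-weighted average duration of a randomly drawn incident works out to roughly 20.5 minutes. The shares and durations below are the ones listed above:

```javascript
// Expected incident duration, weighted by how often each type occurs.
// Shares and average durations are the figures from the list above.
const incidentTypes = [
  { type: "rate limiting storm",  share: 0.40, avgMinutes: 12 },
  { type: "elevated error rates", share: 0.30, avgMinutes: 23 },
  { type: "complete outage",      share: 0.15, avgMinutes: 41 },
  { type: "latency degradation",  share: 0.15, avgMinutes: 18 },
];

const expectedMinutes = incidentTypes.reduce(
  (sum, t) => sum + t.share * t.avgMinutes,
  0
); // 0.40*12 + 0.30*23 + 0.15*41 + 0.15*18 = 20.55 minutes
```

Twenty minutes is longer than most teams can afford to wait out manually, which is the case for automating the fallback rather than watching dashboards.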


Most incidents happen between 14:00-18:00 UTC (US business hours), which is when usage peaks. If your users are primarily in Europe, the worst provider incidents happen in your afternoon.

The cost of not detecting early

The math is simple. During the March 15 incident:

  • Without Radar: 47 minutes of degraded service. ~$340 in wasted API retries. 23 support tickets. 45 minutes of engineering time debugging the wrong thing.
  • With Radar: 8 minutes to detection. Automatic fallback activated in 12 seconds. Zero support tickets. Zero wasted engineering time.


Radar is included free with every Luxkern plan. The community detection improves as more developers join — every new user makes the detection faster and more accurate for everyone.

Check radar.luxkern.com right now to see if your providers are healthy. Then set up Slack notifications so you never have to check manually again.