Anthropic publishes operational information through two distinct surfaces, and they are not interchangeable. The public Statuspage is what most engineers know — real-time status indicators, active incidents, recent history. The Trust Center is what most engineers do not know — the compliance and security artifacts published for procurement teams, auditors, and security reviewers.
They serve different audiences and answer different questions. This post is a short guide to which one to read for which question.
The two surfaces, briefly
Statuspage (status.claude.com)
- Audience: engineers operating on top of Claude, during incidents.
- Cadence: real time.
- Content: per-component status indicators, active incidents, incident history, postmortems for major events.
- Underlying tech: Atlassian Statuspage.
- Public API: yes — summary.json, incidents.json, etc.
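Because the page runs on Atlassian Statuspage, the feed follows the standard Statuspage v2 API shape. A minimal sketch of reading the overall indicator and any degraded components might look like this (the URL follows Statuspage conventions; treat the exact field names as assumptions if Anthropic's deployment differs):

```python
import json
from urllib.request import urlopen

# Statuspage convention: every Statuspage site exposes /api/v2/summary.json
SUMMARY_URL = "https://status.claude.com/api/v2/summary.json"

def fetch_summary(url: str = SUMMARY_URL) -> dict:
    """Fetch the public, unauthenticated summary feed."""
    with urlopen(url) as resp:
        return json.load(resp)

def overall_indicator(summary: dict) -> str:
    """Site-wide indicator: 'none', 'minor', 'major', or 'critical'."""
    return summary.get("status", {}).get("indicator", "unknown")

def degraded_components(summary: dict) -> list[str]:
    """Names of components not currently reporting 'operational'."""
    return [
        c["name"]
        for c in summary.get("components", [])
        if c.get("status") != "operational"
    ]
```

In a monitoring script you would call `fetch_summary()` on a timer and alert when `overall_indicator` moves off `"none"`.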
Trust Center (trust.anthropic.com)
- Audience: security reviewers, procurement, auditors.
- Cadence: changes when policies and certifications change — typically quarterly or when new artifacts are added.
- Content: SOC 2 reports, ISO certifications, data processing agreements, sub-processor lists, security policy summaries, vulnerability disclosure information.
- Underlying tech: typically a hosted compliance platform (Vanta, Drata, etc., depending on the year).
- Public API: no, but artifacts are typically downloadable as PDFs after NDA.
The two surfaces share almost no content, and neither substitutes for the other. Knowing which to consult for which question saves a lot of reading.
When to read the Statuspage
You are reading the Statuspage if your question is “is the service working right now” or anything close to it. Specifically:
- “Why is my API returning 529?”
- “Is claude.ai down or is it me?”
- “What is the recent history of incidents on the API?”
- “Did Anthropic publish a postmortem for the outage last Tuesday?”
The Statuspage updates within minutes of a real incident. Its incident archive goes back months. Its API is unauthenticated and intended for third-party reuse — including by dashboards like this one.
For real-time decisions during an outage, the Statuspage is the canonical surface. Anything we publish here is downstream of what they publish there.
When to read the Trust Center
You are reading the Trust Center if your question is “should I deploy Claude into a production system that has a compliance review attached” or anything close to it. Specifically:
- “Does Anthropic have SOC 2 Type II?”
- “Where is data processed and stored?”
- “What sub-processors does Anthropic use?”
- “What is Anthropic’s vulnerability disclosure policy?”
- “What does the data-processing addendum look like?”
- “Does Anthropic train on customer data?”
These are not questions a status dashboard can answer. They are not even questions that change minute-to-minute. They are the kind of questions a procurement or security team asks once, gets a documented answer to, and revisits annually.
The Trust Center exists because compliance reviews need a centralized, evidence-rich, auditable answer to those questions. Standalone PDFs sent over email do not survive an annual security review; a Trust Center that maintains current artifacts does.
What the Statuspage does not contain
A few common questions the Statuspage will not answer, even though it is tempting to look there first:
- Long-term reliability statistics. The Statuspage shows recent incidents but does not aggregate annual uptime numbers in a clean way. Computing them requires processing the incident feed yourself (the same feed we use, with the same math we document on Methodology).
- Per-customer impact analysis. The Statuspage classifies impact at the population level. It does not tell you whether your specific account was affected.
- Roadmap or change announcements. The Statuspage announces incidents, not features or deprecations. Those go through other channels (release notes, the Anthropic blog, customer email).
- Compliance-relevant security posture. Trust Center territory.
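The first gap above, aggregate uptime, is the one you can partly close yourself. A naive sketch of the math, assuming the standard Statuspage `incidents.json` fields (`created_at`, `resolved_at` as ISO 8601 timestamps), looks like this; it counts every incident minute as full downtime and double-counts overlapping incidents, so it is a floor, not a precise SLA figure:

```python
from datetime import datetime, timedelta, timezone

def parse_ts(ts: str) -> datetime:
    """Statuspage timestamps are ISO 8601, e.g. '2025-06-01T12:00:00.000Z'."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def naive_uptime(incidents: list, window_days: int = 365, now=None) -> float:
    """Percent of the window not covered by any resolved incident.

    Naive on purpose: treats each incident as total downtime, ignores
    partial impact, and double-counts overlaps.
    """
    now = now or datetime.now(timezone.utc)
    start = now - timedelta(days=window_days)
    down = timedelta()
    for inc in incidents:
        if not inc.get("resolved_at"):
            continue  # skip still-open incidents for simplicity
        begin = max(parse_ts(inc["created_at"]), start)
        end = min(parse_ts(inc["resolved_at"]), now)
        if end > begin:
            down += end - begin
    return 100.0 * (1 - down / (now - start))
```

A 12-hour incident inside a 10-day window, for instance, yields 95.0% by this measure.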
What the Trust Center does not contain
Likewise, things you should not look for on the Trust Center:
- Real-time service health. The Trust Center is about policies and certifications, not transient operational state. You will never find an active-incident notification there.
- Per-incident detail. Even after an incident is over, the Trust Center does not document specific past incidents. Postmortems live on the Statuspage.
- Latency or performance metrics. Trust Center artifacts include security and compliance properties, not performance data.
- Self-service operational integration. The Trust Center artifacts are typically PDFs and are not designed to feed automated tooling.
Reading both during a procurement decision
If you are evaluating Claude for production use as part of a formal procurement, both surfaces matter, and the order is roughly:
- Trust Center first. Confirm the compliance posture matches your requirements. Get the SOC 2 report, the DPA, the sub-processor list. This is the threshold question; if the artifacts do not match your needs, the operational details do not matter.
- Statuspage second. Read the recent incident history to calibrate what “operating Claude in production” feels like. Look at the last 90 days of incidents — count, severity, duration, components affected. This builds your operational expectations for the relationship.
- Third-party data third. Dashboards like this one, plus comparative readings of other providers, give you context for whether Claude’s recent operational profile is industry-typical or outlier-y. Not part of the formal procurement, but useful for setting realistic expectations with your stakeholders.
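Step two above — reading the last 90 days of incident history — can be sketched as a small roll-up over the `incidents.json` feed. The `impact`, `created_at`, and `resolved_at` field names follow Statuspage conventions; everything else here is an illustrative assumption:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

def parse_ts(ts: str) -> datetime:
    """Statuspage timestamps are ISO 8601 with a trailing 'Z'."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def summarize_incidents(incidents: list, window_days: int = 90, now=None) -> dict:
    """Roll up count, severity mix, and mean duration for a recent window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=window_days)
    recent = [i for i in incidents if parse_ts(i["created_at"]) >= cutoff]
    by_impact = Counter(i.get("impact", "none") for i in recent)
    durations = [
        (parse_ts(i["resolved_at"]) - parse_ts(i["created_at"])).total_seconds() / 60
        for i in recent
        if i.get("resolved_at")
    ]
    return {
        "count": len(recent),
        "by_impact": dict(by_impact),
        "mean_duration_min": sum(durations) / len(durations) if durations else 0.0,
    }
```

Running this over the feed gives you the count/severity/duration picture in one pass, which is usually enough to calibrate operational expectations.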
A note on the gap
There is a real gap between the two surfaces — operational information that is not quite real-time enough for the Statuspage and not quite policy-level enough for the Trust Center. Things like:
- Aggregate monthly uptime rolled up to a single number.
- Performance baselines and percentile latency statistics.
- Capacity planning information.
- Detailed regional availability.
Some providers fill this gap with a “performance dashboard” or “service quality report” that publishes monthly. Anthropic does not currently publish one publicly. Some of what would go in such a dashboard is recoverable from the Statuspage feed (the math is on our Methodology page); some is not publicly available.
This dashboard is, in part, an attempt to fill that gap from the outside. Our 30-day uptime numbers, our 17-country latency baseline, our community report aggregation — these are signals that would naturally live in an Anthropic-published “service quality” surface, if one existed. We compute them from public inputs and publish them here because someone has to, and the third-party position has the advantage of being able to publish without internal review cycles.
The short version
For “is Claude up right now,” read the Statuspage.
For “is Claude appropriate for our compliance posture,” read the Trust Center.
For everything in between — recent reliability trends, regional latency, what users are seeing — read this dashboard or build your own. The two official surfaces are necessary but not sufficient. The third-party layer exists because the gap between real-time and policy-level is genuinely interesting, and worth filling.