Every Anthropic incident on the public Statuspage moves through up to five states, in this order: Investigating → Identified → Monitoring → Resolved → Postmortem. Each state’s update is written for a slightly different audience, with a slightly different purpose. Reading them as a sequence — instead of just glancing at the most recent — extracts more information than most people realize is there.
This is a working reader’s guide.
The five states, in order
Investigating — first acknowledgment
Posted within minutes of an on-call engineer being paged. The text is intentionally minimal: “We are investigating reports of [symptom].” The audience is users who are currently broken; the goal is to acknowledge the problem exists and let them stop wondering whether it is them.
What you can extract:
- Confirmation that the symptom is real. If you are seeing the same thing the post describes, you can stop debugging your own code.
- Affected scope, in shorthand. Often the post will name the component or surface — “claude.ai may be slow,” “API users may experience elevated errors.” This is the first hint at what subset is affected.
What you cannot extract:
- Root cause. Genuinely unknown at this stage, even to the engineer posting.
- ETA. Anthropic does not post speculative ETAs. The absence of one is not evasion; it is honesty.
Investigating is most useful as a confirmation signal. If you arrive at the page and the latest update is Investigating from 4 minutes ago, you have just learned that yes, the problem you are seeing is real, and someone is working on it. That is the entire information content.
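If you would rather script that check than load the page, Atlassian Statuspage instances expose a small JSON API. A minimal sketch, assuming Anthropic's page follows the standard /api/v2 conventions (the endpoint path is the Statuspage default, not something Anthropic documents separately):

```python
import json
import urllib.request

# Standard Atlassian Statuspage JSON endpoint for incidents that have not
# yet reached Resolved. The host is Anthropic's public status page; the
# /api/v2 path is the Statuspage convention, assumed to apply here.
URL = "https://status.anthropic.com/api/v2/incidents/unresolved.json"

def check():
    with urllib.request.urlopen(URL, timeout=10) as resp:
        incidents = json.load(resp).get("incidents", [])
    if not incidents:
        print("No unresolved incidents -- the problem may be on your side.")
    for inc in incidents:
        # `status` is one of: investigating, identified, monitoring.
        print(f"{inc['name']}: {inc['status']} (updated {inc['updated_at']})")

if __name__ == "__main__":
    check()
```

An empty list is itself a signal: if nothing is unresolved and you are still seeing errors, the problem is more likely on your side.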
Identified — they know what is happening
Usually posted 5–30 minutes after Investigating, sometimes longer for genuinely novel failures. The post identifies the cause at a coarse level — not a precise root cause, but a category: “we have identified an issue with our model serving capacity,” “we have identified a regression in our recent deployment,” “we have identified a third-party dependency issue.”
What you can extract:
- The category of the problem. Capacity vs. deployment vs. third-party is a meaningful distinction. A capacity issue often resolves on its own as load shifts; a deployment regression usually requires a rollback; a third-party issue means Anthropic is partly waiting on someone else.
- Whether the impact is likely to persist. Capacity issues are often short. Deployment regressions can stretch into hours during the rollback window.
What you still cannot extract:
- Exactly when it will be fixed.
Identified does not mean “fixed” or even “fixing in progress with confidence.” It means “the on-call has a hypothesis.”
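If you route incident updates into your own tooling, a rough keyword pass over the Identified text can bucket it into those coarse categories. The keyword lists below are illustrative guesses, not an official taxonomy:

```python
# Illustrative keyword buckets for the coarse categories an Identified
# update tends to name. The keywords are guesses, not anything published.
CATEGORIES = {
    "capacity": ("capacity", "load", "traffic", "saturation"),
    "deployment": ("deployment", "regression", "rollback", "release"),
    "third-party": ("third-party", "upstream", "provider", "dependency"),
}

def classify(update_text: str) -> str:
    text = update_text.lower()
    for category, keywords in CATEGORIES.items():
        if any(word in text for word in keywords):
            return category
    return "unknown"

print(classify("We have identified a regression in our recent deployment."))
# -> "deployment"
```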
Monitoring — fix deployed, watching
The post says some variation of “we have implemented a fix and are monitoring.” This is the inflection point. From a user’s perspective, the symptom should now be gone or subsiding. The provider is in the verification window.
What you can extract:
- The fix is in place. Your retries are likely starting to succeed. Your error rate should be coming down.
- A short tail is still possible. Some users on cached errors, long-lived connections, or stale routing may continue to see the symptom for several minutes after Monitoring. Do not panic if your dashboard takes a few minutes longer to recover than this state suggests.
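One practical consequence of that tail: a client that already retries with exponential backoff will ride it out without special handling. A minimal sketch, where `call_api` is a stand-in for whatever request you are making:

```python
import random
import time

def call_with_backoff(call_api, max_attempts=6):
    """Retry with exponential backoff and jitter.

    During the Monitoring window, early attempts may still hit the tail
    (stale routing, cached errors); later attempts should succeed as the
    fix propagates.
    """
    for attempt in range(max_attempts):
        try:
            return call_api()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # 1s, 2s, 4s, ... plus jitter so retries do not synchronize.
            time.sleep(2 ** attempt + random.random())
```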
What you cannot extract:
- Confidence level. “Monitoring” means “we believe the fix is working but we want to verify.” There is some non-zero chance of a regression that requires going back to Identified. It is rare, but it does happen.
Resolved — full recovery confirmed
Posted when monitoring has confirmed that the symptom is fully gone for everyone. Often includes a sentence like “All systems are operating normally.”
What you can extract:
- Permission to disengage. If you were watching, you can stop watching. If you were on a fallback path, you can switch back.
- Approximate duration of the incident. The gap between the first Investigating post and Resolved is the public-facing incident duration.
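If you pull incidents from the Statuspage API, that duration is a one-line subtraction. The `created_at` and `resolved_at` field names are the standard Statuspage schema, assumed here to apply:

```python
from datetime import datetime

# `incident` is a dict from the Statuspage API; `created_at` marks the
# first Investigating post and `resolved_at` the move to Resolved.
def public_duration(incident: dict):
    start = datetime.fromisoformat(incident["created_at"])
    end = datetime.fromisoformat(incident["resolved_at"])
    return end - start
```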
What you may want to wait for:
- Postmortem. If you are writing your own incident report or trying to understand the root cause for your own architecture decisions, the Resolved post is too early. Wait for the Postmortem state, if there is one.
Postmortem — the writeup
Not every incident gets a postmortem state on the public Statuspage. Anthropic posts them for the more impactful events, typically several days to a couple weeks after Resolved. The text is significantly longer — sometimes a few paragraphs — and explains what happened, why, what was done to fix it, and what is being done to prevent recurrence.
What you can extract:
- The actual root cause. This is the only state where you should expect a real explanation. The earlier states are operationally useful but not technically specific.
- The detection-to-mitigation timeline. Postmortems often include rough timestamps for “first internal alert,” “engineer paged,” “fix identified,” “fix rolled out.” These are the numbers you want for your own architecture.
- The remediation plan. “We are improving our [X]” — what they are doing to make this less likely. If you depend on the same surface, this is forward-looking information about the system’s evolution.
What you should not over-read:
- Blame, missing pieces, future commitments. Postmortems are professional documents written by people who do not want to over-promise. Read them as a careful, slightly diplomatic account of what happened, not as a complete forensic record.
Reading them in sequence, not just the latest
The most common mistake is to glance at the current state of an incident and stop there. The sequence is more informative than any single state.
A useful reading pattern, in order:
- What is the current state? Tells you whether to act now or wait.
- How long has it been in the current state? A long time in Investigating means the cause is non-obvious. A long time in Monitoring means the fix is being watched carefully — possibly because the engineer is not fully confident.
- How long was the previous state? Investigation that took 2 hours implies a hard problem. Identification that came 5 minutes after investigation implies a familiar shape.
- What does the gap between Investigating and now look like in your own metrics? The official timeline and your error metric should roughly agree. If they do not, your error metric may be measuring something the incident does not cover (a different region, a different component).
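To answer the duration questions above programmatically, you can walk an incident's update history from the same Statuspage API. This assumes the standard schema, where each entry in `incident_updates` carries a `status` and an offset-style ISO-8601 `created_at` (which `datetime.fromisoformat` parses):

```python
from datetime import datetime

def state_durations(incident: dict):
    """Return [(status, seconds_spent)] for a Statuspage incident dict.

    Sorting by timestamp walks the incident chronologically regardless
    of the order the API returns updates in.
    """
    updates = sorted(
        incident["incident_updates"],
        key=lambda u: datetime.fromisoformat(u["created_at"]),
    )
    durations = []
    for current, nxt in zip(updates, updates[1:]):
        start = datetime.fromisoformat(current["created_at"])
        end = datetime.fromisoformat(nxt["created_at"])
        durations.append((current["status"], (end - start).total_seconds()))
    return durations
```

A long dwell time in any one state is exactly the signal described above; this just makes it computable across many incidents.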
What postmortems do not include
Two things to be aware of, because their absence is deliberate and sometimes mistaken for an oversight.
No customer impact numbers. Postmortems will say “users experienced elevated error rates” but will not say “X% of requests failed for Y minutes.” Internal numbers are not published. If you need impact numbers for your own analysis, you will have to derive them from your own metrics.
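Deriving an impact number from your own side is mechanical if you log per-request outcomes. A sketch, assuming you can produce (timestamp, succeeded) pairs from your own logs and take the window boundaries from the incident timeline:

```python
from datetime import datetime

def impact_during(requests, start: datetime, end: datetime) -> float:
    """Percent of requests that failed between `start` and `end`.

    `requests` is an iterable of (datetime, succeeded: bool) pairs from
    your own logs; `start`/`end` come from the public incident timeline.
    """
    in_window = [ok for ts, ok in requests if start <= ts <= end]
    if not in_window:
        return 0.0
    failed = sum(1 for ok in in_window if not ok)
    return 100.0 * failed / len(in_window)
```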
No personnel detail. Postmortems do not name engineers, do not describe paging chains, and do not share details of internal coordination. These are intentional editorial choices, common across the industry.
If you are building your own incident-response practice and want to model on Anthropic’s, the public postmortems are a good template for tone but a thin source of operational detail. Cross-reference with public engineering blogs (when they exist) for the engineering-side context.
Why postmortems are scarce on minor incidents
Most minor-impact incidents end at Resolved and never get a Postmortem state. This is a reasonable editorial choice: a 20-minute degraded-performance event with no broad impact does not warrant a multi-paragraph public writeup. Postmortems are reserved for major and critical events.
For users tracking the system’s evolution: this means the public record of postmortems is biased toward the worst events. The day-to-day operational picture — how Anthropic handles routine minor events — is mostly invisible from the outside. Most providers operate this way.
Where to read postmortems alongside the dashboard
Anthropic’s Statuspage incident history is the canonical archive. You can also follow the Statuspage RSS feed — when an incident moves to Postmortem, the RSS item description picks up the new update text automatically. Subscribing to the feed is a reasonable way to passively collect postmortems as they are published, without having to remember to check the page.
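A sketch of that passive collection, using the feedparser library. The /history.rss path is the standard Statuspage feed location, and filtering on the word “postmortem” in the item body is a heuristic, so treat both as assumptions:

```python
import feedparser  # pip install feedparser

# Standard Statuspage feed path; an assumption about Anthropic's page.
FEED_URL = "https://status.anthropic.com/history.rss"

def postmortem_items():
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        # Postmortem text shows up in the item body once published;
        # matching on the word itself is a rough filter, not a guarantee.
        if "postmortem" in entry.get("summary", "").lower():
            yield entry.title, entry.link

for title, link in postmortem_items():
    print(title, link)
```

Run it on a schedule and diff against what you have already seen; the feed itself carries no read/unread state.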
Read the sequence. Each state is doing different work. The sum is more useful than any single update.