Most “is Claude slow today” answers on the internet are based on one person’s connection, in one city, at one moment. That is a valid data point but not a useful baseline. To know whether the slowdown you are seeing is real, you need a continuous record of what “normal” looks like from a wide enough cross-section of the world that one bad ISP cannot dominate the picture.
This dashboard keeps that record. Every five minutes, a Vercel cron job hits check-host.net and asks for an HTTPS probe of https://claude.ai from 17 countries, with up to three redundant probe nodes per country. The best valid result per country is recorded. Over thirty days that is roughly 8,640 measurements per country and about 147,000 globally.
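Mechanically, a sweep is two unauthenticated HTTP calls per country: one to start the check, one to collect results. Here is a minimal TypeScript sketch of the per-country probe; the node hostnames are illustrative (the real list is on the Methodology page), and the result parsing assumes check-host.net’s documented check-http response shape rather than anything we control.

```typescript
// One country's probe against check-host.net: start a check-http run on up to
// three nodes, wait, then take the best valid TTFB. Node names illustrative.
const TARGET = "https://claude.ai";

async function probeCountry(nodes: string[]): Promise<number> {
  const nodeParams = nodes.map((n) => `node=${n}`).join("&");
  const started = await fetch(
    `https://check-host.net/check-http?host=${encodeURIComponent(TARGET)}&${nodeParams}`,
    { headers: { Accept: "application/json" } }
  ).then((r) => r.json());

  // Results fill in asynchronously; a fixed wait keeps the sketch short.
  await new Promise((resolve) => setTimeout(resolve, 10_000));
  const results = await fetch(
    `https://check-host.net/check-result/${started.request_id}`,
    { headers: { Accept: "application/json" } }
  ).then((r) => r.json());

  // Per node: [success, seconds, statusMessage, statusCode, ip]; null = pending.
  const ttfbsMs = nodes
    .map((n) => results[n]?.[0])
    .filter((r) => r && r[0] === 1)
    .map((r) => Math.round(r[1] * 1000));

  return ttfbsMs.length ? Math.min(...ttfbsMs) : -1; // -1 is the timeout sentinel
}

// Example: await probeCountry(["us1.node.check-host.net", "us2.node.check-host.net"])
```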
This post walks through what that data looks like, the patterns that only emerge with a wide sample, and the regional weirdnesses that we have learned to expect.
The 17 countries, and why those
The current sample is: US, BR, GB, DE, FR, NL, FI, IT, TR, AE, IN, SG, JP, HK, CN, VN, ID. Three biases shape the list.
Coverage of where Claude is widely used. US, GB, DE, FR, IN, SG, JP — these are the regions where developer adoption is highest based on public traffic estimates. They are the floor of “the global experience.”
Cities not already behind a single CDN edge. Picking ten US cities tells you almost nothing; they all hit the same CloudFront edge and the differences are noise. We sample two probes across the Americas (US, BR) instead of ten US cities.
Places where access policy is non-trivial. UAE, Turkey, China, Vietnam, Indonesia. In these countries, “can I reach claude.ai right now” has an answer that varies materially over time. Sampling them is more useful than yet another European probe.
We are not trying to build a global CDN performance benchmark. We are trying to answer “is Claude reachable from places people use Claude from.”
What the baseline looks like
Across the last thirty days of measurements, the typical TTFB (time-to-first-byte) per country breaks roughly into three tiers:
- Tier 1, sub-200ms: US, GB, DE, NL, FR, FI. These regions terminate against a CloudFront edge in-region with low jitter. The dashboard shows them green almost continuously.
- Tier 2, 200ms–500ms: BR, IT, AE, IN, SG, JP, HK, ID. Regional edges exist but the path is longer or noisier. The dashboard shows occasional amber.
- Tier 3, variable, often 500ms–2s, sometimes timeout: TR, CN, VN. These countries have the most operationally interesting numbers, and the most volatility.
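If you want to wire these bands into your own tooling, the mapping is a few lines. A sketch of the bands above as a color mapping; the real dashboard logic may differ in detail:

```typescript
// Map a measured TTFB (ms) to the tier colors used above.
// -1 is the timeout sentinel described later in the post.
type Tier = "green" | "amber" | "red";

function tierFor(ttfbMs: number): Tier {
  if (ttfbMs === -1) return "red"; // timeout
  if (ttfbMs < 200) return "green"; // Tier 1
  if (ttfbMs <= 500) return "amber"; // Tier 2
  return "red"; // Tier 3 territory
}
```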
A few patterns inside each tier are worth flagging.
Singapore is more reliable than Hong Kong
The gap is small but consistent. Singapore typically clocks 230–280ms TTFB; Hong Kong typically 320–380ms with a much wider distribution. Both are AWS-served edges; the difference is upstream network variability rather than the destination edge.
India sits at the high end of Tier 2
Mumbai-area probes are consistently 350–450ms TTFB. This is the gap between Indian developer demand and the geographic distribution of edges that serve claude.ai — a meaningful chunk of the path is still trans-oceanic. When Anthropic eventually adds a closer edge, India’s average should drop to Tier 1 levels.
Brazil is bimodal
São Paulo splits cleanly between two regimes: about 280ms when the route is direct to a US East edge, about 480ms when traffic backhauls through Florida. Over thirty days the distribution is visibly two-humped. The dashboard’s “best of three nodes per country” rule means we usually catch the better path.
China is unique
The CN probe results frequently show timeouts (recorded as -1ms), sometimes for hours, sometimes for days. When a probe does succeed, latency is in the 800ms–2000ms range. None of this is novel; it is the normal state of cross-border connectivity from mainland China to a US-anchored service. We include it because the binary “is claude.ai reachable from China right now” answer is itself useful: to mainland users deciding whether to stand up a VPN, and to Anthropic-watchers tracking whether and when the connectivity environment changes.
Turkey and Vietnam are erratic
Turkey is in Tier 3 not because of consistent badness but because the variance is enormous: a clean day can clock 250ms, the next day can clock 1500ms or sustained timeouts. Vietnam shows similar swings. Both regions have route-quality issues that no amount of edge proximity can fix.
Anomalies that only show up with multi-node sampling
We sample up to three nodes per country, and we keep the best. The reason is not greed for low numbers — it is signal isolation.
If only one node per country were sampled and that node was having a bad day, every five-minute measurement from that country for those hours would show a degradation that does not exist for users routed differently. Single-node dashboards report dozens of false outages per month for this reason. The fix is multi-node sampling and best-result selection.
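The selection rule itself is tiny. A sketch, with hypothetical type names:

```typescript
// Best-valid-result selection for one country's sweep.
// `samples` holds per-node TTFBs in ms; -1 marks a node-level timeout.
interface CountrySweep {
  country: string;
  samples: number[]; // up to three nodes
}

function recordSweep(sweep: CountrySweep): number {
  const valid = sweep.samples.filter((ms) => ms >= 0);
  // Only when *every* node times out does the country count as down;
  // a single timing-out node is treated as probe-side noise and dropped.
  if (valid.length === 0) return -1;
  return Math.min(...valid);
}
```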
Concretely, in the last thirty days of data:
- Roughly 40 single-node “outages” were filtered out — moments when one of three probe nodes for a country reported a timeout while the other one or two reported normal latency. None of these were reflected on the dashboard, because the country’s best valid result was used.
- Three multi-node outages were retained — moments when every probe node for a country reported a timeout simultaneously. Two of those correlated with publicly logged Statuspage incidents. The third did not, and was likely a check-host.net regional issue rather than an Anthropic issue.
Multi-node sampling will not save you from a real outage of the destination. It will save you from probe-side noise. For a public dashboard the false positive rate matters more than the absolute latency number, because every false positive is a moment of “is it me or is it Claude” anxiety for a user.
How to read a sudden latency change
If the latency widget on the homepage suddenly turns red for one or two countries while the rest stay green, three explanations dominate:
- A regional carrier route change. ISPs in the affected country shifted traffic onto a longer or congested path. Claude itself is fine; the problem is upstream of Anthropic. Usually clears within a few hours.
- A check-host.net probe-side regional issue. The measurement infrastructure itself had a hiccup. Multi-node sampling catches most of these, but not all. Resolves on its own.
- An actual destination-side issue affecting that region. Rare. Usually reflected in the Statuspage component grid within a few minutes.
If the latency widget turns red across many countries simultaneously, that is a strong signal of a destination-side event, and you will almost always see the Statuspage component grid go yellow or red on the same dashboard within minutes.
If only one country goes red, do not panic. If half of them do, look at the component grid.
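Compressed into a toy decision rule (the thresholds and names are ours, for illustration only):

```typescript
// Rough triage of one sweep: count red countries, suggest where to look.
type Tier = "green" | "amber" | "red";

function triage(tiers: Map<string, Tier>): string {
  const red = Array.from(tiers.values()).filter((t) => t === "red").length;
  if (red === 0) return "all clear";
  if (red <= 2) return "likely a regional route or probe-side blip; usually clears on its own";
  return "possible destination-side event; check the Statuspage component grid";
}
```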
Why we do not measure the API directly
A reasonable question: why probe claude.ai, the marketing host, rather than api.anthropic.com?
Two reasons. Quota neutrality: api.anthropic.com requires authentication, and any meaningful probe consumes API quota. We are not going to spend our quota, or anyone else’s, on measurement. Edge alignment: the marketing host runs on the same global edge stack as the chat web app, so a probe of claude.ai is a faithful proxy for “how fast does the chat page load,” which is the experience most users have in mind when they ask whether Claude is slow.
Probing the API host directly would require holding test credentials, paying for measurement-only requests, and routing those requests through Anthropic’s backend. We treat the marketing host as the lesser-evil proxy, and we say so on the Methodology page.
What this dataset cannot tell you
Latency to a marketing edge is a strong signal for “is the front door open.” It is a weak signal for “are model invocations completing in reasonable time.”
A region with green latency can still be experiencing slow inference. A region with red latency probably has bigger problems than inference speed. Always read the latency widget alongside the Statuspage component grid (which reflects Anthropic’s own assessment of API health) and the community report counts (which reflect what users are seeing).
The latency table is one input. None of the three inputs alone is a full answer. Together, they almost always tell you what is going on.
Where the data lives
Every measurement is written to a Cloudflare KV namespace with a 30-minute TTL, so the dashboard always shows the most recent successful sweep. We do not currently expose the historical 30-day series as a public download, but we are working on it. If you want a slice of the data for research or for your own tooling, contact us with the use case; we can usually accommodate.
Until that exists, the check-host.net public probe API is also yours to use. The full list of probe nodes we sample is on the Methodology page, and the API is unauthenticated. Building your own version of this dashboard is straightforward; the hard part was deciding which 17 countries to sample and committing to the multi-node selection rule.
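If you do build your own, the storage side is one REST call per sweep. A sketch assuming Cloudflare’s KV write endpoint; the key name, env vars, and payload shape are placeholders, not our actual schema:

```typescript
// Persist one sweep to Cloudflare KV from the cron, via the REST API.
async function storeSweep(results: Record<string, number>): Promise<void> {
  const url =
    `https://api.cloudflare.com/client/v4/accounts/${process.env.CF_ACCOUNT_ID}` +
    `/storage/kv/namespaces/${process.env.CF_KV_NAMESPACE_ID}` +
    `/values/latest-sweep?expiration_ttl=1800`; // 30 minutes, in seconds

  const res = await fetch(url, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${process.env.CF_API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ takenAt: Date.now(), results }),
  });
  if (!res.ok) throw new Error(`KV write failed: ${res.status}`);
}
```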
The numbers are not exotic. The discipline of recording them every five minutes, on a public page, with a documented method, is what makes them useful.