Methodology
How “Is AI Dumber Today?” works — and what it doesn’t tell you.
What We Measure
This site aggregates subjective, self-reported user perceptions of AI model performance. Users submit reports when they feel a model is slower, lower quality, or experiencing outages. Reports are bucketed into rolling time windows and displayed as trend charts.
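The bucketing step described above could be sketched as follows. This is an illustrative sketch only, not the site's actual code; the 15-minute bucket size is an assumption, since the real window size is not specified.

```python
from collections import Counter
from datetime import datetime

BUCKET_MINUTES = 15  # hypothetical window size; the real value is unspecified

def bucket_key(ts: datetime) -> datetime:
    """Floor a timestamp to the start of its time-of-day bucket."""
    return ts.replace(minute=ts.minute - ts.minute % BUCKET_MINUTES,
                      second=0, microsecond=0)

def bucket_reports(timestamps: list[datetime]) -> dict[datetime, int]:
    """Count reports per rolling time-window bucket, oldest first."""
    counts = Counter(bucket_key(t) for t in timestamps)
    return dict(sorted(counts.items()))
```

The resulting per-bucket counts are what the trend charts would plot over time.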
How Reports Are Collected
- Anyone can submit a report for a specific provider, model variant, and interface (web, API, mobile, CLI).
- Reports include an optional free-text description of the issue.
- No login or personal data is required; we store minimal metadata (timestamp, provider, variant, interface, description).
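A report record with the metadata listed above might look like the sketch below. The field names and example values are hypothetical; only the set of stored fields (timestamp, provider, variant, interface, description) comes from this page.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Report:
    """One user-submitted report; no login or personal data is stored."""
    timestamp: datetime
    provider: str             # e.g. "openai" (hypothetical identifier)
    variant: str              # model variant, e.g. "gpt-4o" (hypothetical)
    interface: str            # one of "web", "api", "mobile", "cli"
    description: Optional[str] = None  # optional free-text issue description
```

Keeping the description optional and omitting any user identifier matches the minimal-metadata approach described above.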
Status Calculation
Provider status (“Normal”, “Minor Issues”, “Degraded”) is derived from the volume and recency of reports compared to a historical baseline. The baseline represents typical report volume for each time-of-day bucket.
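One simple way to derive a status from report volume versus a baseline is to compare the current bucket's count against the historical mean for that time-of-day bucket. The thresholds below are purely illustrative assumptions; the page does not specify how "Minor Issues" or "Degraded" are actually cut off.

```python
def derive_status(current_count: int, baseline_mean: float) -> str:
    """Map report volume relative to a time-of-day baseline to a status label.

    The 1.5x and 3x thresholds are hypothetical, chosen for illustration only.
    """
    ratio = current_count / max(baseline_mean, 1.0)  # guard against a zero baseline
    if ratio >= 3.0:
        return "Degraded"
    if ratio >= 1.5:
        return "Minor Issues"
    return "Normal"
```

A recency weighting (e.g. discounting older reports within the window) could be layered on top, but the core idea is this volume-to-baseline comparison.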
Limitations
- Data is subjective. A spike in reports may reflect user frustration rather than a measurable service degradation.
- There is no verification layer — reports are not cross-checked against official provider status pages or automated benchmarks.
- Low-traffic periods may produce noisy signals, while high-traffic events (viral posts, media coverage) can create artificial spikes.
- This site is a community signal, not a diagnostic tool.
Independence & Affiliation
Is AI Dumber Today? is not affiliated with, endorsed by, or sponsored by any AI provider. We do not receive data, funding, or direction from OpenAI, Google, Anthropic, xAI, or any other company whose models appear on this site.
Tracked Providers
- ChatGPT by OpenAI
- Claude by Anthropic
- Google Gemini by Google
- Grok by xAI