So, here's a classic tech story with a political twist. Anthropic's Claude AI had a rough Monday. The platform experienced a series of technical hiccups that left users frustrated, which is never a good look. But the timing here is what makes it interesting—this wasn't just a random server glitch. It happened during a week when everyone suddenly wanted to check out Claude, thanks to some dramatic news from Washington.
Let's break down what actually went wrong with the tech. According to Claude's own status page, the first wave of trouble started at 11:49 UTC. The company was investigating elevated errors across claude.ai, its developer console, and Claude Code. Some API methods just stopped working. They got a fix in place and marked the incident resolved by 15:47 UTC.
But that wasn't the end of it. A second, shorter incident, specifically affecting the Claude Opus 4.6 model, popped up at 14:35 UTC and was fixed by 14:42 UTC, though it wasn't officially marked resolved until 15:50 UTC. Then a third wave of elevated errors hit both Claude Opus 4.6 and the main claude.ai site, starting somewhere between 16:50 and 17:08 UTC. It was a messy day. Downdetector recorded 1,952 problem reports from U.S. users in that 24-hour window, which gives you a sense of how many people were running into issues.
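For developers caught in an "elevated errors" window like this, the standard client-side defense is to retry transient failures with exponential backoff and jitter rather than hammering the API. Here's a minimal sketch; `call_api` and `TransientAPIError` are hypothetical stand-ins, not part of Anthropic's actual SDK, and the backoff logic is generic.

```python
import random
import time

class TransientAPIError(Exception):
    """Stand-in for a transient failure (e.g. a 5xx or 'overloaded' response)."""

def with_backoff(fn, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Call fn(); on transient failure, wait base_delay * 2**attempt seconds
    (plus a little random jitter) and retry, up to max_attempts tries."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientAPIError:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the error to the caller
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            sleep(delay)

if __name__ == "__main__":
    calls = {"n": 0}

    def call_api():
        # Hypothetical request that fails twice, then succeeds.
        calls["n"] += 1
        if calls["n"] < 3:
            raise TransientAPIError("overloaded")
        return "ok"

    # sleep=lambda _: None skips the real waiting for this demo.
    print(with_backoff(call_api, sleep=lambda _: None))  # prints "ok"
```

The jitter matters: when thousands of clients all retry on the same schedule after an outage, synchronized retries can prolong the very overload they're reacting to.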
Now, why were so many people trying to use Claude on a Monday? Well, over the weekend, Claude climbed to No. 1 on the free-apps chart of Apple's U.S. App Store. That's a big deal. Other AI apps, like OpenAI's ChatGPT and Alphabet Inc.'s (GOOGL) Google Gemini, ranked second and fourth, respectively. So what sparked this surge in downloads?
It appears to be a case of the 'Streisand Effect'—trying to hide something only makes people more curious. The surge followed the Trump administration's order for federal agencies to stop using Anthropic's technology. Defense Secretary Pete Hegseth went further, labeling the company a supply-chain risk. When the government tells you not to use something, a lot of people immediately want to see what the fuss is about. So, user interest spiked at the exact moment the platform was perhaps least prepared for it.
Here's where the story gets even more tangled. Despite the very public ban and the label as a security risk, Claude's reach inside the government apparently hasn't been fully severed. According to reports, U.S. Central Command continued to rely on the AI system during a major air campaign against Iran mere hours after the presidential ban was announced. They were reportedly using it to assist with intelligence analysis, target identification, and battlefield simulations.
This isn't a new fight. The White House and Anthropic have been locked in a months-long dispute over the potential military use of the company's AI systems. On Friday, Trump directed federal agencies to halt collaboration with Anthropic, and the Defense Department formally labeled the firm a security concern. Yet, in a real-world military scenario, the tool was still in use. It paints a picture of a ban that might be more about politics than immediate operational reality, or perhaps highlights the difficulty of instantly unwinding complex technological dependencies.
So, you have a perfect storm: a politically charged ban drives a surge of public curiosity, which in turn stresses the platform's infrastructure, leading to visible outages. Meanwhile, the very institution that issued the ban might still be using the tool it just condemned. It's a messy situation that shows how hard it is to separate cutting-edge AI from the geopolitical currents it now swims in. For Anthropic, the challenge is no longer just building a reliable AI—it's managing its sudden, controversial fame.