Here's a classic government technology story: the right hand bans something while the left hand is actively using it. According to reports, U.S. Central Command used Anthropic's Claude AI during a major Trump-era air operation against Iran. The twist? This happened just hours after President Trump himself ordered federal agencies to stop using the company's technology.
Think about the timeline for a second. A ban is announced. A major military operation is launched. And the supposedly banned technology is reportedly right there in the command center, helping with intelligence assessments, identifying targets, and simulating battle scenarios. It's the kind of bureaucratic whiplash that makes you wonder who's really in charge of the tech stack.
Claude's Deep Military Roots
The Wall Street Journal, citing sources, reported that Claude wasn't just a last-minute addition; it was reportedly deeply embedded across military operations. This wasn't its first rodeo either: the AI tool had been used previously in high-profile missions, including the operation to capture Venezuelan President Nicolas Maduro.
This creates an awkward situation. The Trump administration and Anthropic had been at odds for months over the Pentagon's use of the company's AI models. The Defense Department had labeled Anthropic a security threat and a supply chain risk—hence the order to cut ties. But in the world of combat operations, when you have a tool that works, you don't just throw it away because of a new memo, especially when lives and missions are on the line.