Here's a legal fight that reads like a corporate thriller, but with more amicus briefs and fewer car chases. Nearly 150 retired judges have just stepped into a high-stakes battle between the Pentagon and Anthropic, the artificial intelligence company. Their argument? The Defense Department might have gotten a little too creative with its rulebook.
The core of the dispute is a label: "supply chain risk." The Pentagon slapped this designation on Anthropic, a move the company is challenging in court. In a legal brief filed Tuesday, the 149 retired federal and state judges contend the agency "misinterpreted the statute and ignored the necessary procedures" when it did so.
But here's the twist that makes this particularly interesting. The judges are quick to point out that Anthropic isn't actually trying to win defense contracts. "No one is trying to force the Department to contract with Anthropic," they wrote. "Anthropic is asking only that it not be punished on its way out the door."
So why the blacklist? The roots go back to the Trump administration. Former President Donald Trump ordered federal agencies to stop using Anthropic's technology, reportedly calling it a "radical left AI company." Defense Secretary Pete Hegseth later urged military contractors and partners to avoid any commercial ties with Anthropic, citing national security concerns.
Anthropic's response has been to push back hard in court. The company says the designation isn't just wrong—it's legally unsound and sets a "dangerous precedent." Its legal team argues the move is causing "real and irreparable harm," including driving its customers toward competing AI providers. It's the business equivalent of being put in timeout for a game you weren't even trying to play.
The irony is that, ban or no ban, the military has apparently found the technology useful. Earlier this month, U.S. Central Command reportedly used Anthropic's Claude AI in a Trump-era air operation against Iran to support intelligence and target planning. Claude had also aided prior missions, like the operation to capture Venezuelan President Nicolás Maduro. It's a bit like being told you can't sit at the lunch table, but then someone passes you a sandwich under it.
On the business side, Anthropic's growth has been significant, with its rapid revenue growth providing a boost to Amazon Web Services (AMZN). The company's annualized revenue hit $19 billion, driven by strong adoption of its Claude models by both enterprises and consumers.
Internally, Anthropic found last year that Claude increased employee productivity and enabled new projects. However, it also came with trade-offs: reduced collaboration and concerns about skill erosion and the long-term relevance of certain jobs. Employees reported happily offloading repetitive tasks to AI, but some couldn't shake the nagging fear that the technology might one day make their own roles less essential. It's the classic AI dilemma—productivity today, uncertainty tomorrow.
For now, the retired judges have thrown their weight behind the argument that the Pentagon's process was flawed. The question for the court isn't whether Anthropic should get defense work, but whether the government followed its own rules when deciding to show the company the door.