So here's a fun government-contractor dispute. The U.S. Department of War says it's not currently negotiating with Anthropic AI. Anthropic, meanwhile, says it's planning to sue the government. This is not typically how productive partnerships begin.
Under Secretary of War Emil Michael took to the social media platform X to make the department's position clear, stating there is "no active negotiation" with the AI startup and telling people to "end all speculation." This public statement is the latest move in a standoff that began when the Pentagon formally notified Anthropic that its AI products pose a risk to the U.S. supply chain.
Think about that for a second. The Pentagon is essentially saying, "Your technology is a national security problem." For a company, that's about the worst official review you can get from the Defense Department, short of an actual indictment.
The irony is thick enough to cut with a knife. Despite a directive from the Trump administration ordering federal agencies to stop using Anthropic's technology, reports surfaced that U.S. Central Command used it during a major air operation against Iran just hours after the order came down. So one part of the government was banning the technology while another part was apparently using it in a live mission. You can't make this stuff up.
On the other side of the table is Dario Amodei, CEO of the privately held Anthropic. He's been trying to get a Pentagon contract back on track after talks fell apart. Now, facing the "supply chain risk" label, he's announced the company will challenge the U.S. government in court.
Anthropic, the creator of the Claude family of large language models, now holds a dubious distinction: it's the only American company ever to be publicly named a supply chain risk by the government. That's a unique kind of market positioning—just not the kind you put in a press release.