So here's a fun legal puzzle: What happens when the government tries to ban an AI company, and a federal judge says it looks more like punishment than policy?
That's essentially what happened in San Francisco this week, where District Judge Rita Lin sided with Anthropic (ANTH) in its request for a preliminary injunction against the Trump administration. The judge didn't mince words, calling the government's actions "illegal First Amendment retaliation" in a 42-page ruling that reads like a civics lesson with teeth.
The decision temporarily halts the government's efforts to blacklist the AI company and prevents enforcement of a directive from President Donald Trump that bans federal agencies from using Anthropic's Claude models. Think of it as a legal pause button while the court figures out whether this whole thing passes constitutional muster.
"These broad measures do not appear to be directed at the government's stated national security interests," Judge Lin wrote. "If the concern is the integrity of the operational chain of command, the Department of War [Defense] could just stop using Claude. Instead, these measures appear designed to punish Anthropic."
That's the core of the argument: Is this about security, or is it about silencing a company that disagrees with the administration? The judge seems to think it's the latter.
She went further, noting that the Defense Department's designation of Anthropic as a "supply chain risk" is "both contrary to the law and arbitrary and capricious." That's legal-speak for "this doesn't make sense and you're not following the rules."
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," the ruling states.
Let's rewind for a second. This whole conflict started when the Department of Defense declared Anthropic a supply chain risk. That's a serious label—it's what you slap on foreign companies that might be spying on you or sabotaging your systems. Once you're tagged with it, defense contractors have to avoid your technology like it's carrying the digital plague.
Anthropic, being an American company that presumably doesn't want to be treated like a foreign adversary, filed a lawsuit. Its argument: This designation could cause "significant harm" to its business, and it seems more about politics than actual security concerns.
The court held a virtual hearing earlier this week where both sides made their cases. During the hearing, Lin reportedly grilled the U.S. government about its motives for labeling Anthropic a national security threat. The government's answers apparently didn't satisfy her.
Now, it's important to understand what this ruling actually does. It's a preliminary injunction—a temporary measure that says "hold up, let's think about this before we do something irreversible." It doesn't mean Anthropic has won the case. It just means the company has convinced the judge it's likely to win when everything is finally decided. The final resolution could still take several months.
Anthropic, unsurprisingly, is pleased. "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits," a company spokesperson said in a statement. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."
The Department of Defense didn't provide a direct statement to MarketDash, but Under Secretary of War Emil Michaels had plenty to say on social media platform X. "There are dozens of factual errors in the 42 page judgment rushed out in 48 hours DURING A TIME OF CONFLICT that seeks to upend @POTUS' role as Commander In Chief and disrupt the @SecWar's full ability to conduct military operations with the partners it chooses. A disgrace," Michaels wrote.
So there you have it: The government says the judge got it wrong during a critical time. The judge says the government appears to be punishing a company for its views. And somewhere in the middle is an AI company that just got a temporary reprieve from being treated like a foreign adversary.
What happens next? More legal wrangling, more arguments about national security versus free speech, and eventually a final decision that could set important precedents about how far the government can go in restricting which companies defense contractors can work with. For now, Anthropic gets to keep doing business without the "supply chain risk" label hanging over its head—at least until the next court date.