Here's a tense courtroom drama playing out in San Francisco: a federal judge has given the artificial intelligence company Anthropic until 6 p.m. Pacific time today to prove that various arms of the U.S. government have actually stopped using its technology. This isn't about a contract dispute over missed deadlines—it's about the Department of Defense labeling the company a national security risk and blacklisting its Claude AI model.
Judge Rita Lin, presiding over the case, noted during a virtual hearing that agencies like the Office of Personnel Management and the Nuclear Regulatory Commission have already terminated their use of Anthropic's tech. Now, she wants a formal declaration from the company stating that's the case. The DOD's lawyers, for their part, have until 6 p.m. Wednesday to provide any counter-evidence. It's a procedural step, but one that underscores how quickly this legal fight is moving.
The backdrop is Anthropic's request for a preliminary injunction. The company wants the court to temporarily halt the DOD's blacklisting decision, as well as a related order from the Trump administration prohibiting federal agencies from using its technology. Anthropic's argument is straightforward: this designation is causing "severe, immediate and irreparable financial and reputational harm." In other words, being labeled a national security risk is very bad for business, especially when your customers include the government.
On the other side, the DOD isn't backing down. Its core argument is that Anthropic poses a legitimate national security supply chain risk. The agency escalated this point in a court filing last week, highlighting what it sees as a new red flag: Anthropic's hiring of foreign personnel, including workers from China. The filing states the company employs "a large number of foreign nationals to build and support its LLM products, including many from the People's Republic of China (PRC), which increases the degree of adversarial risk should those employees comply with the PRC's National Intelligence Law." It's the classic tech talent dilemma—the need for global expertise versus potential security concerns—writ large in a federal lawsuit.
This isn't just a two-party fight. More than 30 employees from Google DeepMind and Microsoft-backed OpenAI have filed an amicus brief supporting Anthropic's lawsuit. It's a notable show of solidarity from within the AI industry, suggesting this case is being watched as a bellwether for how the government interacts with, and potentially restricts, private AI firms.
The DOD's attorneys are pushing back hard on the injunction request. They've asked the court to deny it, arguing that "a private company should not be allowed to make any decisions regarding how military missions are conducted." Eric Hamilton, an attorney representing the DOD, put it bluntly: "Anthropic revealed itself to be an untrustworthy and unreliable partner in recent negotiations." He also urged that if the court does grant Anthropic injunctive relief, it should pause that order pending an appeal, with a stay of at least seven days. Hamilton emphasized that any court order should make clear the government is not obligated to continue using Anthropic's services.
So what exactly is Anthropic asking for? The company wants to return to the "status quo" of the morning of February 27. That date is key: it marks the deadline at the heart of this lawsuit, when Anthropic was required to remove safety restrictions on Claude for potential use by the DOD in areas like autonomous weapons and domestic surveillance. The company's attorney argued in court: "There's a range of lawful actions that the defendants could have taken. What they can't do is engage in unconstitutional retaliation for our protected speech. They can't impose an immediate prospective debarment of Anthropic for all future government contracting that is not supported by any lawful executive authority." The core legal question, then, is whether the government's actions are a legitimate security measure or an unlawful punishment.
For now, Judge Lin has taken the matter under submission and expects to issue a ruling on the injunction request in the next few days. Anthropic declined to comment, while the DOD stated it does not comment on pending litigation. The clock is ticking on today's deadline, and the outcome could set a significant precedent for how national security concerns are balanced against the operations of private AI companies seeking government business.