MarketDash

The Pentagon's AI Dilemma: Using Banned Tech Hours After Trump's Order

U.S. military reportedly used Anthropic's Claude AI in a major Iran strike just hours after the president ordered federal agencies to stop using it, highlighting a messy clash between tech policy and battlefield needs.


Here's a classic government technology story: the right hand bans something while the left hand is actively using it. According to reports, U.S. Central Command used Anthropic's Claude AI during a major air operation against Iran. The twist? This happened just hours after President Trump himself ordered federal agencies to stop using the company's technology.

Think about the timeline for a second. A ban is announced. A major military operation is launched. And the supposedly banned technology is reportedly right there in the command center, helping with intelligence assessments, identifying targets, and simulating battle scenarios. It's the kind of bureaucratic whiplash that makes you wonder who's really in charge of the tech stack.

Claude's Deep Military Roots

The Wall Street Journal, citing sources, reported that Claude wasn't just a last-minute addition. It's apparently deeply embedded across military operations. This wasn't its first rodeo either; the AI tool had been used previously in high-profile missions, including the operation to capture Venezuelan President Nicolas Maduro.

This creates an awkward situation. The Trump administration and Anthropic had been at odds for months over the Pentagon's use of its AI models. The Defense Department had labeled Anthropic a security threat and a supply chain risk. Hence the order to cut ties. But in the world of combat operations, when you have a tool that works, you don't just throw it away because of a new memo—especially when lives and missions are on the line.


Enter OpenAI, Stage Right

While one AI company was being shown the door, another was walking right in. Just hours after the ban on Anthropic was declared, OpenAI announced a deal to deploy its AI tools in the Pentagon's classified systems. It's a perfect illustration of how fast the competitive landscape shifts in Washington.

The whole feud reportedly stems from Anthropic's refusal during contract negotiations to allow the Pentagon unrestricted use of its AI. This was compounded by the company's lobbying against the administration's broader AI policy. At bottom, it's a fight about control: who gets to decide how a powerful AI system is used in life-and-death situations?

This tension isn't isolated to one company. It reflects a much broader unease in the tech industry about military contracts. More than 100 employees from Alphabet Inc. (GOOG) and OpenAI had previously jointly demanded "red lines" in Pentagon AI contracts. They're essentially asking for rules of engagement before the algorithms ever see a battlefield.

So what you have is a messy Venn diagram: a military that relies on cutting-edge AI, a tech industry wrestling with the ethics of its own creations, and a White House trying to set policy while operations are already underway. The report about Claude being used hours after the ban isn't just a fun fact—it's a snapshot of that entire conflict playing out in real time.
