The Pentagon Picks Elon Musk's Grok AI, But Not Everyone's Convinced It's Safe

MarketDash
Federal agencies are raising red flags about the safety of xAI's tools, even as the Department of Defense selects its Grok chatbot for classified work.

So, here's a fun government procurement story. Elon Musk's artificial intelligence startup, xAI, is getting some serious side-eye from various federal agencies. The concern? Whether its AI tools are actually safe and reliable enough to use. This has sparked a whole internal debate about which AI models the U.S. government should be deploying, and it's getting political.

According to a recent report, officials from multiple agencies have been raising these safety flags for months. The plot thickens when you see what happened next: the Pentagon went ahead and picked xAI's chatbot, Grok, for use in classified settings anyway.

Why the mixed signals? The debate over which AI to use isn't just about technical specs. Some senior U.S. officials reportedly view Anthropic—a major AI rival backed by Amazon.com, Inc. (AMZN)—with suspicion. They see its safety-focused stances and its ties to major Democratic donors as potentially making the company too "woke" to be a reliable government provider.

Enter Grok. The Pentagon reportedly chose it precisely because of its looser controls and Musk's outspoken stance on free speech. Of course, that very looseness is what has other officials worried about potential risks. You can't make this stuff up.

Musk himself has been fanning the flames of this industry rivalry. Just this week, he escalated a feud after Anthropic accused Chinese firms like DeepSeek of copying its Claude model. Musk fired back, claiming, "Anthropic is guilty of stealing training data at a massive scale and has had to pay multi-billion-dollar settlements for their theft. This is just a fact." It's a messy, high-stakes world out there in AI land.

All this scrutiny is hitting xAI at a moment of internal change. Toby Pohlen, a co-founder of the company, recently announced his departure. Leadership shakeups are never trivial, and this one could have implications for xAI's direction. The company has already been on a wild ride, including a massive trillion-dollar merger with SpaceX that shook up the tech industry.

So, to recap: the government is worried an AI tool might not be safe, but is using it for classified work anyway because the alternative might be too politically aligned. Meanwhile, the company behind the tool is losing a founder and its CEO is in a public spat with a competitor. Just another day in the fascinating, and slightly bewildering, world of AI and federal contracts.