MarketDash

VC Vinod Khosla Breaks With Anthropic Over AI Weapons Stance: 'Putin Won't Fight Fair'

Venture capitalist Vinod Khosla publicly criticized AI safety startup Anthropic for refusing to develop autonomous weapons, as the company faces a Pentagon blacklist while rival OpenAI secures a defense deal.


Here's a Silicon Valley drama that reads like a tech thriller: venture capitalist Vinod Khosla just publicly broke with one of his portfolio companies over autonomous weapons. The company? AI safety darling Anthropic. The issue? Whether we should build killer robots.

"Putin won't fight fair so we should have autonomous AI weapons for sure," Khosla posted on X on Friday. He added that while he admires Anthropic for sticking to its principles, he just doesn't agree with the principle itself.

Think about that for a second. A prominent VC is essentially saying, "I respect your moral stance, but I think you're morally wrong about not wanting to build autonomous killing machines." It's the kind of disagreement that doesn't happen every day in tech circles.

The timing here is everything. Khosla's comments came right after reports that Sam Altman had told OpenAI employees a deal with the U.S. Department of Defense was in the works. And then, on the same day, Defense Secretary Pete Hegseth formally blacklisted Anthropic as a "supply chain risk."

So let's connect the dots: Anthropic walks away from a Pentagon deal over ethical concerns, gets blacklisted, and then its main competitor swoops in to take the contract. OpenAI reportedly agreed to embed engineers on-site with human oversight over use of force—terms Anthropic couldn't accept.

Why couldn't Anthropic accept them? CEO Dario Amodei said the company "cannot in good conscience" agree to Pentagon terms that failed to block its Claude AI from being used for mass surveillance of Americans or fully autonomous weapons. That principled stand prompted Under Secretary Emil Michael to publicly call Amodei a "liar."

Here's where it gets even more awkward. Just a day before OpenAI took the deal, employees from OpenAI and Alphabet (GOOGL) jointly signed an open letter titled "We Will Not Be Divided," demanding leadership resist the Department of Defense's terms—the exact same terms OpenAI just accepted.

So now we have: one AI company taking government money while its employees protested against it, another company getting blacklisted for refusing that same money, and a prominent investor publicly siding with the government over his own portfolio company. All over the question of whether AI should be allowed to make life-and-death decisions without human intervention.

Khosla's argument boils down to realpolitik: if our adversaries are going to develop this technology, we need it too. Anthropic's position is more about drawing ethical lines in the sand. And caught in the middle? The Pentagon, which now has one AI company on its team and another on its blacklist.

What's fascinating here isn't just the disagreement about autonomous weapons—it's watching how Silicon Valley's relationship with government contracts is evolving in real time. Some companies will take the money with oversight conditions. Others won't touch it with a ten-foot pole. And the investors? They're picking sides in what's becoming one of tech's most divisive debates.
