Here's a thought experiment for Silicon Valley: What happens if you simultaneously try to automate a huge chunk of the professional workforce and tell the U.S. military you won't work with them? According to Palantir Technologies Inc. (PLTR) CEO Alex Karp, you get your technology seized by the government.
Karp delivered this blunt assessment at a recent industry summit. He didn't sugarcoat it. "If Silicon Valley believes we're going to take everyone's white collar jobs AND screw the military…If you don't think that's going to lead to the nationalization of our technology — you're retarded," Karp said.
It's a stark warning that cuts to the heart of the tech industry's current identity crisis. On one side, you have the relentless commercial push for AI that promises to reshape or replace human roles. On the other, you have a complicated, often fraught, relationship with the U.S. defense and intelligence apparatus that funds a lot of early-stage innovation.
Karp's own company, Palantir, has built its business largely on government contracts, so his perspective comes from being deep in that world. His CTO, Shyam Sankar, has pushed a more optimistic view on the jobs front, arguing on a podcast last year that AI gives workers "superpowers" rather than just eliminating their jobs. But Karp's summit comments suggest a broader, more political concern about the industry's trajectory.
He's not the only CEO sounding alarms. Back in January, Anthropic CEO Dario Amodei published a lengthy essay arguing that the risks from AI, including to the labor market, are not being taken seriously enough. He warned of a coming labor market "shock" that could be different from past technological disruptions.
"New technologies often bring labor market shocks, and in the past, humans have always recovered from them, but I am concerned that this is because these previous shocks affected only a small fraction of the full possible range of human abilities, leaving room for humans to expand to new tasks," Amodei wrote.
This theoretical debate has very real, very immediate consequences in Washington. The tension Karp described is already playing out. In late February, the White House ordered federal agencies to phase out Anthropic's technology after the company refused to drop contractual limits on how its AI could be used for mass surveillance and in autonomous weapons systems. The Defense Secretary labeled Anthropic a supply-chain risk to national security.
So, what did the Pentagon do? It pivoted. OpenAI CEO Sam Altman announced recently that his company had shifted to working on classified Pentagon projects, calling the move urgent. It's a clear example of the dynamic Karp is talking about: a tech company steps back from certain military work on ethical grounds, and the government simply finds another vendor that's willing to step in. In Karp's view, if this pattern continues alongside massive white-collar job displacement, the government's response might not just be to find a new vendor—it might be to take control.
Karp's warning is ultimately about power and perception. If the tech industry is seen as both an economic disruptor that threatens the jobs of the politically influential professional class and a national security liability that refuses to assist the military, it paints a target on its own back. The argument is that this combination could justify extreme government intervention. It's a reminder that in the high-stakes game of tech and geopolitics, business decisions are never just business decisions.