Here's a story about what happens when the people building the robots start worrying about what the robots might do. Over the weekend, Caitlin Kalinowski—the leader of OpenAI's robotics division—said she quit. Her reason? She thinks the company didn't spend enough time talking about whether AI should be used to spy on Americans without a warrant, or to let weapons make lethal decisions without a human in the loop.
That's a pretty specific set of concerns. And the timing is striking, because her exit comes just as OpenAI is getting deeper into classified work with the Pentagon. The company has a new arrangement that, according to CEO Sam Altman, includes two big limits: no domestic mass surveillance, and a requirement that humans stay in control of any use of force. So on paper, the guardrails Kalinowski is worried about are already there. But for her, it seems, the conversation around them wasn't.
In a post on X, she put it plainly: "I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn't an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I'm proud of what we built together."
So it's not that she's against AI in national security; she says it has an important role. It's that she thinks those two lines deserved more careful debate before they were drawn. And she's framing this as a matter of principle, not a personal beef with Altman or the team.
Why This Resignation Matters
Kalinowski's exit isn't just one person's career move. It touches on the exact fault lines that are shaping how top AI labs deal with the U.S. national security world: surveillance at home, and autonomy in weapons. She's saying those issues didn't get the weight she expected.
Meanwhile, Altman has been talking about how OpenAI's stance has shifted. The company used to avoid classified work; now it's taking it on with the Department of War. He calls the shift urgent and the new work more complex than what came before. He also mentioned that OpenAI previously passed on classified opportunities that rival lab Anthropic accepted.
Under the Pentagon deal, OpenAI kept those two guardrails (no domestic mass surveillance, human control over any use of force) while adding operational measures such as putting engineers on-site to watch how the models behave. Altman says the company will build technical constraints to keep its systems within expected limits, and that the Department of War wants those protections too.