MarketDash

AI Engineers Draw Their 'Red Lines': Google and OpenAI Staff Push Back on Pentagon Deals

Employees at Google and OpenAI are publicly challenging their companies' military AI contracts, demanding ethical guardrails similar to those championed by Anthropic.


Here's a story about what happens when the people who build the most powerful technology in the world start asking their bosses: "What are we building this for, exactly?"

Employees at Alphabet's Google (GOOGL) and OpenAI are making some noise. They're not happy about their companies' plans to work with the Pentagon on artificial intelligence, and they're putting it in writing. Over 100 Google staffers sent a letter to management calling for the establishment of clear ethical boundaries—or "red lines"—in government contracts. They're taking a page from the playbook of AI startup Anthropic, which has been very public about drawing such lines.

The Google employees' specific worries? They don't want the company's Gemini AI used by the U.S. military for mass surveillance of American citizens or for operating autonomous weapons without a human in the loop. The letter was addressed to Jeff Dean, chief scientist of Google's AI division, Google DeepMind. Notably, Dean has previously expressed support for Anthropic's stance against using AI for surveillance of Americans.

But the Google letter isn't a solo act. In a show of cross-company solidarity, several OpenAI and Google employees also signed a separate open letter with a defiant title: "We Will Not Be Divided." That letter criticizes the Pentagon's negotiating tactics and calls on tech leaders to push back against the current demands from the Department of War.

So why now? This employee activism appears to be a direct response to a very public standoff. Anthropic CEO Dario Amodei recently refused to accept updated contract language from the Pentagon, stating the AI startup "cannot in good conscience accede" to terms that did not adequately block the model's potential use for mass surveillance or fully autonomous weapons. The employees seem to be saying, "If Anthropic can take this stand, why can't we?"

The Pentagon, for its part, is pushing back on the narrative. Pentagon spokesman Sean Parnell took to the social media platform X to clarify the military's position, stating it "has no interest in using AI to conduct mass surveillance of Americans (which is illegal)" nor for developing weapons that operate without human involvement.

This isn't an isolated incident of internal dissent at Google. Earlier this month, a group of full-time employees signed another open letter urging the company to sever ties with the Department of Homeland Security (DHS), Immigration and Customs Enforcement (ICE), and Customs and Border Protection (CBP). That letter cited a lack of transparency from leadership, including CEO Sundar Pichai, and also demanded greater disclosure, worker safety protections, and a company-wide meeting to address concerns.

It's a fascinating moment. The engineers and scientists who create these powerful AI systems are increasingly willing to use their collective voice to question not just how the technology works, but for whom it works and to what end. They're demanding a seat at the table when it comes to defining the ethical boundaries of their own creations. The question for Google and OpenAI leadership is how they'll respond: Will they listen to the people building the tools, or will they follow the money and the mandates from Washington?
