So here's California, doing what California does: stepping into a national policy debate with a big, splashy move of its own. On Monday, Governor Gavin Newsom decided it was time to put some guardrails on the AI wild west, at least for anyone who wants to do business with the state.
He signed an executive order that basically tells AI companies: if you want a contract with California, you're going to have to play by some new rules. The goal is to curb misuse—think things like creating illegal content, baking harmful bias into systems, or violating civil rights. It's a classic use-the-state's-purchasing-power move. California is a massive customer, so its rules tend to matter.
Watermarks, Please: A New Rule for Deepfakes
One of the more concrete parts of the order is about transparency. State agencies are now directed to clearly label AI-generated images and videos with watermarks. It's a direct shot at the growing problem of deepfakes and AI-powered misinformation. The idea is simple: if the state is putting something out, you should know if a machine helped make it.
California Charts Its Own Course Amid Federal Turmoil
This whole thing is happening against a pretty messy federal backdrop. Just recently, the Pentagon slapped a "supply-chain risk" label on Anthropic, the company behind the Claude AI models. That label effectively bars contractors from using Anthropic's tech in military work.
Anthropic, which is backed by Amazon.com, Inc. (AMZN) and Alphabet Inc.'s (GOOG) Google, didn't take that lying down. Last week, the company secured a preliminary injunction from U.S. District Judge Rita Lin. The court said the government's classification was likely unlawful and might have been retaliatory, putting a temporary pause on a directive linked to the previous administration.
California, however, is signaling it might not just fall in line with that court decision. The state is taking an independent stance, suggesting its own risk assessments and rules might differ from what's happening in Washington.