Here's a fun thought experiment: imagine an AI so smart it can find security holes in banking software that humans have missed for years. Now imagine that same tool in the wrong hands. That's essentially what U.S. officials just told major banks in a closed-door meeting in Washington, according to reports.
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell sat down with executives from Bank of America (BAC), Citigroup (C), and Wells Fargo (WFC) earlier this week. The topic? A powerful new artificial intelligence system from Anthropic called Claude Mythos Preview that could, in theory, expose critical cybersecurity weaknesses faster than traditional security methods ever could.
Think about it this way: banks spend billions on cybersecurity, patching known vulnerabilities and hunting for new ones. But what if an AI could scan millions of lines of code and spot flaws that even the best human experts overlook? That's the promise—and the peril—officials are worried about. They cautioned that such a capability could create golden opportunities for malicious actors if the tool falls into the wrong hands, putting sensitive financial data at greater risk.
Anthropic, to its credit, seems aware of the double-edged sword. The company has acknowledged these risks and limited access to the model through a restricted initiative dubbed "Project Glasswing," which involves around 40 organizations. One of them is JPMorgan Chase (JPM), which is testing the AI for defensive cybersecurity applications. JPMorgan CEO Jamie Dimon didn't attend the meeting due to prior commitments, but the bank is reportedly involved in early-stage testing to see how AI can bolster its defenses.
Officials stressed the urgency of addressing these AI-related threats across the financial system. White House economic adviser Kevin A. Hassett put it bluntly: "We're taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out."
But here's where it gets even more complicated: the U.S. government and Anthropic are currently locked in a legal dispute. The Defense Department has labeled the company a "supply chain risk," following disagreements over restrictions on military use of AI technology. So on one hand, regulators are warning banks about the risks of this AI; on the other, there's a broader tug-of-war between innovation and national security playing out in the background.
It's a classic tech dilemma: the same tool that could help banks defend themselves might also teach hackers how to attack more effectively. And with financial systems on the line, regulators aren't taking any chances. The message to banks is clear: get ready, because the AI era is bringing new kinds of risks to your doorstep—and they're moving fast.