Here's a twist on the AI investment story you might not have considered: the darker side of artificial intelligence could be creating the next defensive trade. While everyone's talking about AI chips and software, the technology's ability to power sophisticated scams is becoming a multi-billion-dollar problem—and that problem might just fuel a rally in cybersecurity funds.
Think about it this way: AI is great at creating things. It can write poems, generate images, and even mimic human voices with terrifying accuracy. Unfortunately, that last skill is exactly what fraudsters are exploiting, and the losses are scaling fast. According to a report from the Global Anti-Scam Alliance (GASA) in partnership with fraud prevention service Cifas and Tietoevry Banking, UK consumers lost an estimated £9.4 billion to scams between November 2024 and November 2025.
That's not just phishing emails from a "Nigerian prince" anymore. Deepfake technology now lets criminals clone executive voices to authorize fraudulent transfers, fabricate celebrity endorsements for fake investments, and mass-produce personalized investment pitches—all in minutes. When deception becomes this cheap and scalable, the whole concept of digital trust needs a structural rethink.
Cybersecurity ETFs: The Unexpected AI Defense Play
For investors, this creates an interesting dynamic. The surge in AI-driven impersonation attacks could create sustained, structural demand for the tools that fight them: identity verification systems, behavioral analytics, and zero-trust security infrastructure. These aren't nice-to-have features anymore; they're becoming essential plumbing for the digital economy.
And where do you find companies building that plumbing? Cybersecurity ETFs. Funds like the First Trust NASDAQ Cybersecurity ETF (CIBR), the Amplify Cybersecurity ETF (HACK), and the Global X Cybersecurity ETF (BUG) track companies involved in cloud security, network defense, and yes, AI-powered fraud detection. If businesses are forced to completely redesign their verification systems rather than just tack on another security patch, spending in this sector could shift from being discretionary to being mandatory—a much more reliable revenue stream.
Monica Eaton, Founder and CEO of Chargebacks911, says this shift is already happening. "Deepfake scams are no longer fringe experiments but are becoming an industrialized fraud channel," she said. "When criminals can clone a CEO's voice, fabricate a doctor's endorsement, or generate thousands of personalized investment pitches in minutes, traditional fraud controls cannot keep pace."