So here's a privacy headache that's becoming all too common in the AI world: another chatbot company is getting sued for allegedly sharing users' private conversations with tech giants. This time it's Perplexity AI in the hot seat, facing a class action lawsuit that claims the company was sending users' personal information—including sensitive financial details—to Meta Platforms Inc. (META) and Alphabet Inc.'s (GOOG) Google.
The lawsuit, filed in California federal court yesterday, tells a story that's becoming familiar in the age of AI assistants. A Utah man identified as John Doe says he shared personal information about his taxes, investments, and family finances with Perplexity's AI chatbot, believing those conversations were private. According to the complaint, that trust was misplaced—the company allegedly integrated "undetectable" tracking software that automatically sent users' conversations to Meta, Google, and other third parties.
Now, before we get too deep into the allegations, there's an important caveat here. Perplexity's chief communications officer Jesse Dwyer told MarketDash: "We have not been served any lawsuit that matches this description, so we are unable to verify its existence or claims." So we're dealing with allegations at this point, not proven facts. But the lawsuit itself makes some pretty specific claims about how this supposedly worked.
What's interesting here is that the lawsuit doesn't just go after Perplexity. It also accuses Meta and Google of violating state and federal computer privacy and fraud laws. The implication seems to be that if Perplexity was sending data to these companies, they shouldn't have been accepting it. It's a bit like saying: if someone hands you stolen goods, you're still in trouble for taking them.
This isn't Perplexity's first legal rodeo, either. Just last month, a federal judge in San Francisco issued a preliminary injunction preventing the company from using its Comet browser's AI agent to enter password-protected areas of Amazon's website and make purchases for users. That lawsuit alleged the company deliberately disguised its AI agent as a regular Google Chrome browser session and wasn't transparent about what it was doing.
And that's not even the only Amazon-related legal trouble for Perplexity. The e-commerce giant has another lawsuit against the company over its "Buy with Pro" e-commerce feature, which Amazon alleges scraped product listings without authorization. So we're looking at a pattern here—multiple legal challenges around how Perplexity interacts with other companies' systems and user data.
But here's the thing: Perplexity isn't alone in facing these kinds of legal headaches. The AI industry as a whole is bumping up against privacy and intellectual property boundaries in ways that are making lawyers very busy.
Take Grammarly, for example. In a class action filed in New York, Julia Angwin—a contributing opinion editor at The New York Times—alleged that Grammarly's AI tool Expert Review used her name and others' without prior consent. Then there's Anthropic, the company behind the Claude chatbot, which is facing a lawsuit from music rights management company BMG. According to reports, BMG alleges Anthropic used lyrics from major artists to train its chatbot without proper authorization.
And in what might be the most disturbing case of the bunch, three Tennessee teenagers filed a federal class-action lawsuit against Elon Musk's xAI, claiming its AI chatbot Grok created and spread sexualized images of them without consent. That's a whole different level of privacy violation, and it shows just how serious these issues can get.
What all these cases have in common is a fundamental question about AI companies' responsibilities when it comes to user data and third-party content. When you tell an AI chatbot about your personal finances, who else is listening? When an AI company trains its models on existing content, what rights do the original creators have? These aren't just technical questions—they're legal ones that courts are now being asked to answer.
The Perplexity case is particularly interesting because it involves not just privacy violations but also alleged deception. The plaintiff claims he believed his conversations were private when they apparently weren't. That's the kind of thing that tends to get judges' attention—the idea that users were misled about how their data would be used.
As these cases work their way through the courts, they're likely to set important precedents for how AI companies handle user data and interact with other platforms. For now, the message to users might be: be careful what you tell your AI assistant, because you might not be the only one hearing it.