AI company’s breached biometrics, ID document images make deepfake fraud easier

Mercor, an AI company valued at $10 billion, has suffered a major data breach that appears to have exposed ID documents along with users' face and voice biometrics.
The highly valued startup supplies training data to major artificial intelligence companies including Anthropic, OpenAI and Meta. The three‑year‑old firm said the breach was tied to a wider supply chain attack on the open‑source library LiteLLM.
Mercor told Fortune it was “one of thousands” impacted after malicious code was inserted into LiteLLM, a tool widely used by developers to connect applications to AI services.
The compromise has been linked to a hacking group known as TeamPCP.
Mercor spokesperson Heidi Hagberg said the company “moved promptly” to contain the breach and has engaged third‑party forensic investigators. She added that Mercor is continuing to update customers and contractors as the investigation progresses.
Commenting on the incident, Ben Colman, CEO at Reality Defender, explained the significance. “Thanks to a recent breach, Mercor just handed bad actors the keys to creating deepfakes of countless people,” Colman said on LinkedIn.
Deepfakes and AI-generated impersonations are a surging concern for individuals and organizations alike. Fraud using deepfaked voices, such as impersonating a CEO, or deepfaked likenesses poses significant challenges. “The bad guys don’t need to build their own biometric datasets when they can simply wait for someone else to lose theirs,” Colman continued.
“Without the ability to catch these fakes, enterprises and governments are at substantial risk of reputational damage, breach of confidential data, and the theft of assets, among countless other social engineering attacks that will undoubtedly happen due to this hack in the coming weeks and months.”
Fortune reported that datasets used by Mercor’s clients — and potentially information about confidential AI projects — may have been accessed. The company has not commented on those claims. Wired reports that Meta has paused all of its work with Mercor while it investigates the breach.
Security researchers say the LiteLLM attack was designed to harvest credentials at scale. TeamPCP is believed to have recently begun collaborating with Lapsus$, an extortion‑focused hacking group known for credential theft and social engineering attacks.
Lapsus$ has claimed responsibility for targeting Mercor and has posted samples of what it says is stolen data. These include internal Slack messages, ticketing information and videos showing interactions between Mercor’s AI systems and contractors. The group claims to have obtained up to four terabytes of data, though Mercor has not confirmed the authenticity or scale of the leak.
Security analysts warn that Mercor may be one of the first high‑profile victims of a broader wave of extortion attempts stemming from the LiteLLM compromise. TeamPCP has publicly indicated plans to work with ransomware and extortion groups to target affected organizations, a strategy that mirrors previous large‑scale supply chain incidents.
In 2023, a similar supply chain attack exploiting the MOVEit file‑transfer tool led to breaches across hundreds of organizations and affected nearly one hundred million individuals. The Mercor incident throws a spotlight on the growing risks posed by attacks on widely used open‑source components in the AI ecosystem.
Article Topics
biometric data | cybersecurity | data privacy | data protection | deepfakes | Mercor