Deepfake detection partnerships span AI, academia, C-suite and celebrity content

The deepfake threat continues to spur partnerships, as providers aim to refine their technology in the face of increasingly sophisticated synthetic media, AI-generated audio and likeness theft.
Reality Defender to train deepfake detection AI on speech engine dataset
Reality Defender has announced a strategic data partnership with AI voice generation platform PlayAI, under which it will use data generated by PlayAI’s voice models to improve the accuracy and resilience of its deepfake detection tools.
A release says the collaboration demonstrates PlayAI’s “commitment to the ethical and responsible use of AI, and the importance of maintaining trust and accountability within the digital landscape.”
“At PlayAI, we believe generative AI is a fundamental advancement for humanity, but requires new technologies to maintain alignment,” says CEO Mahmoud Felfel. “We’re doubling down on this creed by partnering with Reality Defender to add another layer of deepfake protection for our users. We share the same drive for a future where AI and trust go hand in hand.”
The firm presumably feels the need to specify that it is not among those causing the deepfake problem Reality Defender aims to solve. However, its stated offering – “generate AI voices as real as humans. Deploy everywhere – to web, to phone, to apps, and beyond” – certainly sounds like the kind of cheap, easy speech engine technology deepfake warriors warn about.
The AI industry has begun to exhibit a trend in this direction: develop a technology powerful enough to pose a threat, then pivot to building the tools proposed to counter that threat.
Regardless, the partnership is a win for Reality Defender, which gets to train its patented deepfake detection algorithm on PlayAI’s library of voice audio created with generative AI – fulfilling the proverbial requirement to know one’s enemy.
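In broad strokes, the training value of such a library is straightforward: detection models learn to separate genuine recordings from generated ones, so a steady supply of generated audio grows the “fake” side of the training set. Reality Defender’s actual models and features are proprietary; the sketch below only illustrates that general approach with a deliberately simple MFCC-plus-logistic-regression classifier, and the directory names are hypothetical placeholders.

```python
# Illustrative sketch only: train a binary classifier to separate real speech from
# generated speech. Reality Defender's real pipeline is proprietary; the folder
# names below are hypothetical stand-ins for a real/generated audio corpus.
from pathlib import Path

import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split


def clip_features(path: Path, sr: int = 16000) -> np.ndarray:
    """Summarize a clip as the mean and std of its MFCCs (a deliberately simple feature set)."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def load_dataset(real_dir: str, generated_dir: str):
    """Label human recordings 0 and generated clips 1."""
    X, y = [], []
    for label, folder in ((0, real_dir), (1, generated_dir)):
        for wav in Path(folder).glob("*.wav"):
            X.append(clip_features(wav))
            y.append(label)
    return np.array(X), np.array(y)


# "real_speech/" holds human recordings; "generated_speech/" stands in for the kind
# of generative-AI voice library the partnership provides (both paths hypothetical).
X, y = load_dataset("real_speech", "generated_speech")
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```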
Reality Defender CEO Ben Colman says the collaboration with PlayAI “strengthens our ability to combat the rising challenge of fraud driven by synthetic voice impersonation.” CTO Ali Shahriyari agrees: “PlayAI’s expertise and position as a leader in generative AI-empowered voice enables our team to deliver advanced detection capabilities that will stay several steps ahead of malicious actors and fraudsters.”
KOBIL integration optimizes real-time deepfake detection
KOBIL is collaborating with the Technical University of Darmstadt to improve its deepfake voice detection capabilities.
A release from the digital identity and mobile security firm says the System Security Lab at TU Darmstadt has developed a tool called VoiceRadar, which will be integrated into KOBIL’s Secure SuperApp platform to optimize identity verification and detect deepfake voices in real time. A customized version of the tool will also be available to prospective founders through the KOBIL Venture Studio in Silicon Valley.
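VoiceRadar’s interface has not been published, so the sketch below uses a hypothetical `score_window` callable to stand in for any detector that returns the probability that a short audio window is synthetic. It only illustrates the real-time pattern implied by the integration: score audio in small windows as it arrives during a verification session, and flag the session once several consecutive windows look suspicious, so a single noisy window does not trigger a false alarm.

```python
# Illustrative streaming pattern only; VoiceRadar's actual API is not public.
# `score_window` is a hypothetical stand-in for any model that maps an audio
# window to a 0-1 probability that the voice is synthetic.
from collections import deque
from typing import Callable, Iterable

import numpy as np


def monitor_stream(
    windows: Iterable[np.ndarray],                 # e.g. 1-second chunks of 16 kHz audio
    score_window: Callable[[np.ndarray], float],
    threshold: float = 0.8,                        # per-window "synthetic" probability
    consecutive: int = 3,                          # hits required before flagging
) -> bool:
    """Return True (deepfake suspected) once `consecutive` windows in a row exceed the threshold."""
    recent = deque(maxlen=consecutive)
    for window in windows:
        recent.append(score_window(window) >= threshold)
        if len(recent) == consecutive and all(recent):
            return True
    return False


# Usage with a dummy scorer standing in for the real detector:
dummy_scorer = lambda w: float(np.abs(w).mean())
flagged = monitor_stream((np.random.randn(16000) * 0.05 for _ in range(10)), dummy_scorer)
print("session flagged:", flagged)
```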
Notes Ismet Koyun, CEO of KOBIL Group, “We live in a time where you can no longer trust everything you see or hear – even if it appears real. In the wrong hands, deepfake technologies can cause immense economic, human and social damage.”
Doppel partners with GetReal Security to monitor malicious content
Doppel and GetReal Security are partnering to address the deepfake threat, specifically to executives. Doppel’s business is providing algorithmic defense against social engineering, while GetReal Security concentrates on “malicious AI-generated content,” namely deepfakes.
According to a release from the firms, deepfakes and synthetic media have been ranked by the World Economic Forum as the top global technology risk for the second year in a row in 2025. Their integrated product will “proactively monitor for deepfakes online, conduct multi-layer analysis to verify the authenticity of digital content, and deliver explainable, evidence-backed results powered by ensemble machine learning, statistical analysis, and advanced forensics.”
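The release does not detail how those layers are combined, so the sketch below shows only the generic “ensemble plus evidence” pattern it describes: run several independent detectors, aggregate their scores into a verdict, and keep each detector’s score as the supporting evidence an analyst can inspect. The detector names and threshold are hypothetical, not the companies’ actual implementation.

```python
# Generic "ensemble with evidence" pattern; not the companies' actual implementation.
# Each detector is any callable returning a 0-1 score that the content is synthetic.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Verdict:
    is_suspect: bool
    aggregate_score: float
    evidence: Dict[str, float]   # per-detector scores retained for explainability


def assess(content: bytes,
           detectors: Dict[str, Callable[[bytes], float]],
           threshold: float = 0.6) -> Verdict:
    """Run every detector, average the scores, and keep each score as evidence."""
    evidence = {name: detect(content) for name, detect in detectors.items()}
    aggregate = sum(evidence.values()) / len(evidence)
    return Verdict(is_suspect=aggregate >= threshold, aggregate_score=aggregate, evidence=evidence)


# Hypothetical usage:
# verdict = assess(media_bytes, {"visual_forensics": f1, "frequency_stats": f2, "ml_classifier": f3})
```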
“AI has rapidly changed the threat landscape, with deepfakes in particular becoming a huge threat to businesses and their executives,” says Kevin Tian, CEO of Doppel.
Matt Moynahan, CEO of GetReal Security, agrees. “The integrity of digital content is under siege as barriers to creating and distributing synthetic, deceptive media all but disappear,” he says. “This partnership gives enterprises an upper hand – enabling them to spot suspicious content early and respond decisively when it matters most with precise assessment insights into authenticity and supporting forensic evidence.”
Loti AI closes Series A funding with $16.2M investment
Loti AI, a provider of “likeness protection technology,” has closed its $16.2 million Series A funding round, led by Khosla Ventures with additional investments from FUSE, Bling Capital, and Ensemble, according to a release.
This builds on Loti AI’s October 2024 seed round of $6.65 million, which included FUSE, Bling Capital, Khosla Ventures, Ensemble, Alpha Edison and K5 Tokyo Black. Per the release, the funding will drive product development, market expansion, and the scaling of Loti AI’s systems to enhance protection for public figures, brands, and individuals.
“At Loti AI, we are committed to putting people at the center of AI,” says Luke Arrigoni, CEO of Loti AI. “Our vision is a future where personal autonomy and technological progress exist in harmony – fostering creativity and benefiting society. Partnering with Khosla Ventures to solve likeness and licensing challenges online is an important step in prioritizing human agency, and we look forward to continuing this journey together.”
Social media impersonations, deepfakes, voice simulations, and leaks are new threats to high-profile individuals such as celebrities, artists, athletes and politicians. But likeness appropriation is becoming a problem for everyone, as evidenced by Loti AI’s free service that lets individuals monitor and remove unauthorized content.
“Our thesis around Loti is simple,” says Jon Chu of Khosla Ventures. “Generative AI enables new deepfake technology that creates new risks and challenges around fraud and trust – challenges that celebrities, influencers, and brands are not prepared for today. And Loti has world class technology paired with a category leading product that has proven itself.”