Deepfake detection firms unveil €5M funding round, anti-spoof tool and new CTO

Deepfake detection firms are growing and adapting as the threat of AI-generated synthetic and manipulated media continues to increase. IdentifAI has new funding, Keyless has a new biometric attack prevention tool, and Reality Defender has a new chief technology officer. Pindrop, meanwhile, is interviewing but not hiring – because many of the candidates are deepfakes.
New funding round for IdentifAI led by United Ventures
Italian startup IdentifAI has raised 5 million euros ($5.8 million) to scale internationally and tackle AI-generated disinformation, according to a post on LinkedIn. The funding round is led by United Ventures.
The investment will support the Cesena-based company’s expansion across key markets in Europe and the U.S., and accelerate research and development on IdentifAI’s platform.
“We’ve built a system that evolves in real time, allowing us to respond rapidly as new generative models enter the market,” says Marco Castaldo, co-founder of IdentifAI, as quoted in EU Startups.
“With AI’s exponential growth, we’re working against the clock and this new funding round couldn’t come at a better time. Our background in cybersecurity is a key differentiator, because it means we understand the inherent danger of a tech, or risks associated with potential deviant uses of it.”
Keyless spoofing prevention tool uses biometrics, device
A release from Keyless announces the launch of its Biometric Attack Prevention technology. The move is spurred by a rise in biometric spoofing; Keyless cites research from Accenture, which found a 223 percent increase in deepfake-enabling tools on dark web forums between 2023 and 2024.
“Traditional biometric systems were never designed to combat the deepfake threats we face today,” says Paolo Gasti, CTO of Keyless. “While detection technologies have evolved, deepfakes will continue to grow more sophisticated. The future of biometric security lies in prevention. By eliminating the tools fraudsters rely on, we can make these attacks unworkable.”
Keyless authentication incorporates both the user’s face and the device used for enrollment; both are needed for a successful authentication. It screens for presentation attacks and for tools commonly used to compromise devices in injection attacks, such as emulators, rooted devices and hooking techniques.
Keyless also monitors device movement and behavior, screening for unnatural patterns inconsistent with real users.
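The checks described above amount to a two-factor gate: authentication succeeds only when the face matches and the request comes from the enrolled device, and it is refused outright if the runtime environment shows signs of compromise. A minimal sketch of that logic, using hypothetical names (this is not the Keyless API, just an illustration of the pattern):

```python
# Illustrative sketch only – hypothetical types and names, not Keyless's actual API.
# Models the two-factor rule described above: a successful authentication requires
# BOTH a live face match and the device used at enrollment, and is rejected if
# injection-attack tooling (emulator, rooted device, hooking) is detected.

from dataclasses import dataclass


@dataclass
class DeviceSignals:
    device_id: str
    is_emulator: bool
    is_rooted: bool
    hooking_detected: bool


def environment_compromised(sig: DeviceSignals) -> bool:
    """Screen for tools commonly used to compromise devices in injection attacks."""
    return sig.is_emulator or sig.is_rooted or sig.hooking_detected


def authenticate(face_match: bool, sig: DeviceSignals, enrolled_device_id: str) -> bool:
    """Both factors are required: the user's face AND the enrollment device."""
    if environment_compromised(sig):
        return False  # refuse before evaluating biometrics
    return face_match and sig.device_id == enrolled_device_id
```

In this framing a stolen face alone (a deepfake presented from an attacker's device) fails the device check, and a stolen device alone fails the biometric check, which is the prevention-over-detection point Gasti makes above.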
Reality Defender names Alex Lisle as new CTO
Reality Defender continues to grow its presence in deepfake detection, and has hired Alex Lisle as chief technology officer as it looks to scale globally. A press release says Lisle will have “a crucial role in driving Reality Defender’s commitment to delivering cutting-edge deepfake detection technology to developers and enterprises.”
Lisle will be responsible for developing a roadmap to “make Reality Defender’s technology accessible to a broader audience” by expanding its reach beyond enterprise deployments.
“The stakes of digital deception grow higher every day,” says Ben Colman, co-founder and CEO of Reality Defender, calling Lisle “a builder who thinks in systems and scale, and brings the rare combination of strategic vision and hands-on execution that a mission like ours demands.”
Lisle previously led incubation engineering at SecurityScorecard, served as CTO at Hubble Technology, and advanced mobile app and IoT security solutions at Kryptowire. He holds several cybersecurity patents.
“As deepfakes evolve from curiosity to weapon, we’re not just building detection technology; we’re building the trust infrastructure for an AI-first world,” Lisle says. “I immediately aligned with Reality Defender’s mission in making complex security technology universally accessible, and wholeheartedly share the team’s excitement to expand it.”
‘Ivan’ uses same credentials, different faces to apply for job
Just how quickly are deepfakes being weaponized? Ask Pindrop, the voice fraud detection firm that discovered a deepfake scheme trying to infiltrate its hiring process. For one job posting alone, the firm received more than 800 applications in a matter of days – a conspicuous glut even in tough economic times. Deeper analysis of 300 candidate profiles found that more than 100 were entirely fabricated deepfake candidates.
The firm singled out a candidate it labeled Ivan, whose interview raised flags indicating it could be deepfake media. Ivan’s face moved in strange ways. Ivan’s voice occasionally dropped out or did not align with his lip movements. And when the interviewer asked an unexpected technical question, the Pindrop Pulse detection system identified an “unnatural” pause.
An article from Computer Weekly quotes Pindrop CEO Vijay Balasubramaniyan, who notes how the situation courted the surreal: “the deepfake candidate obviously didn’t know it, but the position ‘he’ was applying for was not just a software engineer – it was a software engineer in the deepfake detection team, which is just super meta.”
A second “Ivan” with the same credentials but a different face confirmed that Ivan was part of a deepfake hiring scheme – a deliberate, coordinated attack.
Balasubramaniyan – who appeared as the first guest on the Biometric Update Podcast, in which he discusses the hiring scheme – says that “the cool thing about Pindrop is we pull on a thread and we go deep – that’s how our products got created.” The firm’s continued digging has turned up “clearly documented proxy relays from North Korea.”
And, he says, “we’re now setting up honeypots to interview them.”
Balasubramaniyan also offers some refreshing criticism of AI developers. “They’re developing these things willy-nilly without any concern for safety,” he says, “and that has to change.”
Article Topics
AI fraud | biometrics | deepfake detection | deepfakes | IdentifAI | Keyless | Pindrop | Reality Defender