Deepfake defense tools integrated by authID and Prove, launched by Evo Tech

authID has formed a strategic relationship with Prove to ensure the integrity of digital identities against the threat of deepfakes.
Prove is integrating authID’s biometric technologies into its platform to block synthetic identities and impersonations during identity proofing and ID verification processes. authID’s selfie biometric identity verification product Proof, its biometric authentication product Verified, and its recently launched tokenization capability PrivacyKey are all being added to the Prove platform to protect enterprise customers.
Prove CEO Roger Desai says the integration of PrivacyKey adds the needed trust layer “without adding friction.”
“Partnering with Prove, a company that powers identity verification for many of the world’s most trusted financial institutions, is a tremendous validation of our technology and strategic direction,” says Rhon Daguro, CEO of authID, in the announcement. “This partnership is about more than just technology integration, it’s about setting a new standard for secure, privacy-preserving identity verification worldwide.”
Startup Evo Tech has joined the deepfake detection market with the launch of its modular Evolution 1.0 platform for intelligence, law enforcement and investigative organizations.
The platform runs separate AI agents dedicated to evaluating still images, videos, audio data and text to identify forgeries. They use techniques like neural network modeling, facial symmetry analysis, lip-sync timing, acoustic resonance detection and handwriting pattern matching to evaluate content, according to the company announcement. Each agent generates a Reliability Score to inform decision-making.
The platform is also customizable, with real-time or scheduled processing, confidence threshold adjustments and manual overrides.
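The per-agent scoring and threshold logic described above could be sketched roughly as follows. This is a hypothetical illustration only: Evo Tech has not published its scoring internals, and the names (`AgentResult`, `assess`, `reliability`) are invented for the example, not the company's actual API.

```python
from dataclasses import dataclass

@dataclass
class AgentResult:
    modality: str       # e.g. "image", "video", "audio", "text"
    reliability: float  # 0.0 (likely forged) .. 1.0 (likely authentic)

def assess(results, threshold=0.7, override=None):
    """Flag content as suspect if any agent's reliability score falls
    below the configurable threshold. `override` models a manual
    analyst decision that takes precedence over the automated verdict."""
    if override is not None:
        return override
    suspect = [r.modality for r in results if r.reliability < threshold]
    return "forged" if suspect else "authentic"

# A low audio score drags the overall verdict to "forged".
results = [AgentResult("image", 0.92), AgentResult("audio", 0.41)]
print(assess(results))  # prints "forged"
```

The any-agent-fails design reflects the idea that a convincing deepfake need only slip up in one modality (say, lip-sync timing) to be caught, while the threshold and override parameters mirror the customization options the announcement describes.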
People can’t tell
False confidence is emerging as a potential challenge in the struggle against deepfakes, as a troubling percentage of people incorrectly believe that they can detect such sophisticated digital trickery. Defensive technologies and laws seem more likely to be effective.
In Singapore, more than three in four people are confident they can detect deepfakes. But when that confidence was put to the test, the proportion who actually could was the inverse: one in four, according to a survey from the Cyber Security Agency of Singapore (CSA) reported by The Straits Times. The gap between people’s actual ability to see deepfakes for what they are and their self-perceived ability threatens to create a kind of “canny valley” that fraud victims can easily fall into.
Denmark is addressing the problem by advancing legislation to give people copyright over their likeness, including their facial features and voice, euronews reports.
Giving individuals control over their likeness as a defense against deepfakes was suggested in an article published earlier this year by the Ada Lovelace Institute.
The legislation is the latest in a series of moves on the continent to make deepfake creation punishable under the law. The EU AI Act declares deepfakes a “limited risk” technology, and imposes some transparency requirements, while France and the UK have passed laws criminalizing certain non-consensual deepfakes.