Biometrics and injection detection for deepfake defense a rising priority

Biometrics integrations with injection attack detection, defending the latest front in the global battle against fraud, deepfakes, form the main theme of this week’s top stories on Biometric Update. New insights into deepfakes were presented to the EAB, Persona’s fraud report pinpoints the deepfake threat and Reality Defender warns financial institutions to listen for fake audio. Pindrop’s CEO expands on the same point in Biometric Update’s new podcast, ROC launches injection attack detection and Nametag introduces its deepfake detection in India.
Meanwhile the U.S. government is apparently prioritizing spending on AI surveillance over deepfake research.
Top biometrics news of the week
Research presented by Applied Face Cognition Lab Founder and Bern University Professor Dr. Meike Ramon during an EAB Lunch Talk shows that super-recognizers aren’t much better at identifying deepfakes than the rest of us, despite their ability to match people’s faces. Another study by Ramon indicates that people process real and synthetic faces the same way, which suggests synthetic identities can help with research and model training.
Persona flags deepfakes as the threat to watch in a fraud insights report, and cites Deloitte’s prediction that deepfake detection revenues will nearly triple from 2023 to next year, reaching $15.7 billion. DC’s Attorney General has warned district residents to be wary of deepfake scam calls amid “a disturbing upward trend” in incidents, and Reality Defender argues that voice AI detection is now “a cornerstone of modern financial security.”
Deepfakes are also the topic for the first episode of Biometric Update’s new podcast, with Pindrop CEO Vijay Balasubramanian explaining how his company’s focus has shifted from the “right human” problem to the “real human” problem.
A $30 million contract awarded to Palantir this month will bring AI analysis to immigration records in an attempt to boost deportation levels toward the numbers promised by President Trump during his election campaign. ImmigrationOS will combine data from USCIS and the IRS with information from agencies not typically involved in immigration, triggering questions from the House Oversight Committee.
USCIS is asking for biometrics from immigration applicants, but not through the usual appointment process. A lack of information, including about why biometrics are being collected, has led to speculation that AI is being used to flag visa applicants.
A recent analysis by the Brookings Institution highlights the risk of authoritarian tendencies that follow the deployment of surveillance technologies without adequate oversight, and Just Futures Law reports that America’s adoption of AI surveillance technologies, like Palantir’s and those apparently used by USCIS, is expanding.
The Common Vulnerabilities and Exposures (CVE) Program run by Mitre nearly collapsed under the weight of administrative delays and funding cuts, just as IBM data reveals a third of all security breaches are due to identity attacks. America’s lack of digital identity infrastructure has already cost the country billions in fraud, and those costs are likely to increase with any deterioration of the nation’s cybersecurity infrastructure.
Funding for National Science Foundation research into misinformation, including deepfakes, has been pulled to align with a White House order on “restoring freedom of speech.” Meanwhile, deepfakes are increasingly being used to target American companies with fake instructions from board members to transfer money to fraudsters, according to a Ponemon Institute report.
X is taking the state of Minnesota to court in an attempt to overturn its law against disseminating deepfakes to influence an election, Reuters reports. The social media platform, owned by Elon Musk, a self-proclaimed free speech absolutist who has presided over the unexplained shutdown of journalists’ accounts, alleges Minnesota’s law violates the First Amendment and Section 230.
If U.S. law and the NSF are not going to help, America is going to need its private sector to provide some defense against deepfake fraud.
Software to detect biometric injection attacks – one of the main vectors for fraud attacks with deepfakes and generative AI – has been launched by ROC. The company says its Camera Injection Attack Detection is at least 95 percent effective, flagging anomalies and fraud signals in real time.
Nametag has integrated with Aadhaar through a licensed third-party partner, bringing its biometric identity verification with deepfake detection to the Indian market. The company says it is the first to bring deepfake detection to IDV with India’s national digital ID.
In another widely read item this week, Rwanda is running a tender for a multi-modal biometric authentication system to support its digital ID system for public and private-sector services. The two-year contract will extend from system development through post-implementation support. The system should support modular implementation and use open-source software or open standards to align with MOSIP or OSIA.
Readers also continued to track the friction between the UK government and its digital identity industry as the AVPA, the OSTIA and the ADVP call for limits to the Gov.uk wallet. The groups want the government’s participation in the market to be limited to public services, which should also be open to certified digital identity providers, and for mDLs to work with private sector wallets.
Please let us know about any reports, podcasts or other content we should share with the people in biometrics and digital identity through the comments below or social media.
Article Topics
biometrics | deepfake detection | deepfakes | digital ID | digital identity | fraud prevention | injection attacks | week in review