
Deepfakes declared top AI threat, biometrics and content attribution scheme proposed to detect them


Biometrics may be the best way to protect society against the threat of deepfakes, but new solutions are also being proposed by the Content Authenticity Initiative and the AI Foundation.

Deepfakes are the most serious criminal threat posed by artificial intelligence, according to a new report funded by the Dawes Centre for Future Crime at University College London (UCL), topping a list of 20 ways AI could facilitate crime over the next 15 years.

The study, published in the journal Crime Science, ranks the 20 AI-enabled crimes by the harm they could cause.

Fake audio or video content could be used for anything from discrediting a public figure to impersonating an individual to steal money from them. Such content could be difficult to detect and stop, and could also erode trust in audio and visual evidence, a social harm in its own right.

Other technologies among the most prominent threats include the use of driverless cars as weapons, the use of AI in phishing attacks, hacking of systems controlled by AI, the harvesting of online information to fuel large-scale blackmail, and fake news composed by AI.

“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime,” says Dr. Matthew Caldwell of UCL Computer Science, a report author.

Biometrics to the rescue?

Mitek Head of Strategy Joe Bloemendaal suggests in an editorial for TechRadar that advances in biometrics will allow companies to defeat professional fraud attempts and deepfakes.

Consumers are still confused about what digital identity is, according to Bloemendaal, leaving many feeling powerless to protect it. Trust could be restored by methods of control and security more advanced than passwords, security questions, and digital signatures.

Biometrics and other related advanced technologies are in a constant race against malicious actors, however, Bloemendaal writes. Fraudsters have recently begun producing digital identity credentials and ID documents that are nearly flawless. In response, researchers are testing combinations of facial and behavioral biometrics to detect even the most sophisticated deepfakes.

Initiative proposes content attribution system

The Content Authenticity Initiative has published a white paper called “Setting the Standard for Content Attribution” which proposes a system for establishing the true provenance of published materials.

The white paper is co-authored by 11 people, including representatives of Adobe, Microsoft, The New York Times, the BBC and CBC. They gathered to develop an open and extensible attribution solution that could be applied across devices, software, and publishing and media platforms.

Today, provenance is typically recorded in metadata attached to digital media, but that system is corruptible, according to the CAI. The new system is based on ‘assertions,’ which identify the source of asset creation, and ‘claims,’ which use cryptography to make the assertions verifiable and trustworthy. A creative professional would authenticate to the software used to create the content, for example, while the work of human rights activists gathering data would be verified by NGOs or media outlets based on CAI settings the activist applies.
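The assertion/claim split can be sketched in a few lines. This is a toy illustration, not the CAI’s actual data format: the function names and fields are invented, and an HMAC with a shared demo key stands in for the public-key signatures a real implementation would use, purely to keep the sketch standard-library-only.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real signing key


def make_assertion(creator: str, tool: str, asset_bytes: bytes) -> dict:
    """An 'assertion' records who created the asset and with what tool,
    bound to a hash of the asset itself."""
    return {
        "creator": creator,
        "tool": tool,
        "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
    }


def make_claim(assertion: dict) -> dict:
    """A 'claim' attaches a cryptographic signature to the assertion,
    so any later tampering with it becomes detectable."""
    payload = json.dumps(assertion, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"assertion": assertion, "signature": signature}


def verify_claim(claim: dict) -> bool:
    payload = json.dumps(claim["assertion"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])


claim = make_claim(make_assertion("alice@example.org", "PhotoEditor 1.0", b"image bytes"))
assert verify_claim(claim)           # untouched claim verifies
claim["assertion"]["creator"] = "mallory"
assert not verify_claim(claim)       # tampering breaks verification
```

The key point the sketch captures is that the assertion carries the provenance facts, while the claim is what makes them tamper-evident.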

How the individual creating content is authenticated is not specifically stipulated, but in order to support a variety of use cases, the model proposes using URI-based digital identity formats such as Decentralized Identifiers (DIDs), WebIDs, OpenID, and ORCID.
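As a rough illustration of what accepting URI-based identifiers might involve, the sketch below checks a string against a simplified form of the W3C DID syntax (`did:method:method-specific-id`). The regex is an approximation for illustration, not the specification’s full grammar.

```python
import re

# Simplified W3C DID syntax: did:<method-name>:<method-specific-id>
# (method is lowercase letters/digits; the id part here allows the
# common identifier characters, a subset of the real ABNF)
DID_PATTERN = re.compile(r"^did:[a-z0-9]+:[A-Za-z0-9._:%-]+$")


def looks_like_did(uri: str) -> bool:
    """Cheap syntactic triage before handing the URI to a real resolver."""
    return bool(DID_PATTERN.match(uri))


assert looks_like_did("did:example:123456789abcdefghi")
assert looks_like_did("did:web:example.com")
assert not looks_like_did("https://orcid.org/0000-0002-1825-0097")
```

A production system would follow such a syntax check with actual DID resolution; other identifier schemes like ORCID URLs would go through their own verification paths.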

The system is based on a minimum of novel technology, and relies on existing standards for encoding, hashing, signing, compression, and metadata.

AI Foundation raises $17M as deepfake detector reaches production

The AI Foundation has raised $17 million to develop “ethical” AI entities that can be trained to perform various tasks, VentureBeat reports.

The Foundation is a dual commercial and non-profit organization founded by former EA VP Lars Buttler and Rob Meadows to make AI widely available.

The organization has also brought Reality Defender, a tool for identifying known deepfakes and reporting suspected ones, to production. The software analyzes content with AI to detect signs of manipulation, and also looks for an “honest AI” watermark the AI Foundation promotes to content creation partners.
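The watermark-lookup half of that description can be sketched as follows. This is a minimal illustration under assumptions: the `honest_ai` metadata field is hypothetical, and the real Reality Defender pipeline relies on trained detection models, which this does not attempt to reproduce.

```python
import json

WATERMARK_FIELD = "honest_ai"  # hypothetical field name, for illustration only


def triage_content(metadata_json: str) -> str:
    """Naive triage step: content that declares an AI-generation watermark
    is labeled as such; content whose metadata is silent is routed on to
    deeper, model-based manipulation analysis."""
    meta = json.loads(metadata_json)
    if meta.get(WATERMARK_FIELD) is True:
        return "declared-synthetic"
    return "needs-analysis"


assert triage_content('{"honest_ai": true}') == "declared-synthetic"
assert triage_content('{"title": "clip"}') == "needs-analysis"
```

The sketch shows why the watermark is only a complement to detection: absence of the flag proves nothing, so unflagged content still needs analysis.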

The AI Foundation boosted its understanding of deepfake techniques with a partnership with the Visual Computing Lab at the Technical University of Munich (TUM).

Participants in the series B funding round include Mousse Partners, You & Mr. Jones, Founders Fund, Alpha Edison, and Stone. The company also raised $10.5 million in 2018.



