Clearview developing spoof detection for AI-generated, manipulated faces

Clearview AI is in the process of developing software to detect faces generated or manipulated with AI, as the company targets federal government customers with its facial recognition service.

Co-CEO Hal Lambert tells FedScoop that the plan is to roll out a detection function to customers by the end of the year that can flag images as possibly AI-generated. He also claimed that deepfakes and AI-manipulated images have so far not been a major problem for Clearview.

Detecting the outputs of generative AI has not historically been a major barrier to matching people against images scraped from the internet, or to investigations of child sexual abuse material (CSAM). Last year, however, Australian police lobbying for permission to use facial recognition raised the challenge posed by manipulated images, which allow offenders to create CSAM more quickly. Clearview has pivoted toward serving U.S. federal agencies since it installed the politically connected Lambert. CSAM investigation was one of two stated use cases when Clearview signed a $9.2 million facial recognition contract with ICE earlier in September.

Deepfake protection against fraud often takes the form of injection attack detection (IAD), which identifies the mechanism used to deliver the fake. Other approaches detect anomalies that indicate the data was created by generative AI.
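As an illustration of the anomaly-detection approach, one well-known family of techniques looks for statistical artifacts in an image's frequency spectrum, since generative models often distribute energy across frequencies differently than camera sensors do. The sketch below is a hypothetical, minimal example of that idea (it is not Clearview's method, which has not been disclosed): it computes the fraction of an image's spectral power lying outside a low-frequency band, a crude feature that a real detector might feed into a trained classifier.

```python
import numpy as np

def high_freq_energy_ratio(img, cutoff=0.25):
    """Fraction of spectral power outside a low-frequency disc.

    img: 2D grayscale array. cutoff: radius of the "low-frequency"
    region, in normalized frequency units (0.5 = Nyquist).
    """
    # Shift the 2D FFT so the zero-frequency (DC) bin sits at the center.
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spectrum) ** 2

    # Normalized distance of each frequency bin from the center.
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.sqrt(((yy - h // 2) / h) ** 2 + ((xx - w // 2) / w) ** 2)

    low_band_power = power[r <= cutoff].sum()
    return 1.0 - low_band_power / power.sum()

# Toy comparison: a smooth gradient (camera-like, low-frequency heavy)
# versus uniform noise (energy spread across all frequencies).
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noise = np.random.default_rng(0).random((64, 64))

ratio_smooth = high_freq_energy_ratio(smooth)
ratio_noise = high_freq_energy_ratio(noise)
```

In practice a detector would compute many such features (or learn them end to end) and calibrate a decision threshold on labeled real and generated faces; a single hand-picked ratio like this is only a toy.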

Deepfake detection, protection in demand

The market opportunity is large: face deepfake detection is forecast to generate $2.52 billion in revenue by 2027, according to the 2025 Deepfake Detection Market Report & Buyer’s Guide from Biometric Update and Goode Intelligence.

Cloud Security Alliance AI Safety Working Group Co-Chair Ken Huang noted during a presentation at Identity Week earlier this month that deepfake incidents have risen 680 percent year-over-year and impacted 92 percent of financial services companies. His proposed response, the Digital Identity Rights Framework (DIRF), sets out a security and governance model.
