Clearview developing spoof detection for AI-generated, manipulated faces

Clearview AI is in the process of developing software to detect faces generated or manipulated with AI, as the company targets federal government customers with its facial recognition service.

Co-CEO Hal Lambert tells FedScoop that the plan is to roll out a detection function to customers by the end of the year that can flag images as possibly AI-generated. He also claimed that so far, deepfakes and AI-manipulated images have not been a major problem for Clearview.

Detecting outputs of generative AI has not historically been a major barrier to matching people with images scraped from the internet or to investigations of CSAM (child sexual abuse material). Last year, however, Australian police lobbying for permission to use facial recognition raised the challenge posed by manipulated images, which allow offenders to create CSAM more quickly. Clearview has pivoted toward serving U.S. federal agencies since it installed the politically connected Lambert. CSAM was one of two stated use cases when Clearview signed a $9.2 million facial recognition contract with ICE earlier in September.

Deepfake protection against fraud often takes the form of injection attack detection (IAD), which identifies the mechanism used to deliver the fake. Other approaches include detecting anomalies that indicate the data’s creation by generative AI.
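To illustrate what the anomaly-detection approach can look like in the simplest case, the toy sketch below flags images whose spectral energy is unusually concentrated in high spatial frequencies, a pattern some generative upsampling pipelines are known to leave behind. This is a hypothetical heuristic for illustration only, not Clearview’s method; the function names and the threshold are assumptions, and real detectors are trained classifiers calibrated on labeled data.

```python
import numpy as np

def high_freq_energy_ratio(gray_image: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency core.

    Some GAN/diffusion pipelines leave periodic upsampling artifacts
    that shift energy toward high spatial frequencies.
    """
    # Power spectrum, with the zero frequency shifted to the center
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    # Low-frequency "core": the central quarter of the spectrum
    core = spectrum[cy - h // 4:cy + h // 4, cx - w // 4:cx + w // 4]
    total = spectrum.sum()
    return float((total - core.sum()) / total)

def flag_possible_fake(gray_image: np.ndarray, threshold: float = 0.5) -> bool:
    # Hypothetical threshold; a production system would calibrate
    # this on labeled real and generated images.
    return high_freq_energy_ratio(gray_image) > threshold
```

A smooth natural-looking gradient scores low on this ratio, while an image dominated by high-frequency content scores high; the point is only that "anomaly detection" here means measuring a statistic on the image and comparing it to what authentic capture pipelines produce.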

Deepfake detection, protection in demand

The market opportunity is large: face deepfake detection is forecast to generate $2.52 billion in revenue by 2027, according to the 2025 Deepfake Detection Market Report & Buyer’s Guide from Biometric Update and Goode Intelligence.

Cloud Security Alliance AI Safety Working Group Co-Chair Ken Huang noted during a presentation at Identity Week earlier this month that deepfake incidents have risen 680 percent year-over-year and impacted 92 percent of financial services companies. His proposed response, the Digital Identity Rights Framework (DIRF), offers a security and governance model.
