Clearview developing spoof detection for AI-generated, manipulated faces

Clearview AI is in the process of developing software to detect faces generated or manipulated with AI, as the company targets federal government customers with its facial recognition service.

Co-CEO Hal Lambert tells FedScoop that the plan is to roll out a detection function that can flag images as possibly AI-generated to customers by the end of the year. He also claimed that, so far, deepfakes and AI-manipulated images have not been a major problem for Clearview.

Detecting outputs of generative AI has not historically been a major barrier to matching people with images scraped from the internet, or to investigations of child sexual abuse material (CSAM). Last year, however, Australian police lobbying to be allowed to use facial recognition raised the challenge posed by manipulated images, which allow offenders to create CSAM more quickly. Clearview has pivoted towards serving U.S. federal agencies since it installed the politically-connected Lambert, and CSAM investigation is one of two stated use cases for the $9.2 million facial recognition contract Clearview signed with ICE earlier in September.

Deepfake protection against fraud often takes the form of injection attack detection (IAD), which identifies the mechanism used to deliver the fake. Other approaches detect anomalies in the data itself that indicate creation by generative AI.
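The second, anomaly-based approach can be sketched in miniature. The snippet below is a hypothetical illustration, not Clearview's method: it uses the well-documented tendency of GAN-generated images to exhibit unusual high-frequency spectral statistics, flagging an image whose spectral energy ratio deviates strongly from a baseline estimated on known-genuine images. The function names, the cutoff fraction, and the z-score threshold are all assumptions for the sketch.

```python
import numpy as np


def high_freq_energy_ratio(gray_image: np.ndarray, cutoff_frac: float = 0.25) -> float:
    """Fraction of 2D-FFT spectral energy lying outside a centered low-frequency box.

    Generative models often leave characteristic artifacts in the high-frequency
    spectrum, so this ratio is one simple per-image statistic a detector might use.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image)))
    total = spectrum.sum()
    h, w = spectrum.shape
    ch, cw = int(h * cutoff_frac), int(w * cutoff_frac)
    low = spectrum[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    return float((total - low) / total)


def flag_possible_fake(gray_image: np.ndarray,
                       baseline_mean: float,
                       baseline_std: float,
                       z_thresh: float = 3.0) -> bool:
    # Flag images whose spectral statistic deviates strongly (by z-score)
    # from a baseline estimated on a corpus of known-genuine images.
    z = (high_freq_energy_ratio(gray_image) - baseline_mean) / baseline_std
    return abs(z) > z_thresh
```

In practice, production detectors replace a single hand-crafted statistic like this with learned features, but the flagging logic (score against a genuine-image baseline, threshold, escalate for review) follows the same shape.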

Deepfake detection, protection in demand

The market opportunity is large: face deepfake detection is forecast to generate $2.52 billion in revenue by 2027, according to the 2025 Deepfake Detection Market Report & Buyer’s Guide from Biometric Update and Goode Intelligence.

Cloud Security Alliance AI Safety Working Group Co-Chair Ken Huang noted during a presentation at Identity Week earlier this month that deepfake incidents have risen 680 percent year-over-year and impacted 92 percent of financial services companies. His proposed response, the Digital Identity Rights Framework (DIRF), sets out a security and governance model.
