
Meta fights Facebook scam ads with facial recognition

Gets regulatory approval to expand trial to UK, EU

Meta says it has obtained regulatory approval to trial its facial recognition system, which is built to combat malicious online adverts that use celebrities' faces as bait.

According to The Independent, the company, which owns Facebook, Instagram and WhatsApp, said it will pilot the software in the UK and later in the European Union.

Scam ads linked to public figures have been reported in the UK, with BBC presenter Naga Munchetty cited as a past victim of deepfake ads that used her face.

The system is already being trialled in the United States and other parts of the world.

The outlet quotes Meta as explaining that the system works by flagging suspected "celeb-bait" ads and then attempting to match the face in each ad to the profile photo of the celebrity in question, to determine whether the ad is genuine.

If the ad is judged to be a scam, the account posting it is blocked immediately.
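Meta has not published implementation details, but the flag-then-match flow described above can be illustrated with a generic face-matching sketch: embeddings are compared by cosine similarity, and an ad whose face closely matches a celebrity's profile photo is flagged for review. The function names, the embedding representation, and the threshold value here are all illustrative assumptions, not Meta's actual system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors (illustrative)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_suspected_celeb_bait(ad_face_embedding, celeb_profile_embedding,
                            threshold=0.8):
    """Flag an ad as suspected celeb-bait when the face in the ad closely
    matches the celebrity's profile-photo embedding. A real pipeline would
    then route the match for further checks before blocking the account."""
    return cosine_similarity(ad_face_embedding,
                             celeb_profile_embedding) >= threshold

# Toy example: two nearly identical embeddings produce a match.
ad_face = [1.0, 0.0]
celeb_profile = [0.9, 0.1]
print(is_suspected_celeb_bait(ad_face, celeb_profile))  # True
```

In practice the embeddings would come from a trained face recognition model and the threshold would be tuned to balance false positives against missed scams; this sketch only shows the comparison step.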

The Independent quotes David Agranovich, an official in charge of security policy at Meta, as saying that the trial is part of efforts by the company “to keep people safe while keeping bad actors out.”

He further stated that “the measures we’re rolling out this week utilize facial recognition technology to help us crack down on fake celebrity scams – commonly referred to as celeb-bait, and to enable faster account recovery for people whose accounts have been locked or potentially hacked.”

When the trial begins in the UK, users will receive notifications asking whether they want to opt in to facial recognition protection against celeb-bait ads.

To facilitate the recovery of blocked or breached social media accounts, users will be asked to submit a selfie for biometric verification.

The measure has been praised as a positive step toward keeping users of Meta's social media platforms safe amid rising online criminal activity and bait scams.

Meta says the move is part of its efforts to keep its online community safe and to combat the growing phenomenon whereby the identities of renowned figures are used to commit crime.

In the last few years, the company has also faced major scrutiny and struggles, including from European regulators, over personal data and privacy issues.
