Use of deepfakes to manipulate social media users grows

YouTube rolls out likeness detection to journalists, politicians as Meta told by its Oversight Board to do better

More tech platforms are adopting measures to protect individuals from AI-generated videos that mimic their appearance. Earlier this week, YouTube announced that it will offer its likeness detection tool to government officials, journalists and political candidates, opening a path for the removal of unauthorized AI impersonations.

The pilot program is being introduced after the video platform offered likeness detection to creators in the YouTube Partner Program last year.

“YouTube is where the world comes to understand the events shaping our lives, from breaking news to the debates driving our civil discourse, and as AI-generated content evolves, the people at the heart of those conversations need this tool to protect our identities,” says Rene Ritchie, the company’s creator liaison.

The move comes as AI-generated video is increasingly used for impersonation attacks, scams, political misinformation and reputational manipulation, and tech platforms are under pressure to address the issue.

Public figures and journalists are being protected first because impersonating them can distort markets, elections and breaking news cycles, according to market intelligence company Liminal.

“This reflects a broader platform trend: synthetic media controls are moving from reactive moderation toward identity-linked detection and notification systems,” the firm says in a recent briefing.

YouTube is not the only one feeling the heat. On Tuesday, Meta was warned by its Oversight Board over the proliferation of deepfake videos in armed conflicts, especially in the case of the 2025 Israel-Iran war. The third-party board oversees how the tech giant makes moderation decisions on Instagram and Facebook.

Last year, OpenAI pledged to “strengthen guardrails around replication of voice and likeness when individuals do not opt-in,” after Breaking Bad actor Bryan Cranston contacted actors’ labor union SAG-AFTRA about unauthorized generative iterations of his face.

Both OpenAI and YouTube have expressed support for the draft NO FAKES Act, the first federal legislation that would give public figures and private individuals greater control over their online likenesses, including AI-generated content.

How YouTube plans to detect deepfakes

YouTube’s new likeness tool works similarly to Content ID, an automated system that identifies matches of copyright-protected content. Eligible users must provide government identification and a video of themselves, after which YouTube can notify them when AI-generated videos appear to match their likeness. The individual can then review the content and request removal if it violates YouTube’s privacy guidelines.

YouTube says that there will be no automatic takedowns, as the company protects free expression and content in the public interest, including parody and satire.
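The enroll-notify-review flow described above can be sketched in a few lines. This is a purely illustrative model, not YouTube's actual system: the class and method names (`LikenessRegistry`, `enroll`, `scan`, `request_removal`) and the similarity threshold are assumptions made for the sketch, and the real matching would be done by a face-recognition model rather than a passed-in score.

```python
from dataclasses import dataclass

# Hypothetical sketch of the notify-review-remove flow described above.
# All names and values are illustrative; YouTube's actual system is not public.

@dataclass
class LikenessMatch:
    video_id: str
    similarity: float               # score a face-matching model might produce
    status: str = "pending_review"  # no automatic takedowns

class LikenessRegistry:
    def __init__(self, match_threshold: float = 0.85):
        self.match_threshold = match_threshold
        self.enrolled: dict[str, bool] = {}   # user -> identity verified
        self.notifications: dict[str, list[LikenessMatch]] = {}

    def enroll(self, user: str, gov_id_ok: bool, selfie_video_ok: bool) -> bool:
        # Eligible users must provide government ID and a video of themselves.
        self.enrolled[user] = gov_id_ok and selfie_video_ok
        self.notifications.setdefault(user, [])
        return self.enrolled[user]

    def scan(self, user: str, video_id: str, similarity: float) -> None:
        # Notify the enrolled person when an upload appears to match their likeness.
        if self.enrolled.get(user) and similarity >= self.match_threshold:
            self.notifications[user].append(LikenessMatch(video_id, similarity))

    def request_removal(self, user: str, video_id: str) -> str:
        # The person reviews the match; removal is a request, never automatic,
        # leaving room for parody, satire and public-interest content.
        for m in self.notifications.get(user, []):
            if m.video_id == video_id:
                m.status = "removal_requested"
                return m.status
        return "no_match_found"
```

The key design point the article describes is captured in `request_removal`: a match only triggers a notification, and it is the affected person who decides whether to ask for a takedown under the platform's privacy guidelines.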

The platform has also pledged that the data collected during setup will be used solely for identity verification and not for training Google’s generative AI models.

The clarification comes after media reports revealed that biometric data from creators in the YouTube Partner Program, collected to detect deepfakes, could be used for other purposes in the future. Google’s privacy policy states that “public content, including biometric information, can be used to help train Google’s AI models and build products and features.”
