
Use of deepfakes to manipulate social media users grows

YouTube rolls out likeness detection to journalists and politicians as Meta is told by its Oversight Board to do better

More tech platforms are adopting measures to protect individuals from AI-generated videos that mimic their appearance. Earlier this week, YouTube announced that it will offer its likeness detection tool to government officials, journalists and political candidates, opening a path for the removal of unauthorized AI impersonations.

The pilot program is being introduced after the video platform offered likeness detection to creators in the YouTube Partner Program last year.

“YouTube is where the world comes to understand the events shaping our lives, from breaking news to the debates driving our civil discourse, and as AI-generated content evolves, the people at the heart of those conversations need this tool to protect our identities,” says Rene Ritchie, the company’s creator liaison.

The move comes as AI-generated video is increasingly used for impersonation attacks, scams, political misinformation and reputational manipulation, and tech platforms are under growing pressure to address the problem.

Public figures and journalists are being protected first because impersonating them can distort markets, elections, and breaking news cycles, according to market intelligence company Liminal.

“This reflects a broader platform trend: synthetic media controls are moving from reactive moderation toward identity-linked detection and notification systems,” the firm says in a recent briefing.

YouTube is not the only one feeling the heat. On Tuesday, Meta was warned by its Oversight Board over the proliferation of deepfake videos in armed conflicts, especially in the case of the 2025 Israel-Iran war. The third-party board oversees how the tech giant makes moderation decisions on Instagram and Facebook.

Last year, OpenAI pledged to “strengthen guardrails around replication of voice and likeness when individuals do not opt-in,” after Breaking Bad actor Bryan Cranston contacted actors’ labor union SAG-AFTRA about unauthorized generative iterations of his face.

Both OpenAI and YouTube have expressed support for the draft NO FAKES Act, the first federal legislation that would give public figures and private individuals greater control over their online likenesses, including AI-generated content.

How YouTube plans to detect deepfakes

YouTube’s new likeness tool works similarly to Content ID, an automated system that identifies matches of copyright-protected content. Eligible users must provide government identification and a video of themselves, after which YouTube can notify them when AI-generated videos appear to match their likeness. The individual can then review the content and request removal if it violates YouTube’s privacy guidelines.

YouTube says that there will be no automatic takedowns, as the company protects free expression and content in the public interest, including parody and satire.
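The enrollment-to-removal flow described above can be sketched as a small state machine. Everything below is illustrative: the class, state names, and methods are assumptions for the sake of the sketch, not YouTube's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    MATCH_FOUND = auto()        # a video appears to match an enrolled likeness
    UNDER_REVIEW = auto()       # the individual is reviewing the flagged video
    REMOVAL_REQUESTED = auto()  # a removal request sent for human evaluation
    DISMISSED = auto()          # the individual chose to take no action

@dataclass
class LikenessCase:
    """Hypothetical model of one detected-likeness case."""
    video_id: str
    status: Status = Status.MATCH_FOUND

    def review(self) -> None:
        self.status = Status.UNDER_REVIEW

    def request_removal(self) -> None:
        # Note: no automatic takedown. A removal request is evaluated
        # against privacy guidelines; parody and satire may stay up.
        self.status = Status.REMOVAL_REQUESTED

    def dismiss(self) -> None:
        self.status = Status.DISMISSED

# Example: a flagged video is reviewed, then a removal request is filed.
case = LikenessCase(video_id="abc123")
case.review()
case.request_removal()
```

The key design point, per YouTube's description, is that the final state is a request rather than a deletion, leaving the takedown decision to policy review.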

The platform has also pledged that the data collected during setup will be used solely for identity verification and not for training Google’s generative AI models.

The clarification comes after media reports revealed that biometric data from creators in the YouTube Partner Program, collected to detect deepfakes, could be used for other purposes in the future. Google’s privacy policy states that “public content, including biometric information, can be used to help train Google’s AI models and build products and features.”
