
End-to-end standard pushed to out deepfakes and other misinformation

A proposed open standard for detecting deepfakes has been published by a broad coalition of companies involved with artificial intelligence, content creation or authentication.

The Coalition for Content Provenance and Authenticity is pushing a standard for software and hardware used to create, edit, manage and verify content.

Creators would be able to certify what they did in generating a still image or video. They would also be protected when alterations are made downstream, because unauthorized changes would be exposed against the certified record.
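The idea is that each piece of content carries a cryptographically signed record of its origin and edits, so any later tampering can be detected against that record. As a loose illustration only — the C2PA specification actually uses signed manifests with X.509 certificates, not the HMAC scheme below, and all names here are invented for the sketch — this shows how binding a content hash to a signed list of actions makes unauthorized changes detectable:

```python
import hashlib
import hmac
import json

# Stand-in for a creator's real signing key (assumption for this sketch).
SECRET = b"creator-signing-key"

def sign_manifest(content: bytes, actions: list) -> dict:
    """Bind a list of editing actions to a content hash and sign the record."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "actions": actions,  # e.g. ["captured", "cropped"]
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify(content: bytes, manifest: dict) -> bool:
    """Return True only if the content and actions match the signed manifest."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"\x89PNG...original pixels"
manifest = sign_manifest(image, ["captured", "color-corrected"])
assert verify(image, manifest)                      # untouched content passes
assert not verify(image + b"altered", manifest)     # downstream tampering fails
```

In the real standard, verification relies on public-key signatures so anyone can check provenance without holding the creator's secret; the symmetric HMAC here only keeps the sketch self-contained.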

Editors, distributors (such as social media platforms) and consumers of digital content could largely allay worries about AI-generated misinformation, according to the coalition’s standard specifications.

But law enforcement, court officials, regulators, celebrities and even human rights workers could also benefit from the ability to investigate the provenance of allegedly true content.

Some coalition members, such as the United Kingdom’s BBC, CBC/Radio-Canada, The New York Times and Twitter, have everything to lose if they are distrusted. Others, like vendors Adobe, Niko, Serum of Truth and Truepic, hope to benefit from societies seeking more information certainty.

(Truepic has been working since at least 2020 with U.S. wireless vendor Qualcomm Technologies to make Truepic provenance software native on phones using Qualcomm chipsets.)

Many entities faced with the reality distortion created by deepfakes want to find a reliable farm-fertilizer detector and will invest in any technology solution with a shot at working.

This coalition is not even the first to take a crack at solving this problem. In 2020, a similar group was trying to get its arms around deepfakes.

Ultimately, though, the potential authentication standard will not be the key piece of the puzzle. Information distributors still have to have the integrity to test content. And information consumers have to be honest and curious enough to challenge content that bolsters their own uninformed biases.
