
UN initiative unites standards bodies to tackle global deepfake threat

Big Tech, government think tanks also among contributors to AI-focused collaboration

You may have heard a friend or relative proclaim recently, “I can’t even tell what’s real anymore.” Generative AI is flooding the internet with synthetic content. AI-generated bands are going viral. And deepfakes have gotten so convincing that fraudsters have the confidence to try impersonating major public figures like U.S. Secretary of State Marco Rubio.

The UN is listening, and has published two papers through an initiative called the AI and Multimedia Authenticity Standards Collaboration, which a post says aims to “respond decisively to the risks posed by deepfakes, misinformation, and synthetic content misuse, while fostering the creative and societal benefits of AI.”

“Our understanding of creativity, truth, and integrity is undergoing a radical transformation with the rise of artificial intelligence,” says the UN Agency for Digital Technologies. “AI-generated and -edited content is becoming the new norm, especially among younger, AI-native communicators and consumers. Synthetic media, once a novel anomaly, is now seamlessly woven into our cultural fabric – reshaping communication, democratizing access to content-creation tools, and simultaneously challenging long-standing assumptions about authenticity and trust.”

The initiative is led by the World Standards Cooperation, a partnership of the International Electrotechnical Commission (IEC), International Organization for Standardization (ISO) and the International Telecommunication Union (ITU). It engages standards developers, technology leaders, policymakers, researchers and civil society in an effort to build “a cohesive ecosystem of international standards,” which will “redefine digital integrity with an inclusive and future-oriented framework of transparency, accountability, and ethical innovation.”

The Content Authenticity Initiative (CAI), the Coalition for Content Provenance and Authenticity (C2PA) and the Internet Engineering Task Force (IETF) are also involved. But participants come from outside the regulatory sandbox, too. Adobe, Microsoft and Shutterstock represent Big Tech. Authentication specialists DataTrails and Deep Media are present, as is the human rights organization Witness. The research cohort includes Germany’s Fraunhofer research institute, the Swiss Federal Institute of Technology in Lausanne (EPFL) and the China Academy of Information and Communications Technology (CAICT), a research institute run by China’s Ministry of Industry and Information Technology (MIIT).

Papers tackle deepfake issue from technical, policy perspectives 

The first major deliverables from the AI and Multimedia Authenticity Standards Collaboration are two new white papers, “one technical, the other more policy-focused.”

The first presents a systematic overview of current standards and specifications “at the intersection of digital media authenticity and AI.” It aims to map existing coverage and expose gaps through what it identifies as five “key clusters”: content provenance, trust and authenticity, asset identifiers, rights declarations, and watermarking.
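To make the content provenance cluster concrete, the sketch below shows the basic idea behind provenance schemes such as C2PA: a cryptographic hash binds metadata claims (who created the content, with what tool) to the exact bytes of an asset, and a signature makes the claims tamper-evident. This is a minimal illustration using only the Python standard library; the field names are hypothetical, and real schemes use public-key certificate chains rather than the HMAC stand-in used here.

```python
import hashlib
import hmac
import json

def make_manifest(asset: bytes, creator: str, key: bytes) -> dict:
    """Build a minimal provenance manifest binding metadata to content."""
    digest = hashlib.sha256(asset).hexdigest()
    claim = {"creator": creator, "content_sha256": digest}
    payload = json.dumps(claim, sort_keys=True).encode()
    # Real schemes (e.g. C2PA) sign with X.509 certificates; HMAC stands in here.
    claim["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(asset: bytes, manifest: dict, key: bytes) -> bool:
    """Check the signature, and that the content hash still matches the bytes."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and hashlib.sha256(asset).hexdigest() == claim["content_sha256"])
```

The point the papers' gap analysis turns on is visible even at this scale: verification fails the moment the asset bytes change, which is exactly why provenance must be complemented by watermarking and asset identifiers that can survive re-encoding.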

“Ultimately, our technical paper sets out to inform and inspire the next wave of standardization efforts in support of responsible innovation, rights protection, and trustworthy AI systems.”

The policy paper, meanwhile, looks at how international standards can build trust in content authenticity, “in a world where misinformation and disinformation spreads faster than regulation can keep up.”

The paper promises a “roadmap to address prevention, detection and response strategies,” and “highlights the delicate balance needed to preserve freedom of expression and innovation while protecting society from the harms of manipulated media.”

Specific tools and tactics explored include a “regulatory options matrix” to help define “what to regulate, how, and to what extent;” checklists for the design of regulations, enforcement mechanisms and crisis response; and supporting tools such as conformity assessments.

The initiative asserts that global collaboration is essential to confronting the threat to authenticity in media. “We can only tackle issues like misinformation and disinformation by collaborating with all key players, including civil society, academic institutions, public service media and others with a vested interest in ensuring online content can be trusted,” it says. “Digital content can be powerful and creative. But it must also be traceable, trustworthy, and ethically produced.”

Ofcom sequel to deepfake paper evaluates attribution measures

The UK government has declared deepfakes to be the “greatest challenge of the online age,” and deepfake detection to be an urgent priority. In keeping with this, UK regulator Ofcom has released a follow-up to its July 2024 paper, Deepfake Defences.

Where that document focused on the harms of deepfakes in setting out a three-part typology (demean, defraud and disinform), the new paper, Deepfake Defences 2: The Attribution Toolkit, turns its attention to attribution measures: “watermarking tools, provenance metadata schemes, AI labels, and context annotations,” which are “designed in one way or another to attribute certain types of information to a piece of content, for example information about who created it, how and when it was created, and – in some cases – whether the content is accurate or misleading.”
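Of the attribution measures Ofcom examines, watermarking is the most mechanical, and its characteristic weakness (fragility under editing) is easy to demonstrate. The sketch below is a deliberately simple least-significant-bit watermark over a byte array standing in for image pixels; it is an illustrative toy, not any scheme named in the paper, and production watermarks are designed to survive compression and cropping in ways this one does not.

```python
def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Write each bit of `mark` into the LSB of successive pixel bytes."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(out):
        raise ValueError("carrier too small for watermark")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear LSB, then set it to the mark bit
    return out

def extract_watermark(pixels: bytearray, length: int) -> bytes:
    """Read `length` bytes of watermark back out of the LSBs."""
    mark = bytearray()
    for b in range(length):
        value = 0
        for i in range(8):
            value = (value << 1) | (pixels[b * 8 + i] & 1)
        mark.append(value)
    return bytes(mark)
```

Because each watermark bit lives in the lowest bit of a single byte, any re-encoding that perturbs those values destroys the mark, which is the robustness trade-off Ofcom's assessment weighs against provenance metadata and labels.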

Effectively, the paper offers a kind of mini-trial of attribution measures, assessing strengths and weaknesses and the viability of deployment. Ofcom says it will “draw on the paper’s insights to inform our policy development and supervision of regulated services.”

WEF identifies disinformation as a top global risk for 2025

The World Economic Forum (WEF) has thoughts on the cost of disinformation, noting the potential for “massive financial and reputational damage, leading to stock price crashes, revenue losses and consumer distrust.”

“In the digital age, the scale and speed of disinformation have become a significant economic threat,” says WEF. “Today, AI-driven falsehoods spread faster and wider than ever, prompting the World Economic Forum to rank disinformation as one of the top global risks for 2025. No business is immune to disinformation. Once used to influence elections and undermine public figures, disinformation has become a powerful weapon to target global businesses.”

“Whether a multinational corporation or a small family business, false narratives have caused serious reputational and financial damage,” says Matthew Blake, managing director and head of the Centre for Financial and Monetary Systems at WEF.

AI for good – or no good for AI? 

The UN announced its AI and Multimedia Authenticity Standards Collaboration at ITU’s AI for Good Global Summit, which emphasized the global commitment to AI governance, skills, and standards and saw ITU Secretary-General Doreen Bogdan-Martin “recommit to treating AI not as an end, but as a means to do good, for the benefit of all humanity, everywhere.”

Yet with all the doomsaying about deepfakes, some are wondering if there’s really such a thing as “AI for good.”

In a new edition of his newsletter, Untangled, author Charley Johnson argues that “‘AI for’ builds ‘AI’ into the premise of whatever solution or program is on offer. If we want real change, we must envision the world that we want and then map backward to determine whether/how to include AI at all.”
