Media authentication an emerging front in battle against deepfakes: Microsoft report

Developing deepfake detection techniques is becoming a serious task for both governments and enterprises. Microsoft has published a new report examining techniques that can help verify the provenance of digital content – where an image or video came from, who created it and whether it has been tampered with.
These so-called media integrity and authentication (MIA) methods include secure provenance (C2PA), imperceptible watermarking, and soft hash fingerprinting across images, audio and video. All of them offer different levels of protection and serve different purposes, Microsoft says in its Media Integrity and Authentication: Status, Directions, and Futures report.
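Of the methods the report surveys, soft hash fingerprinting is the easiest to illustrate. The idea is to derive a compact fingerprint from a piece of media's perceptual content so that minor changes (re-encoding, slight brightness shifts) yield the same or a very similar fingerprint, while different content yields a very different one. The sketch below is a generic "average hash" over an 8x8 grayscale grid, written for illustration only; it is not Microsoft's or any vendor's actual algorithm, and real systems first resize and preprocess the image.

```python
def average_hash(pixels):
    """Fingerprint an 8x8 grayscale image (64 brightness values, 0-255).

    Each bit records whether a pixel is brighter than the image's average,
    so small uniform brightness shifts leave the fingerprint unchanged.
    """
    avg = sum(pixels) / len(pixels)
    return [1 if p > avg else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same source image."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# A toy 8x8 gradient image, then a mildly "re-encoded" copy of it.
original = [10 * (i % 16) for i in range(64)]
tweaked = [min(255, p + 3) for p in original]

print(hamming_distance(average_hash(original), average_hash(tweaked)))  # prints 0
```

Because the hash thresholds against the image's own average, the brightened copy still matches, which is exactly the robustness-to-benign-edits property that distinguishes soft hashes from exact cryptographic hashes.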
The new study examines content authentication methods to better understand their limitations and explore potential ways to strengthen them, focusing on an audience of creators, technologists, policymakers and others. Its central finding is that no single solution can prevent digital deception on its own.
Preventing every attack or stopping certain platforms from stripping provenance signals isn’t possible, says Jessica Young, director of science and technology policy in the Office of the Chief Scientific Officer at Microsoft. The answer, she argues, “is figuring out how to surface the most reliable indicators with strong security built in — and, when necessary, reinforce them with additional methods that allow recovery or support manual digital-forensics work.”
There is also no one-size-fits-all solution for authenticating media, adds Young, who is the co-chair of the study.
“You have different formats that have different limitations or trade-offs for the signals they can contain,” she says. “Whether it’s images, audio, video – not to mention text, which has a whole different array of challenges – and how strong the solutions can be applied there.”
Microsoft co-founded the Coalition for Content Provenance and Authenticity (C2PA) in 2021 alongside companies such as Adobe, Arm, BBC, Intel and Truepic. Its goal is to develop technical standards for certifying the source and history of media content and preventing disinformation, misinformation and online content fraud.
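The core mechanic behind provenance standards like C2PA can be sketched simply: bind a cryptographic hash of the media bytes into a signed manifest, so that any tampering with either the content or the claim breaks verification. The snippet below is a heavily simplified stand-in, not the real C2PA manifest format — the actual standard uses certificate-based signatures and a structured claim/assertion model, whereas this demo uses a shared HMAC key and plain JSON.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in; real C2PA relies on certificate-based signing

def make_manifest(media_bytes, creator):
    """Build a signed claim binding the creator to a hash of the media."""
    claim = {
        "creator": creator,
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify(media_bytes, manifest):
    """Check that neither the claim nor the media has been altered."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was tampered with
    return manifest["claim"]["media_sha256"] == hashlib.sha256(media_bytes).hexdigest()

photo = b"\x89PNG...raw image bytes..."
manifest = make_manifest(photo, "newsroom@example.org")
print(verify(photo, manifest))          # prints True: content matches the claim
print(verify(photo + b"x", manifest))   # prints False: content was modified
```

The limitation the report highlights follows directly from this design: a platform that strips the manifest during upload removes the signal entirely, which is why the study pairs provenance with watermarking and fingerprinting as complementary recovery paths.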
Microsoft collaborates on deepfake detection with UK govt
Microsoft has also participated in the UK government’s Deepfake Detection Challenge, a hackathon-style event that evaluated the country’s ability to deal with synthetic media. The event took place in January with more than 350 experts participating, including from Interpol, the Five Eyes community, Big Tech companies and smaller entities such as Ingenium Biometric Laboratories.
The UK has been developing a deepfake detection framework that evaluates the performance of detection tools to identify harmful deepfakes across different applications.
The tools are evaluated using real-world scenarios, including impersonation, fraud, and non-consensual sexual imagery, to identify where existing technology performs effectively and where gaps remain. The findings will then be used to establish new industry benchmarks, helping companies strengthen their ability to detect and combat deepfakes.
The effort includes government agencies such as the Accelerated Capability Environment, Department for Science, Innovation and Technology (DSIT), Department for Digital, Culture, Media and Sport, HM Revenue and Customs (HMRC) and the Alan Turing Institute, according to the Home Office summary of the initiative.