
Meta, Trust Stamp among firms turning attention to deepfake detection

As deepfakes proliferate, defenses become key tools for journalism, fraud prevention

Like an ouroboros swallowing its tail, AI continues its cycle of begetting deepfake fraud and the detection tools to stop it. New products to flag deepfake content are being launched in advance of the U.S. presidential election in November. Meta has made a suite of AI models publicly available to accelerate AI development. Trust Stamp has a provisional patent for injection attack detection. Deep Media and LatticeFlow AI have novel approaches. And Google now lets you call out deepfakes on YouTube.

Meta relaunches refreshed AudioSeal audio watermarking tool

Meta’s Fundamental AI Research (FAIR) team is publicly releasing new AI models, including what it calls “the first audio watermarking technique designed specifically for the localized detection of AI-generated speech.”

In a news release, Meta says releasing the models is intended to “accelerate future research and allow others to innovate and apply AI at scale.” It believes the rapid pace of innovation makes collaboration with the global AI community more important than ever. Hence, the fresh access to some of its novel AI technologies.

“We’re publicly releasing five models including image-to-text and text-to-music generation models, a multi-token prediction model and a technique for detecting AI-generated speech,” says the post on Meta’s website. “By publicly sharing this research, we hope to inspire iterations and ultimately help advance AI in a responsible way.”

The AI-generated-speech detection model is a re-release of AudioSeal, the audio watermarking tool. AudioSeal uses a localized detection approach to pinpoint AI-generated segments within a longer audio sample. So, for instance, it could be used to detect fake audio embedded in a recorded conversation or a podcast.

Meta is releasing AudioSeal under a commercial license, “to help prevent the misuse of generative AI tools.”
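The localized-detection idea above can be illustrated with a small sketch. This is not AudioSeal's actual API; it assumes a hypothetical list of per-frame watermark probabilities (which a detector model would produce) and shows how thresholding those scores pinpoints AI-generated spans inside a longer recording:

```python
# Hypothetical sketch of localized detection. A real detector like AudioSeal
# emits a per-frame probability that a watermark is present; here we fabricate
# those scores and show how thresholding them yields time-localized segments.

def flag_segments(frame_scores, threshold=0.5, frame_ms=20):
    """Return (start_ms, end_ms) spans where scores exceed the threshold."""
    segments = []
    start = None
    for i, score in enumerate(frame_scores):
        if score >= threshold and start is None:
            start = i  # a suspicious run begins
        elif score < threshold and start is not None:
            segments.append((start * frame_ms, i * frame_ms))
            start = None
    if start is not None:  # run extends to the end of the clip
        segments.append((start * frame_ms, len(frame_scores) * frame_ms))
    return segments

# Example: 10 frames of 20 ms each; frames 3-6 look watermarked.
scores = [0.1, 0.2, 0.1, 0.9, 0.95, 0.9, 0.85, 0.2, 0.1, 0.1]
print(flag_segments(scores))  # [(60, 140)]
```

The point of localization is visible in the output: rather than a single clip-level verdict, the detector can say that only the span from 60 ms to 140 ms appears AI-generated, which is what lets it find fake audio spliced into a podcast or recorded conversation.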

Trust Stamp patent covers deepfake injection attack detection

Trust Stamp has announced a provisional patent for an AI tool to counter deepfake injection attacks. A press release from the firm says provisional patent #63/662,575, filed with the US Patent and Trademark Office, covers a new way to detect injection attacks in biometric authentication processes, including the injection of deepfake images and videos into data streams.

Noting that Trust Stamp has already implemented “a number of liveness detection technologies,” Chief Science Officer Dr. Norman Poh says fraud techniques continue to evolve, meaning risk continues to grow.

“There are now billions of daily attacks being perpetrated with a growing number of injection attacks using genuine artifacts captured out of context as well as deep fake images and videos,” he says. “When genuine artifacts are used out of context, they may be able to pass legacy liveness detection tests. This latest presentation attack detection technology that we have patented targets injection attacks regardless of the artifacts being used.”

LatticeFlow AI amps up deepfake detection ahead of U.S. election

Deepfake detection is a “critical new tool” for journalists, says a release from LatticeFlow AI, announcing the rollout of its LatticeFlow AI Audio product, which detects model errors in audio AI applications. The firm is pitching its tool as a must-have for anyone covering the U.S. federal election campaign – ripe territory for deepfake attacks.

“We believe that there is no place for deepfakes in civil, ethical societies,” says Dr. Petar Tsankov, CEO of LatticeFlow AI. “Ethical, safe, and trustworthy AI are the foundational values of our company – and why we want to help address deepfake detection in advance of the 2024 elections.”

The release also quotes Chris O’Brien, who covered Silicon Valley for the San Jose Mercury News and the Los Angeles Times, saying deepfake detection software like LatticeFlow AI’s is “a vital step towards preserving the integrity and accuracy of the information we often rely on as journalists.”

Deep Media gets granular with data sets for specific AI audio tools

A recent blog from Deep Media titled “A New Era in Evaluating Deepfake Detection” lays out its initiative to rethink how deepfake detection methods are evaluated and tested, aiming to stay at the leading edge of the technology curve while prioritizing ethical use.

“By embedding ethical considerations into every aspect of our work, Deep Media aims to set a new standard for responsible research in the field of deepfake audio detection,” it says. Its validation set encompasses more than 100,000 fake audio samples and generated audio samples, “curated to cover a diverse array of voice styles, accents, and emotion.” A key feature is the inclusion of validation subsets dedicated to specific deepfake audio generators, housing about 9,000 samples per subset. Neural voice cloning models, voice conversion algorithms with deep learning architectures, and proprietary algorithms used by major tech companies are among models covered in the subsets.
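The per-generator subset design described above can be sketched in a few lines. Everything here is illustrative, not Deep Media's methodology: the generator names, features, and toy detector are assumptions; the sketch only shows why grouping samples by their source generator yields a more granular evaluation than one aggregate score:

```python
# Illustrative sketch of per-generator evaluation. Grouping labeled fake
# samples by the generator that produced them lets a benchmark report a
# separate detection rate per generator, exposing blind spots that a single
# aggregate accuracy number would hide. All names and data are hypothetical.
from collections import defaultdict

def per_generator_detection_rate(samples, detector):
    """samples: iterable of (generator_name, features); detector: features -> bool."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for generator, features in samples:
        totals[generator] += 1
        if detector(features):  # True means "flagged as fake"
            hits[generator] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy detector: flags any sample whose pitch variance is suspiciously low.
detector = lambda feats: feats["pitch_var"] < 0.2
samples = [
    ("voice_clone_a",      {"pitch_var": 0.10}),
    ("voice_clone_a",      {"pitch_var": 0.30}),
    ("voice_conversion_b", {"pitch_var": 0.05}),
]
print(per_generator_detection_rate(samples, detector))
# {'voice_clone_a': 0.5, 'voice_conversion_b': 1.0}
```

A breakdown like this is what makes subsets of ~9,000 samples per generator useful: a detector that catches one voice-cloning model half the time but another model every time would look deceptively strong in a pooled benchmark.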

Reality Defender recently unveiled its own version of granularity in deepfake detection, with its expansion to include multilingual deepfake detection in Spanish and Portuguese.

YouTube expands privacy requests to include AI-generated fakery

A blog from YouTube says the video sharing platform is expanding its privacy request process so that users can request the removal of “AI-generated or other synthetic or altered content that simulates their face or voice.”

YouTube will evaluate requests for removal, first asking whether the “content is altered or synthetic and could be mistaken for real, whether the person making the request is identifiable, or whether the content is parody or satire when it involves well-known figures.”

The post says the move aligns with Google’s approach to responsible AI innovation.
