Deepfakes strike at the heart and mind on dating sites, remote student interviews

Reality Defender finds few on Match.com sites but risks remain high as attacks vary

To date, much political beef has centered on stolen jobs, somewhat less on stolen spots in universities, and almost none on stolen hearts – or deepfake identity theft. But deepfakes are showing up in more and more places; to borrow terminology from the biometric firms that detect them, their use cases are expanding, requiring more active and purpose-made approaches to deepfake detection. Recent research from iProov shows that just 0.1 percent of people can accurately distinguish between real and fake images or video content.

Reality Defender finds few deepfakes on Match Group sites

Match Group, which owns dating and hookup sites Tinder, Hinge, Plenty of Fish, and Match.com, has enlisted Reality Defender to conduct an independent analysis of a representative sample of profile images from Tinder and Hinge.

A release from Match Group says findings revealed that “AI-generated or -manipulated content likely accounts for only a small fraction of content on our platforms – with 99.4 percent of images showing no signs of concerning AI manipulations.”

Even in cases that did show signs of manipulation, “in over 88 percent of cases, these manipulated images were not malicious deepfakes but rather authentic users employing face-tuning apps or other types of image filters.”

The dating site conglomerate is quick to note that “this doesn’t mean that deepfakes don’t exist on Tinder or other dating apps.” Match Group says it plans to use the data from Reality Defender’s investigation to develop new deepfake detection tools. Noting that “a prevailing fear among dating app and dating site users is that they’ll accidentally fall for a deepfake,” it also plans to publish an educational guide to deepfakes and AI with visual examples.

In a recent report, Reality Defender emphasizes that “the risks posed by deepfakes are not hypothetical – they are happening now.”

Deepfakes showing up for student credibility interviews

According to The Pie News, Enroly, a platform that streamlines onboarding and arrival processes for universities, students and agents, has reported growing instances of deepfakes being deployed in credibility interviews.

“Our student interviews have revealed instances of advanced technological manipulation, including lip-syncing, impersonation and even the use of deepfake technology,” says Phoebe O’Donnell, head of services at Enroly. “Challenges that were once the realm of science fiction but are now a growing reality.”

The piece quotes a UK Home Office spokesperson, who highlights the department’s measures to prevent deepfake fraud in its own processes. “We have stringent systems in place to identify and prevent fraudulent student visa application,” the official says. “Any individual attempting to cheat or use deception will not succeed and may face a ban from applying for UK visas for 10 years.”
