Deepfakes strike at the heart and mind on dating sites, remote student interviews

Reality Defender finds few deepfakes on Match Group sites, but risks remain high as attacks vary

To date, much political beef has been cut from the question of stolen jobs, slightly less about stolen spots in universities, and almost none about stolen hearts – or deepfake identity theft. But deepfakes are showing up in more and more places; to borrow terminology from the biometric firms that detect them, their use cases are expanding, requiring more active and purpose-made approaches to deepfake detection. Recent research from iProov shows that just 0.1 percent of people can accurately distinguish between real and fake images or video content.

Reality Defender finds few deepfakes on Match Group sites

Match Group, which owns dating and hookup sites Tinder, Hinge, Plenty of Fish, and Match.com, has enlisted Reality Defender to conduct an independent analysis of a representative sample of profile images from Tinder and Hinge.

A release from Match Group says findings revealed that “AI-generated or -manipulated content likely accounts for only a small fraction of content on our platforms – with 99.4 percent of images showing no signs of concerning AI manipulations.”

Even in cases that did show signs of manipulation, “in over 88 percent of cases, these manipulated images were not malicious deepfakes but rather authentic users employing face-tuning apps or other types of image filters.”

The dating site conglomerate is quick to note that “this doesn’t mean that deepfakes don’t exist on Tinder or other dating apps.” Match Group says it plans to use the data from Reality Defender’s investigation to develop new deepfake detection tools. Noting that “a prevailing fear among dating app and dating sites users is that they’ll accidentally fall for a deepfake,” it also plans to publish an educational guide to deepfakes and AI with visual examples.

In a recent report, Reality Defender emphasizes that “the risks posed by deepfakes are not hypothetical – they are happening now.”

Deepfakes showing up for student credibility interviews

According to The Pie News, Enroly, a platform that streamlines onboarding and arrival processes for universities, students and agents, has reported growing instances of deepfakes being deployed in credibility interviews.

“Our student interviews have revealed instances of advanced technological manipulation, including lip-syncing, impersonation and even the use of deepfake technology,” says Phoebe O’Donnell, head of services at Enroly. “Challenges that were once the realm of science fiction but are now a growing reality.”

The piece quotes a UK Home Office spokesperson, who highlights the department’s measures to prevent deepfake fraud in its own processes. “We have stringent systems in place to identify and prevent fraudulent student visa applications,” the official says. “Any individual attempting to cheat or use deception will not succeed and may face a ban from applying for UK visas for 10 years.”
