Deepfakes strike at the heart and mind on dating sites, remote student interviews

To date, much political hay has been made of stolen jobs, somewhat less of stolen spots in universities, and almost none of stolen hearts – or deepfake identity theft. But deepfakes are showing up in more and more places; to borrow terminology from the biometric firms that detect them, their use cases are expanding, requiring more active and purpose-made approaches to deepfake detection. Recent research from iProov shows that just 0.1 percent of people can accurately distinguish between real and fake images or video content.
Reality Defender finds few deepfakes on Match Group sites
Match Group, which owns dating and hookup sites Tinder, Hinge, Plenty of Fish, and Match.com, has enlisted Reality Defender to conduct an independent analysis of a representative sample of profile images from Tinder and Hinge.
A release from Match Group says findings revealed that “AI-generated or -manipulated content likely accounts for only a small fraction of content on our platforms – with 99.4 percent of images showing no signs of concerning AI manipulations.”
Even among images that did show signs of manipulation, “in over 88 percent of cases, these manipulated images were not malicious deepfakes but rather authentic users employing face-tuning apps or other types of image filters.”
The dating site conglomerate is quick to note that “this doesn’t mean that deepfakes don’t exist on Tinder or other dating apps.” Match Group says it plans to use the data from Reality Defender’s investigation to develop new deepfake detection tools. Noting that “a prevailing fear among dating app and dating site users is that they’ll accidentally fall for a deepfake,” it also plans to publish an educational guide to deepfakes and AI with visual examples.
In a recent report, Reality Defender emphasizes that “the risks posed by deepfakes are not hypothetical – they are happening now.”
Deepfakes showing up for student credibility interviews
According to The Pie News, Enroly, a platform that streamlines onboarding and arrival processes for universities, students and agents, has reported growing instances of deepfakes being deployed in credibility interviews.
“Our student interviews have revealed instances of advanced technological manipulation, including lip-syncing, impersonation and even the use of deepfake technology,” says Phoebe O’Donnell, head of services at Enroly. “Challenges that were once the realm of science fiction but are now a growing reality.”
The piece quotes a UK Home Office spokesperson, who highlights the department’s measures to prevent deepfake fraud in its own processes. “We have stringent systems in place to identify and prevent fraudulent student visa applications,” the official says. “Any individual attempting to cheat or use deception will not succeed and may face a ban from applying for UK visas for 10 years.”