Deepfake image scams rise; experts say better detection needed for darker skin
Deepfake fraud is still rising, with the latest example uncovered in Hong Kong. Meanwhile, industry insiders warn that detection tools must get better at analyzing darker skin tones.
Hong Kong police have cracked the financial hub's first case of deepfakes being used to deceive banks and other lenders.
Police arrested six people who allegedly doctored eight stolen images and paired them with pilfered Hong Kong ID cards and proofs of address and income. The suspects allegedly used the fraudulent documents to apply for at least 20 loans online. One application was approved for a HK$70,000 (US$8,937) loan, the South China Morning Post reports. The Morning Post is owned by Alibaba, in which the Chinese government holds shares.
The generated images were used to impersonate the people on the ID cards during the online application process, in which financial institutions require applicants to upload scans of their identification documents along with real-time selfies.
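The check being spoofed here is essentially a face match between the uploaded ID photo and the live selfie. The sketch below is a minimal illustration of that idea, not any specific bank's system: the embed_face() function is a toy stand-in for a real face-embedding network, and the 0.6 threshold is an arbitrary assumption. It shows why a deepfake selfie synthesized from the stolen ID photo can pass a bare face match, and why liveness detection gets layered on top.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a real face-embedding network.

    Production systems use a trained model; here we just center and
    normalize pixel values so the sketch runs end to end.
    """
    v = image.astype(np.float64).ravel()
    v -= v.mean()
    return v / (np.linalg.norm(v) + 1e-12)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Embeddings are already unit-normalized, so the dot product
    # is the cosine similarity.
    return float(np.dot(a, b))

def verify_applicant(id_photo: np.ndarray, selfie: np.ndarray,
                     threshold: float = 0.6) -> bool:
    """Accept only if the live selfie matches the ID photo.

    A deepfaked selfie generated from the stolen ID photo can score
    as a match here, which is why banks add liveness checks on top.
    """
    score = cosine_similarity(embed_face(id_photo), embed_face(selfie))
    return score >= threshold

# Demo with random stand-in images.
rng = np.random.default_rng(0)
id_photo = rng.random((64, 64))
print(verify_applicant(id_photo, id_photo))               # True: same image
print(verify_applicant(id_photo, rng.random((64, 64))))   # False: unrelated image
```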
In another deepfake scam in May, a Japanese citizen was coaxed into buying HK$1,700 (US$216) worth of computer game credits. The victim was fooled by a fraudulent video call in which a scammer swapped his own face with that of the CEO of a Hong Kong bank.
Deepfake detection bias could hurt some people more than others
ID industry insiders are warning that deepfake detection methods do not always work on people with darker skin tones. To be effective, training sets must cover all ethnicities, accents, genders, ages and skin tones.
Rijul Gupta, co-founder and CEO of DeepMedia, a vendor that makes a deepfake social media app, told The Guardian that datasets have been heavily skewed towards white middle-aged men. The “inherent bias” in these tools means that they will perform worse when analyzing the images of everyone else.
There will be “an increase of deepfake scams, fraud and misinformation caused by AI that will be highly targeted and focused on marginalized communities,” Gupta says.
Mutale Nkonde, AI policy adviser and CEO of AI for the People, a nonprofit working to push AI into socially responsible, democratic roles, points out that developers are well aware of facial recognition's problems with darker skin tones. But with little regulation governing the sale of facial recognition, she says, the underlying bias continues to be reproduced.
Ellis Monk, professor of sociology at Harvard University and visiting faculty researcher at Google, says he has been trying to solve the problem by providing new datasets for machine learning models. His 10-point Monk Skin Tone Scale provides a broader spectrum of tones than the six-type Fitzpatrick scale.
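To make the dataset point concrete, one hedged sketch of how such a scale can be used in practice is to audit a face dataset's tone distribution before training. The ten MST labels below are real; the per-image annotations and the 5 percent floor are illustrative assumptions, not an established standard.

```python
from collections import Counter

# The Monk Skin Tone (MST) scale defines ten tones, MST-1 (lightest)
# through MST-10 (darkest), versus the six Fitzpatrick types.
MST_TONES = [f"MST-{i}" for i in range(1, 11)]

def audit_tone_balance(labels: list[str], min_share: float = 0.05) -> dict[str, float]:
    """Report MST tones that fall below `min_share` of a training set.

    `labels` is a hypothetical per-image MST annotation; the 5 percent
    floor is an illustrative threshold.
    """
    counts = Counter(labels)
    total = sum(counts.values()) or 1
    shares = {tone: counts.get(tone, 0) / total for tone in MST_TONES}
    return {tone: share for tone, share in shares.items() if share < min_share}

# Example: a dataset skewed toward lighter tones.
labels = ["MST-2"] * 600 + ["MST-3"] * 300 + ["MST-8"] * 40 + ["MST-9"] * 60
print(audit_tone_balance(labels))
# Flags the missing tones (MST-1, MST-4 through MST-7, MST-10) and the
# scarce MST-8 at 4 percent; MST-9 clears the floor at 6 percent.
```

An audit like this only surfaces the skew Gupta describes; fixing it still requires collecting or sourcing images for the underrepresented tones.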