Deepfake fraud fears mount after Progress Corp hack

Fears are mounting among some commentators about the rise of deepfake scams, following the hack of U.S. file transfer software firm Progress Corp, revealed in June.
The vulnerability in the firm’s software reportedly allowed Russian hacking group Cl0p to steal consumers’ personal data held by large corporates such as British Airways, Shell, and PwC, as well as by U.S. government bodies such as the California state pension system.
Haywood Talcove, the chief executive of LexisNexis Risk Solutions’ Government division, told The Financial Times that this data, which may include driving licenses, photographs, dates of birth, and health and pension information, could be used in conjunction with software to carry out deepfake scams.
“I am not a criminal, but I’ve been studying this for a long time — if I had this much information, and it was so pristine, the sky is the limit,” he explained.
For example, the executive claimed these deepfake techniques could be used to fake the video selfies that many U.S. state agencies use, alongside biometrics, to verify identities, allowing criminals to fraudulently collect unemployment benefits, food stamps, or even college loans.
Talcove claimed just one successfully faked identity could defraud the government to the tune of $2 million.
When announcing the launch of its deepfake detection tool last month, Sumsub noted a dramatic increase in deepfakes among the fraud incidents it observes. The company is one of many developing tools to defend against the output of generative AI.
Jumio recently warned that consumers are overconfident in their ability to spot deepfakes.
Outside the West, many governments are also recognizing the importance of preventing deepfake fraud and the role technology can play in doing so.
ComplyAdvantage notes that, in a speech at an anti-scam conference in Singapore, Sun Xueling, minister of state for the Ministry of Home Affairs and Ministry of Social and Family Development, pointed to deepfake fraud and touched on how technology could help combat these emerging forms of fraud, for example through the introduction of scam detection registries and better fraud surveillance.
The minister also pointed to international cooperation, including cross-border data sharing, which she said could improve fraud prevention.
An article in Finance Magnates has highlighted the potential benefits of voice- and gesture-based payments for users, such as better hygiene, an improved customer experience, no need for a physical device such as a card, and, in the future, possible integration with augmented reality.
However, the article also notes that the rise of deepfakes could increase the risk of voice-based payment fraud, and says the industry needs to explore “effective measures such as prioritizing user education, employing multi-factor authentication, and leveraging AI-powered defense mechanisms.”
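As a rough illustration of how those measures might fit together, the sketch below shows one way a voice-payment flow could combine an AI-powered anti-spoofing check with a conventional second factor, so that a cloned voice alone is not enough to approve a transaction. This is a minimal sketch under assumed names: authorize_voice_payment, check_spoof_score, check_otp, and the 0.9 threshold are hypothetical and do not come from the article or any vendor mentioned in it.

```python
# Illustrative sketch only: layering an AI anti-spoofing check with a
# second authentication factor before a voice-initiated payment is approved.
# All names here are hypothetical placeholders, not any vendor's real API.

from typing import Callable

SPOOF_THRESHOLD = 0.9  # assumed cut-off for "likely genuine" speech


def authorize_voice_payment(
    voice_sample: bytes,
    otp: str,
    check_spoof_score: Callable[[bytes], float],
    check_otp: Callable[[str], bool],
) -> bool:
    """Approve a voice-initiated payment only if both factors pass."""
    # Factor 1: the audio must pass the AI anti-spoofing model.
    if check_spoof_score(voice_sample) < SPOOF_THRESHOLD:
        return False  # treat low-confidence audio as a possible deepfake

    # Factor 2: a one-time passcode delivered out of band must match.
    return check_otp(otp)


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    demo_score = lambda audio: 0.97           # pretend the detector says "genuine"
    demo_otp = lambda code: code == "123456"  # pretend the user typed the right code

    print(authorize_voice_payment(b"raw-audio", "123456", demo_score, demo_otp))  # True
    print(authorize_voice_payment(b"raw-audio", "000000", demo_score, demo_otp))  # False
```

Passing the detector and passcode checks in as callables keeps the flow vendor-neutral; in principle, any of the detection tools discussed below could sit behind the anti-spoofing step.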
Numerous biometrics-related firms are stepping up and providing solutions aimed at combating deepfake fraud.
Earlier this week, Amsterdam-based facial recognition technology provider VisionLabs launched a deepfake detection product, claiming its algorithms can provide an accuracy rate between 92 and 100 percent.
ElevenLabs, a U.S.-based start-up that offers a speech synthesis platform publishers and creators can use to make synthetic audio content, recently unveiled a new tool that allows users to upload any audio sample to identify whether it contains audio generated with the company’s own AI.
Other vendors recently releasing tools to fight the threat of deepfake fraud include Daon, Paravision and ID R&D.
Article Topics
biometric liveness detection | biometrics | deepfake detection | deepfakes | payments | spoof detection