Firms in scrimmage to define fraud systems, design deepfake prevention frameworks

Deepfakes and synthetic IDs are already a problem; just wait for the next upgrade

Deepfake fraud has proven its potential to cost businesses millions of dollars. But the deepfake dread reverberating through tech circles goes beyond the pocketbook. With their ability to co-opt someone’s identity without consent, to deceive and to disrupt economic and political processes, deepfakes are troubling on an existential level. Put simply, they’re creepy.

And according to security leaders monitoring the deepfake landscape, they’re only going to get creepier. The technology is developing exponentially and new identity fraud tactics are emerging at speed. Generative AI, which many executives had hoped might be a passing fad, has instead become an increasingly common threat vector. And while AI-based deepfake detection tools are available, there is no guarantee that their handling of client data aligns with corporate governance policies.

IDology report shows widespread concern about generative AI

New data from IDology shines a light on the “industrial scale” of fraud being perpetrated with synthetic identities created using generative AI. A release touting the research says 45 percent of fintechs reported increased synthetic identity fraud in the last 12 months, and fully half are concerned that GenAI will create more convincing synthetic identities and better deepfakes.

Per the release, “GenAI has given criminals a path to work faster, scale attacks, and create more believable phishing scams and synthetic identities.” And it is only the beginning: businesses see generative AI-driven attacks as the dominant fraud trend over the next 3-5 years.

The response from IDology is a familiar rallying cry: use AI to fight AI.

“These numbers indicate a need for action,” says James Bruni, Managing Director at GBG IDology. “While Gen AI is being used to escalate fraud tactics, its ability to quickly scrutinize vast volumes of data can also be a boon for fintechs, allowing them to fast-track trusted identities and escalate those that are high-risk. The powerful combination of AI, human fraud expertise and cross-sector industry collaboration will help fintechs verify customers in real-time, authenticate their identities and monitor transactions across the enterprise and beyond to protect against difficult-to-detect types of fraud, such as synthetic identity fraud.”
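
The fast-track-or-escalate workflow Bruni describes can be pictured as a simple triage layer sitting on top of an AI model’s risk score. The sketch below is a minimal, hypothetical illustration of that pattern; the signal names, thresholds, and routing labels are assumptions made for illustration, not IDology’s actual system or API.

```python
# Hypothetical sketch of the "fast-track trusted, escalate high-risk" triage
# pattern described above. All names and thresholds are illustrative
# assumptions, not IDology's actual product or API.

from dataclasses import dataclass

@dataclass
class IdentitySignals:
    model_risk_score: float   # 0.0 (trusted) to 1.0 (high risk), from an AI model
    document_verified: bool   # whether the ID document passed verification
    velocity_alerts: int      # recent applications reusing the same attributes

def triage(signals: IdentitySignals) -> str:
    """Route an identity to auto-approval, step-up verification, or human review."""
    if signals.velocity_alerts > 3:          # attribute reuse is a synthetic-identity tell
        return "escalate"                    # route to a human fraud expert
    if signals.model_risk_score < 0.2 and signals.document_verified:
        return "fast-track"                  # trusted identity, approve in real time
    if signals.model_risk_score > 0.8:
        return "escalate"
    return "step-up"                         # request additional verification

print(triage(IdentitySignals(0.1, True, 0)))   # -> fast-track
print(triage(IdentitySignals(0.9, True, 0)))   # -> escalate
```

The design point in the quote is the division of labor: the model handles volume in real time, while human expertise is reserved for the ambiguous, high-risk tail.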

FS-ISAC proposes deepfake threat taxonomy

The deepfake drum continues beating with the release of a new report from the Financial Services Information Sharing and Analysis Center (FS-ISAC), an industry consortium dedicated to reducing cyber-risk in the global financial sector. Prepared by FS-ISAC’s Artificial Intelligence Risk Working Group, “Deepfakes in the Financial Sector: Understanding the Threats, Managing the Risks” outlines broad categories and “a common language of deepfake threats and controls to counter them.”

Like any industry, financial services brings its own specific context for deepfake fraud. One of the most feared new techniques is deepfake CEO video fraud or, more generally, “C-suite impersonation.” Customer biometrics are a target, and banks themselves are a goldmine for fraudsters committing consumer fraud, often perpetrated through voice authentication systems. Infrastructure can be attacked, and deepfake detection models themselves are often in the crosshairs.

The risks are various: destabilized markets, costly data breaches, humiliation leading to reputational damage.

The meat of FS-ISAC’s paper is its Deepfake Threat Taxonomy, which breaks down threats to organizations by category. “The FS-ISAC Deepfake Taxonomy covers two topics,” says the paper. “The six threats that financial services firms face from deepfakes” and “three primary attack vectors targeting the technologies that detect and prevent deepfakes.” Each defined category has a number of sub-categories, which together offer a broad view of the overall deepfake fraud ecosystem.

“Understanding the different types of threats posed by deepfakes and how they can be taxonomized clarifies the types of controls most suitable to defense,” the paper says. “Financial services institutions should perform a complete threat modeling for each of the threat categories.” A corresponding table of control mechanisms completes the mosaic.
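
As a rough illustration of how a firm might operationalize such a taxonomy, the sketch below encodes threat categories as a machine-readable mapping to candidate controls, so that threat modeling can be run per category as the paper recommends. The category names are drawn loosely from the threats mentioned in this article; the specific entries and controls are illustrative assumptions, not the report’s actual taxonomy or control table.

```python
# Illustrative encoding of a deepfake threat taxonomy as a mapping from
# threat categories to candidate controls. Categories and controls below
# are assumptions for illustration, not FS-ISAC's actual taxonomy.

DEEPFAKE_TAXONOMY = {
    "c_suite_impersonation": {
        "examples": ["deepfake CEO video call requesting a wire transfer"],
        "controls": ["out-of-band callback verification", "dual approval for payments"],
    },
    "voice_authentication_fraud": {
        "examples": ["cloned customer voice used against a call-center system"],
        "controls": ["liveness detection", "multi-factor step-up authentication"],
    },
    "detection_model_attacks": {
        "examples": ["adversarial inputs crafted to evade deepfake detectors"],
        "controls": ["model ensembling", "adversarial robustness testing"],
    },
}

def controls_for(threat: str) -> list[str]:
    """Look up candidate controls to feed into per-category threat modeling."""
    return DEEPFAKE_TAXONOMY.get(threat, {}).get("controls", [])

print(controls_for("voice_authentication_fraud"))
# -> ['liveness detection', 'multi-factor step-up authentication']
```

Keeping the taxonomy in a structured form like this makes it straightforward to audit coverage: every threat category should map to at least one control.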

The fight against deepfakes, says FS-ISAC, will need to be collaborative, vigilant and nimble. “While the threat posed by deepfakes to financial institutions is significant and evolving, a proactive, multi-faceted approach to security can substantially mitigate these risks. The path forward lies in the continuous improvement of detection technologies, coupled with robust security practices and comprehensive awareness programs.”

Advanced deepfake fraud soon to fool everyone’s moms

An article from Fortune.com solicits opinions on the deepfake threat from cyber chiefs at SoftBank, Mastercard and Anthropic – and the diagnosis is grim, suggesting we have entered an “AI cold war.”

“You’ve got the criminal entities moving very quickly, using AI to come up with new types of threats and methodologies to make money,” says Gary Hayslip, chief security officer at investment holding company SoftBank. “That, in turn, pushes back on us with the breaches and the incidents that we have, which pushes us to develop new technologies.”

“In a way it’s like a tidal wave,” Hayslip says of the rate at which new AI technologies are spilling into the market.

Fraud detection is also improving, but companies have concerns about what third-party AI vendors are allowed to do with the data they collect. Hayslip says you “have to be a little paranoid” in assessing which tools and services get integrated into a company’s security ecosystem. Some products will bring unacceptable risk, especially in highly regulated industries like healthcare.

Meanwhile, Alissa Abdullah, deputy CSO at Mastercard, says deepfake scams are getting better and more varied. She describes an emerging attack technique in which AI video and audio deepfakes impersonate unfamiliar representatives of a trusted brand, such as a help desk agent.

“They will call you and say, ‘we need to authenticate you into our system,’ and ask for $20 to remove the ‘fraud alert’ that was on my account,” Abdullah says. “No longer is it wanting $20 billion in Bitcoin, but $20 from 1000 people – small amounts that even people like my mother would be happy to say ‘let me just give it to you.’”
