Deepfake candidates, AI resumes increasingly infiltrate hiring processes

Job fraud enabled by generative AI has HR departments scrambling for solutions
If you’re a job seeker finding the market especially competitive right now, you can perhaps take some solace in knowing that a chunk of your would-be competitors is fake. Biometric deepfakes have infiltrated the hiring pipeline, making deepfake job fraud an issue of concern for HR departments.

While remote work and virtual hiring have opened the door to a broader range of candidates, they have also created a new instability. As Daon puts it in a recent blog, “the same digital infrastructure that enables legitimate remote work has created the perfect conditions for a new breed of recruitment fraud.”

The scale of the problem is illustrated by the U.S. Department of Justice’s revelation that more than 300 U.S. firms had unknowingly hired IT workers with direct ties to North Korea, who intended to funnel money to Pyongyang. Per Daon, “conservative estimates suggest these operatives collectively channeled over 100 million dollars annually to support North Korea’s nuclear and conventional weapons programs.”

Some believe the only way to fix things is by going back to in-person interviews. Others have faith that robust identity verification is up to the task. Regardless, the deepfake fraud problem is only increasing.

Half of employers consider AI resumes to be a form of fraud

According to new survey results from Software Finder, 72 percent of hiring professionals have encountered AI-generated resumes during the application process, and 15 percent have seen face-swapping used in video interviews. Remote hiring has opened the door to video deepfakes, and once bad actors are granted access, the potential for damage is significant. Tech jobs are the main target, followed by marketing, design and finance.

If you’re thinking about using a large language model to make your resume stand out among all those bots, think again: half of respondents say they view “AI-enhanced resumes” as a form of fraud, with nearly 50 percent rejecting candidates based on suspected AI use and 40 percent doing so due to concerns about AI identity manipulation. Per a blog from Software Finder, “AI-based resume manipulation is seen as a greater threat than deepfake video, with 63 percent of recruiters considering it the bigger risk.”

As Daon’s blog notes, “these aren’t trivial white lies about proficiency in Excel or overstated language skills; they’re comprehensive deceptions designed to place individuals with fake identities into positions of trust and access.”

Firms lag in adopting necessary deepfake detection tools

Despite the broad concern, many organizations still have yet to implement adequate deepfake detection tools: only 31 percent say they’re using AI or deepfake detection software. That number is likely to rise over time, as nearly 40 percent of respondents say their company plans to invest in detection tools within the next year.

Regardless, the arrival of deepfake candidates has many believing the hiring process will need to “fundamentally change” within five years. Identity verification must be more stringent, and nearly 7 in 10 respondents say they would support mandatory “live only” interviews to validate candidate identities. There is support for government regulation, with more than 60 percent of hiring professionals backing federal laws requiring job seekers to disclose if they’ve used AI in their application.

“If teams want to stay ahead, it’s time to move from awareness to action,” says the blog. “That means investing in tools that can catch AI-generated content, giving recruiters the training they need, and pushing for stronger safeguards across the platforms we rely on.”

Identity verification, injection attack detection, liveness all key

Daon asserts that “today’s candidates move through virtual pipelines, transmitting their voices across continents, their faces pixelated squares on screens, and their credentials now digital entries on databases that hiring managers access through dashboards rather than manila folders.” However, a piece in the Wall Street Journal suggests that some companies are switching back to in-person job interviews, as a way to guarantee that the person they’re hiring is a real and genuine candidate.

Daon believes the answer is to be found in advanced remote identity verification. “Thorough verification has become non-negotiable in today’s hiring landscape. This means going beyond cursory background checks to validate educational credentials directly with institutions, confirming professional licenses with issuing authorities, and contacting past employers through official channels rather than provided references.”

“The most effective approaches don’t rely on any single countermeasure but instead create multiple verification checkpoints throughout the hiring process.”

In its stack of tools required to manage the threat, Daon lists robust biometric identity verification, document validation, active and passive liveness detection, algorithmic deepfake detection, and injection attack detection.
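The layered approach described above can be pictured as a series of independent checkpoints, where a candidate must clear every one before proceeding. The sketch below is purely illustrative: the checkpoint names follow Daon's list, but the `Candidate` fields, thresholds, and `screen` function are hypothetical stand-ins for what would, in practice, be calls into vendor verification SDKs.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical pre-computed results; a real system would obtain these
# from identity verification and deepfake detection services.
@dataclass
class Candidate:
    document_valid: bool        # ID document validation result
    face_match_score: float     # biometric match against the document photo
    liveness_passed: bool       # active/passive liveness check
    injection_detected: bool    # virtual-camera / injection attack flag
    deepfake_score: float       # algorithmic deepfake likelihood, 0..1

# One entry per verification layer; thresholds here are illustrative.
CHECKPOINTS: list[tuple[str, Callable[[Candidate], bool]]] = [
    ("document validation", lambda c: c.document_valid),
    ("biometric identity verification", lambda c: c.face_match_score >= 0.90),
    ("liveness detection", lambda c: c.liveness_passed),
    ("injection attack detection", lambda c: not c.injection_detected),
    ("deepfake detection", lambda c: c.deepfake_score < 0.50),
]

def screen(candidate: Candidate) -> list[str]:
    """Return the names of all checkpoints the candidate failed."""
    return [name for name, check in CHECKPOINTS if not check(candidate)]
```

Because no single layer is trusted on its own, a candidate who defeats one control (say, a convincing face swap) can still be caught by another (liveness or injection detection), which is the point of running every checkpoint rather than stopping at the first pass.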

“Companies that implement multi-layered verification systems combining robust identity checks, AI-powered deepfake detection, and comprehensive employee education will be best positioned to protect themselves from job recruitment fraud.”
