Deepfake candidates, AI resumes increasingly infiltrate hiring processes

Job fraud enabled by generative AI has HR departments scrambling for solutions

If you’re a job seeker finding the market especially competitive right now, you can perhaps take some solace in knowing that a chunk of your would-be competitors is fake. Biometric deepfakes have infiltrated the hiring pipeline, making deepfake job fraud an issue of concern for HR departments.

While remote work and virtual hiring have opened the door to a broader range of candidates, they have also created a new instability. As Daon puts it in a recent blog, “the same digital infrastructure that enables legitimate remote work has created the perfect conditions for a new breed of recruitment fraud.”

The scale of the problem is illustrated by the U.S. Department of Justice’s revelation that more than 300 U.S. firms had unknowingly hired IT workers with direct ties to North Korea, who intended to funnel money to Pyongyang. Per Daon, “conservative estimates suggest these operatives collectively channeled over 100 million dollars annually to support North Korea’s nuclear and conventional weapons programs.”

Some believe the only way to fix things is by going back to in-person interviews. Others have faith that robust identity verification is up to the task. Regardless, the deepfake fraud problem is only increasing.

Half of employers consider AI resumes to be a form of fraud

According to new survey results from Software Finder, 72 percent of hiring professionals have encountered AI-generated resumes during the application process, and 15 percent have seen face-swapping used in video interviews. Remote hiring has opened the door to video deepfakes, and once bad actors are granted access, the potential for damage is significant. Tech jobs are the main target, followed by marketing, design and finance.

If you’re thinking about using a large language model to make your resume stand out among all those bots, think again: half of respondents say they view “AI-enhanced resumes” as a form of fraud, with nearly 50 percent rejecting candidates based on suspected AI use and 40 percent doing so due to concerns about AI identity manipulation. Per a blog from Software Finder, “AI-based resume manipulation is seen as a greater threat than deepfake video, with 63 percent of recruiters considering it the bigger risk.”

As Daon’s blog notes, “these aren’t trivial white lies about proficiency in Excel or overstated language skills; they’re comprehensive deceptions designed to place individuals with fake identities into positions of trust and access.”

Firms lag in adopting necessary deepfake detection tools

Despite the broad concern, many organizations still have yet to implement adequate deepfake detection tools: only 31 percent say they’re using AI or deepfake detection software. That number is likely to rise over time, as nearly 40 percent of respondents say their company plans to invest in detection tools within the next year.

Regardless, the arrival of deepfake candidates has many believing the hiring process will need to “fundamentally change” within five years. Identity verification must be more stringent, and nearly 7 in 10 respondents say they would support mandatory “live only” interviews to validate candidate identities. There is support for government regulation, with more than 60 percent of hiring professionals backing federal laws requiring job seekers to disclose if they’ve used AI in their application.

“If teams want to stay ahead, it’s time to move from awareness to action,” says the blog. “That means investing in tools that can catch AI-generated content, giving recruiters the training they need, and pushing for stronger safeguards across the platforms we rely on.”

Identity verification, injection attack detection, liveness all key

Daon asserts that “today’s candidates move through virtual pipelines, transmitting their voices across continents, their faces pixelated squares on screens, and their credentials now digital entries on databases that hiring managers access through dashboards rather than manila folders.” However, a piece in the Wall Street Journal suggests that some companies are switching back to in-person job interviews, as a way to guarantee that the person they’re hiring is a real and genuine candidate.

Daon believes the answer is to be found in advanced remote identity verification. “Thorough verification has become non-negotiable in today’s hiring landscape. This means going beyond cursory background checks to validate educational credentials directly with institutions, confirming professional licenses with issuing authorities, and contacting past employers through official channels rather than provided references.”

“The most effective approaches don’t rely on any single countermeasure but instead create multiple verification checkpoints throughout the hiring process.”

In its stack of tools required to manage the threat, Daon lists robust biometric identity verification, document validation, active and passive liveness detection, algorithmic deepfake detection, and injection attack detection.
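The layered idea behind that stack can be illustrated with a minimal sketch. Everything below is hypothetical: the threshold values, the `Candidate` fields, and the checkpoint functions are stand-ins for whatever scores a real verification vendor's APIs would return. The point is structural: each layer is an independent checkpoint, and a candidate must clear every one, so a single failed layer fails the whole pipeline.

```python
# Hypothetical sketch of a multi-checkpoint verification pipeline.
# Layer names mirror the stack described above; scores, thresholds,
# and the Candidate shape are illustrative assumptions, not a real API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    document_valid: bool        # result of document validation
    face_match_score: float     # ID photo vs. live selfie similarity
    liveness_score: float       # active/passive liveness result
    deepfake_score: float       # higher = more likely a genuine feed
    injection_detected: bool    # virtual-camera / feed-injection flag

# Each checkpoint returns (layer_name, passed); thresholds are made up.
CHECKPOINTS: list[Callable[[Candidate], tuple[str, bool]]] = [
    lambda c: ("document_validation", c.document_valid),
    lambda c: ("biometric_identity", c.face_match_score >= 0.85),
    lambda c: ("liveness", c.liveness_score >= 0.90),
    lambda c: ("deepfake_detection", c.deepfake_score >= 0.80),
    lambda c: ("injection_attack", not c.injection_detected),
]

def verify(candidate: Candidate) -> tuple[bool, list[str]]:
    """Run every layer and fail closed: any single failed checkpoint
    rejects the candidate, and all failed layer names are reported."""
    failures = [name
                for name, ok in (check(candidate) for check in CHECKPOINTS)
                if not ok]
    return (not failures, failures)
```

The fail-closed aggregation is the design choice that matches the article's point: no single countermeasure is decisive on its own, but a fraudster has to defeat every checkpoint simultaneously to get through.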

“Companies that implement multi-layered verification systems combining robust identity checks, AI-powered deepfake detection, and comprehensive employee education will be best positioned to protect themselves from job recruitment fraud.”
