US grapples with deepfakes as barriers to successful exploits fall
Deepfakes are accessible enough to have graduated from major heists and state-level misinformation to the more mundane world of fake job applications. That warning from the FBI comes just as the Federal Trade Commission calls for innovation beyond AI to combat online harms including deepfakes.
The Federal Bureau of Investigation’s Internet Crime Complaint Center, or IC3, has issued a public service announcement warning that applicants for remote work positions, particularly in IT, have been using deepfakes to defraud job interviewers.
The FBI says stolen personally identifiable information is being used in these applications, and that the fakes were often betrayed by audio that did not align closely with the applicant’s on-camera video. The jobs targeted were often those that would grant access to proprietary and financial information.
“Modern day cybercriminals have the knowledge, tools and sophistication to create highly realistic deepfakes while leveraging stolen personally identifiable information (PII) to pose as real people and deceive companies into hiring them,” Jumio CTO Stuart Wells told Biometric Update in an email. “As an employee, hackers can steal a wide range of confidential data, from customer and employee information to company financial reports.”
Wells notes that a series of similar warnings has come from federal agencies recently. Some companies victimized by the fraud have unknowingly violated sanctions on North Korea and could face serious penalties.
“As workforce operations remain widely remote or hybrid, many organizations have no way of truly knowing the employees and contractors they are hiring are legitimate candidates,” Wells adds. “Tougher security measures are needed to detect deepfakes and thwart these highly advanced cybercriminals. Biometric authentication – which leverages a person’s unique human traits to verify identity – is a safe, secure security measure that can be incorporated into the workforce onboarding process and every employee login to guarantee the person signing into their systems is who they claim to be.”
FTC sees limit to AI defense against AI attacks
The Federal Trade Commission suggests that artificial intelligence has a role in defending Americans from online harms including deepfakes, illegal and extremist content, and election disinformation.
The FTC’s 82-page report, ‘Combatting Online Harms Through Innovation’, makes a series of recommendations ranging from having “humans in the loop” to pushing for legislation.
The section on deepfakes notes efforts by agencies including the Department of Homeland Security and the Defense Advanced Research Projects Agency, as well as academic researchers and technology vendors, to define and defend against deepfake attacks.
Algorithmic transparency and accountability are also discussed, but the FTC ultimately concludes that “AI is no magical shortcut.” Quoting Tarleton Gillespie, a researcher with Cornell University and Microsoft, the commission notes that “platforms dream of electric shepherds,” but it does not share those dreams.