Call me Fake Ishmael: for executives, deepfakes present a gargantuan problem

Injection attacks, vishing and social media harvesting make digital impersonation easier

Anyone who’s read the classic Moby Dick knows that whales are hard to catch – but if you nab one, the returns are abundant. As it goes with cetaceans, so it goes with executive impersonation fraud. A new post from Reality Defender looks at the era of “whaling attacks” ushered in by advances in generative AI and freely available deepfake engines.

It’s worth noting that the “whales” in whaling are not necessarily the executives themselves, but the Moby Dick-sized payouts to be found in pretending to be in charge. Fraudsters use the stolen or hijacked identities of high-ranking executives to instruct employees to transfer large sums, as in the case of the deepfake Zoom call that cost UK firm Arup $25 million in fraudulent transfers from its Hong Kong office. In this, whalers are less like Ahab and more like the protagonist of the Dune series, Paul Atreides, riding giant sandworms to attack more lowly beings.

Nonetheless, “giant sandworming attacks” are not yet a thing in fraud circles; for now, it’s whaling on an ever-greater scale. “Once limited to fraudulent emails posing as urgent requests from executives, cybercriminals now leverage generative AI to exploit voice, likeness, and digital identity,” says Reality Defender. “Cases of AI-powered fraud utilizing executive impersonation are actively draining corporate accounts, exposing confidential data, and destroying brand integrity.”

The New York deepfake detection firm identifies three primary attack vectors for whalers. There is vishing, or voice phishing, in which deepfake voices generated with AI audio tools convincingly mimic executives. Video conferencing exploits see fraudsters injecting fake participants into meetings (as was the case in Hong Kong). And in executive brand manipulation, attackers “use AI-generated content to impersonate corporate leaders on public-facing platforms, launching disinformation campaigns or engaging in financial fraud.”

The blog cites research from Truecaller showing that voice-based fraud results in $25 billion in annual losses. “Vishing attacks prey on human psychology and workplace expectations,” it says. “A well-trained employee might question an email request for funds, but when the voice on the line sounds exactly like their CFO’s, the decision to comply feels natural.”

In some cases, the executives are the whales. “Attackers don’t just impersonate leadership to deceive employees – they create deepfake versions of legal advisors or regulators to manufacture a sense of urgency and manipulate executives into high-risk actions impacting companies at the highest levels.”

Likewise, the quality of deepfake videos is now such that whole conferences can be hijacked with AI-assisted fake video and audio. “The consequences are far-reaching, as malicious actors can exploit video deepfakes beyond financial fraud to conduct corporate and government espionage.” In light of recent news regarding the lax data security practices of senior U.S. government officials, this is a significant problem.

The public nature of life as a C-suite executive – online, on television, at public events – makes it easy for fraudsters to mine material. “Attackers refine their approaches through reputation leveraging, studying an executive’s online presence and communication style to craft convincing impersonations,” says Reality Defender. “Fabricated communications can seriously affect brand reputation and trigger rapid stock price volatility.” The issue is worsening with the emergence of cross-channel validation, in which fraudulent posts appearing simultaneously on different platforms lend one another an air of legitimacy.

Deepfake lifesavers include predefined authentication protocols, education

The defenses required to keep the deepfake leviathan at bay are available. Verification protocols that require secondary confirmation via a different communication channel can help secure high-value transactions. Deepfake detection products can catch attacks early. But, says Reality Defender, “educating employees on deepfake threats is now just as critical as phishing awareness.”
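
As a rough illustration of what such a secondary-confirmation gate might look like in practice, here is a minimal Python sketch; the threshold, the send_sms_challenge helper and the console prompt are hypothetical stand-ins, not Reality Defender’s product or any bank’s actual workflow.

```python
import secrets

HIGH_VALUE_THRESHOLD = 10_000  # illustrative cutoff; real limits vary by firm

def send_sms_challenge(phone: str, code: str) -> None:
    """Hypothetical helper: deliver the one-time code over a channel
    other than the one the transfer request arrived on."""
    print(f"[sms -> {phone}] confirmation code: {code}")

def verify_transfer(amount: float, requester_phone: str) -> bool:
    """Gate high-value transfers behind a second-channel confirmation."""
    if amount < HIGH_VALUE_THRESHOLD:
        return True  # low-value requests proceed without extra friction
    code = f"{secrets.randbelow(1_000_000):06d}"  # random six-digit code
    send_sms_challenge(requester_phone, code)
    supplied = input("Enter the code sent to the requester's phone: ")
    # compare_digest avoids timing side channels on the comparison
    return secrets.compare_digest(supplied.strip(), code)

if __name__ == "__main__":
    approved = verify_transfer(25_000_000, "+1-555-0100")
    print("transfer approved" if approved else "transfer blocked")
```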

Predefined authentication protocols can keep deepfakes out of sensitive meetings. Should a suspect participant gain entry, employees should have “a direct channel to verify identities without hesitation.”

Real-time monitoring of streams with algorithmic analysis can detect manipulation in video feeds. Likewise, monitoring of executive digital footprints can detect impersonation attempts before they do damage.
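
For a sense of how the plumbing of such stream monitoring fits together, here is a minimal sketch using OpenCV to pull frames from a feed; the manipulation_score function is a hypothetical stand-in for a real detection model, not any vendor’s algorithm.

```python
import cv2  # pip install opencv-python

ALERT_THRESHOLD = 0.8  # illustrative score above which a frame is flagged

def manipulation_score(frame) -> float:
    """Hypothetical stand-in for a deepfake detector; a real system
    would run a trained model here and return a manipulation probability."""
    return 0.0  # placeholder: treats every frame as clean

def monitor(source: int = 0) -> None:
    """Read frames from a video source and flag suspicious ones."""
    cap = cv2.VideoCapture(source)  # 0 = default webcam
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break  # stream ended or dropped
            score = manipulation_score(frame)
            if score >= ALERT_THRESHOLD:
                print(f"ALERT: possible manipulated frame (score={score:.2f})")
    finally:
        cap.release()

if __name__ == "__main__":
    monitor()
```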

Finally, “having a crisis playbook in place can mitigate reputational damage if a whaling attack occurs.”

Reality Defender CEO Ben Colman will be on hand in Las Vegas next week to present at the Transact conference. His talk, “Trust in the Age of AI: Defending Against the Rising Deepfake Threat,” takes place Thursday, April 3, 2025 at 10:15 a.m. PT.

Philippine government looks to launch anti-deepfake app

With the UK declaring deepfakes to be the “greatest challenge of the online age,” governments around the world are mustering deepfake defenses. A release from the Philippine News Agency says two government agencies, the Presidential Communications Office (PCO) and the Cybercrime Investigation Coordinating Center (CICC), have signed a memorandum of agreement (MOA) on collaborating to fight fake news and rampant scams.

A new scam hotline and digital reporting feature are being introduced, accompanied by a campaign of anti-scam and deepfake education. CICC Executive Director Undersecretary Alexander Ramos says a national task force will be created and an AI app launched to “intensify the government’s campaign against misinformation and disinformation, especially amid the rise of deepfakes.”

Some PHP2 million (about US$35,000) is to be allotted for “‘regionalized’ software that would be purchased from a foreign developer.”

The government says it has tested a version of the app, which can detect deepfake content in 30 seconds.

Deepfakes are pervasive in the Philippines. A report from PhilStar Global cites research from Sumsub, showing that the country experienced the biggest jump in deepfakes among all Asia-Pacific nations in 2023. Deepfakes in the region grew by an average of 1,530 percent in 2023 compared to the previous year. There have been instances of political deepfakes showing the president issuing bogus military orders.

The piece argues that “it’s about time that Congress take deepfakes more seriously and that the President certify an anti-deepfake bill as an urgent measure before more lives and reputations are destroyed.”

Indian subcommittee needs more time for deepfake assessment

In India, a subcommittee of the Ministry of Electronics and Information Technology (MeitY) is “examining the issue of deepfakes,” according to an article in The Indian Express – and needs three more months to complete its consultation and submit its report.

The Delhi High Court directed MeitY to also “consider suggestions by creative professionals and artists, as well as by the advertising industry’s self-regulatory body, the Advertising Standards Council of India (ASCI), while deliberating over rules and regulations pertaining to deepfakes.”

New deepfake detection tools from Trust Stamp, Loti AI

A couple of biometrics firms are pursuing new deepfake and injection attack prevention products. Trust Stamp has announced that the United States Patent and Trademark Office has allowed its patent application for a “Shape Overlay for Proof of Liveness” mechanism, which a release says improves the security of remote person authentication by defending against deepfake and injection attacks.

This approach “requires users to interact with randomly generated shape overlays on their device screens, ensuring real-time verification of a live subject.” Trust Stamp’s Chief Science Officer Dr. Norman Poh says the tool “offers a highly adaptable challenge-response mechanism that can be implemented on any smartphone, regardless of make or budget.”
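
As an illustration of the general challenge-response pattern described here, the sketch below generates a random shape challenge and accepts only a timely, on-target response; it is a hypothetical simplification for clarity, not Trust Stamp’s patented mechanism.

```python
import random
import time
from dataclasses import dataclass

SHAPES = ["circle", "triangle", "square", "star"]
CHALLENGE_TTL = 5.0  # seconds allowed for a response; illustrative only

@dataclass
class Challenge:
    shape: str
    x: float  # normalized screen coordinates in [0, 1]
    y: float
    issued_at: float

def issue_challenge() -> Challenge:
    """Generate a random shape at a random on-screen position."""
    return Challenge(random.choice(SHAPES), random.random(),
                     random.random(), time.monotonic())

def verify_response(ch: Challenge, tapped_shape: str,
                    tap_x: float, tap_y: float, tol: float = 0.1) -> bool:
    """Accept only a timely tap on the correct shape near its position;
    a replayed or injected feed cannot anticipate the random challenge."""
    fresh = time.monotonic() - ch.issued_at <= CHALLENGE_TTL
    on_target = abs(tap_x - ch.x) <= tol and abs(tap_y - ch.y) <= tol
    return fresh and tapped_shape == ch.shape and on_target

if __name__ == "__main__":
    ch = issue_challenge()
    print(f"Tap the {ch.shape} at ({ch.x:.2f}, {ch.y:.2f})")
    print(verify_response(ch, ch.shape, ch.x, ch.y))  # timely correct tap -> True
```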

Loti AI has broadened access to its likeness protection technology, which until now has been available only to “public figures and A-list celebrities.” A release says the technology is now freely available to anyone who wants to safeguard their digital reputation.

Loti’s digital identity protection and automated content takedown services platform scans the public internet daily for deepfakes, impersonations, adult content and misleading unauthorized content. Unauthorized likenesses can be automatically flagged, and the firm says its tools result in a 95 percent takedown success rate within 17 hours.

“The internet is getting out of hand, and people’s digital reputations are at risk like never before,” says Luke Arrigoni, CEO of Loti AI. “From deepfakes to unauthorized illicit content, these threats are no longer limited to celebrities. Our goal is simple: to help you reach zero – zero images of you online that you haven’t approved.”
