
Deepfake voice fraud dupes Swiss businessman into transferring millions

Proliferation of AI fraud puts another victim into deepfake hall of fame

CEO fraud enabled by voice deepfake technology has claimed another victim, this time in Switzerland. Deploying audio manipulated to sound like a trusted business partner, fraudsters bamboozled an entrepreneur from the canton of Schwyz into transferring “several million Swiss francs” to a bank account in Asia.

According to a brief from SRF, the deception was perpetrated through a series of phone calls conducted over a two-week period, and was not discovered until after a number of financial transfers had occurred. The crime is currently under investigation.

The fleecing of Schwyz joins the 25-million-dollar Hong Kong Zoom call in the lore of damaging deepfake attacks leading to steep losses. The era of freely available generative AI products has given fraudsters a whole new playbook. And navigating the deepfake detection market (covered in Biometric Update’s 2025 Deepfake Detection Market Research and Buyer’s Guide) has become a new operational priority for enterprises.

Per Aurigin.ai, which collaborates with Swisscom Digital Trust on voice-based liveness verification and AI deepfake detection, most KYC systems still rely on visual cues, such as head movements, document tilts, or challenge questions, to confirm a person’s identity. “These measures were effective against basic fraud but fail against today’s AI-generated faces and voices,” a company blog says. “Fraudsters no longer need stolen documents; an online video or short audio clip is often enough to create a convincing fake.”

Deepfake technology has already evolved to produce synthetic media that is undetectable to the human eye and ear. Recent research from Queen Mary University of London shows that the average listener can no longer distinguish deepfake voices from those of real human beings.

The issue is complicated by cultures of trust and hierarchy that make it counterintuitive to question one’s superiors, and by social engineering techniques that exploit established relationships.

Pindrop helps credit union, insurer beef up deepfake defense

Pindrop has been listening in on the audio deepfake ecosystem with its real-time voice and device intelligence system. The firm has seen how the deepfake crisis has rattled the foundational trust on which many financial institutions are built. A call from the CEO is no longer to be taken for granted, no matter how real it sounds.

“Deepfake tools, synthetic voices, and real-time manipulation technologies are now widely accessible, scalable and realistic,” says a blog by Pindrop Senior Product Marketing Manager Ketuman Sardesai. “This new wave of AI-driven deception is eroding one of the oldest and most dependable signals in financial services: the human voice.”

A few seconds of audio are enough to generate a realistic voice clone, and manipulated or stitched audio compounds the risk. “In this environment, trust can’t rely solely on sound. It must be rooted in patterns, signals, and analysis that operate far beyond the threshold of human perception.”
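To make “analysis beyond the threshold of human perception” concrete, the toy sketch below computes spectral flatness per audio frame — one example of a low-level acoustic feature a machine can measure but a listener cannot hear. This is an illustrative measure only, not Pindrop’s method; the frame length and sample rate are arbitrary assumptions.

```python
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum.
    Noise-like spectra score high; tonal content scores near zero."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12  # floor avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def frame_scores(audio: np.ndarray, frame_len: int = 1024) -> list:
    # Slice the signal into non-overlapping frames and score each one.
    n = len(audio) // frame_len
    return [spectral_flatness(audio[i * frame_len:(i + 1) * frame_len])
            for i in range(n)]

# Sanity check: white noise is far flatter than a 440 Hz tone.
noise = np.random.default_rng(0).normal(size=8192)
tone = np.sin(2 * np.pi * 440 * np.arange(8192) / 16000)
print(np.mean(frame_scores(noise)), np.mean(frame_scores(tone)))
```

Real detection systems track many such features at once and compare them against statistics of genuine speech, which is precisely what no human ear can do in real time.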

Sardesai presents the case study of Michigan State University Federal Credit Union (MSUFCU), which faced rising call times and a spike in fraud attempts, leading it to realize traditional authentication methods like knowledge-based questions and behavioral cues had become obsolete.

“If a fraudster can convincingly imitate someone else’s voice, then questions like ‘What’s your mother’s maiden name?’ become little more than theater,” he says. “Slow authentication becomes both a friction point and a symptom of systems trying to compensate for signals that no longer carry meaning.”

MSUFCU implemented Pindrop’s voice authentication and real-time analysis to evaluate a multilayered set of security signals. The tool examines acoustic features and patterns, device characteristics and metadata, behavioral analysis, and call and network data to provide a robust defense against fake voices.
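A multilayered evaluation of this kind can be sketched as weighted score fusion: each signal layer contributes a suspicion score, so a convincing voice alone cannot clear a call. The signal names, weights, and thresholds below are invented for illustration and do not reflect Pindrop’s actual scoring.

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    # Each score in [0, 1]; higher means more suspicious.
    acoustic_anomaly: float   # e.g. synthetic-voice classifier output
    device_mismatch: float    # device/codec metadata vs. account history
    behavior_anomaly: float   # pacing, pauses, response latency
    network_risk: float       # carrier, routing, spoofing indicators

WEIGHTS = {
    "acoustic_anomaly": 0.4,
    "device_mismatch": 0.25,
    "behavior_anomaly": 0.2,
    "network_risk": 0.15,
}

def risk_score(s: CallSignals) -> float:
    # Weighted fusion across independent layers.
    return sum(w * getattr(s, name) for name, w in WEIGHTS.items())

def decide(s: CallSignals, step_up: float = 0.35, block: float = 0.7) -> str:
    score = risk_score(s)
    if score >= block:
        return "block"
    if score >= step_up:
        return "step-up"  # route to additional verification
    return "allow"

# A convincing voice clone (low acoustic anomaly) is still flagged
# by device and network signals.
clone = CallSignals(acoustic_anomaly=0.2, device_mismatch=0.9,
                    behavior_anomaly=0.5, network_risk=0.8)
print(decide(clone))
```

The design point is that the decision never rests on one signal: degrading any single layer, even the voice itself, leaves the others intact.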

In doing so, Sardesai says, “the institution regained control of the one thing fraudsters were trying to claim: credibility.” Efficiency and customer satisfaction improved, and losses came down.

A second case study shows Pindrop deploying its Pindrop Pulse product to help a large U.S. insurer detect deepfakes and synthetic voice activity in its contact center. The insurer was spooked by the Hong Kong incident into strengthening its deepfake defenses.

“The insurer had been using Pindrop Protect to detect fraud attempts and Pindrop Passport to authenticate customers in the contact center,” says the study. “To strengthen its contact center authentication strategy, the insurer realized it needed to expand the ecosystem to include an AI-based deepfake detection solution like Pindrop Pulse.”

The story is the same: traditional authentication solutions are no longer reliable. Pindrop’s advice is to get ahead of the problem and implement effective deepfake detection and fraud prevention before an incident leads to major losses. No one wants to be the next canton of Schwyz.
