
Deepfake voice fraud dupes Swiss businessman into transferring millions

Proliferation of AI fraud puts another victim into deepfake hall of fame

CEO fraud enabled by voice deepfake technology has claimed another victim, this time in Switzerland. Deploying audio manipulated to sound like a trusted business partner, fraudsters bamboozled an entrepreneur from the canton of Schwyz into transferring “several million Swiss francs” to a bank account in Asia.

According to a brief from SRF, the deception was perpetrated through a series of phone calls conducted over a two-week period, and was not discovered until after a number of financial transfers had occurred. The crime is currently under investigation.

The fleecing of Schwyz joins the 25-million-dollar Hong Kong Zoom call in the lore of damaging deepfake attacks leading to steep losses. The era of freely available generative AI products has given fraudsters a whole new playbook. And navigating the deepfake detection market (covered in Biometric Update’s 2025 Deepfake Detection Market Research and Buyer’s Guide) has become a new operational priority for enterprises.

Per Aurigin.ai, which collaborates with Swisscom Digital Trust on voice-based liveness verification and AI deepfake detection, most KYC systems still rely on visual cues, such as head movements, document tilts, or challenge questions, to confirm a person’s identity. “These measures were effective against basic fraud but fail against today’s AI-generated faces and voices,” says a company blog. “Fraudsters no longer need stolen documents; an online video or short audio clip is often enough to create a convincing fake.”

Deepfake technology can already produce synthetic media that is undetectable to the human eye and ear. Recent research from Queen Mary University of London shows that the average listener can no longer distinguish between deepfake voices and those of real human beings.

The issue is complicated by cultures of trust and hierarchy that make it counterintuitive to question one’s superiors, and by social engineering techniques that exploit established relationships.

Pindrop helps credit union, insurer beef up deepfake defense

Pindrop has been listening in on the audio deepfake ecosystem with its real-time voice and device intelligence system. The firm has seen how the deepfake crisis has rattled the foundational trust on which many financial institutions are built. A call from the CEO is no longer to be taken for granted, no matter how real it sounds.

“Deepfake tools, synthetic voices, and real-time manipulation technologies are now widely accessible, scalable and realistic,” says a blog by Pindrop Senior Product Marketing Manager Ketuman Sardesai. “This new wave of AI-driven deception is eroding one of the oldest and most dependable signals in financial services: the human voice.”

A few seconds of audio are enough to generate a realistic voice clone, and manipulated or stitched-together recordings add another layer of deception. “In this environment, trust can’t rely solely on sound. It must be rooted in patterns, signals, and analysis that operate far beyond the threshold of human perception.”
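What does a signal “beyond the threshold of human perception” look like? One classic example from audio analysis is frame-level spectral flatness, which quantifies how tone-like or noise-like a short stretch of sound is. The sketch below is purely illustrative; it is not Pindrop’s method, and the assumption that such statistics help separate synthetic from natural speech is ours, not the source’s.

```python
import cmath
import math
import random

def power_spectrum(frame):
    """Naive DFT power spectrum (illustration only; real systems use an FFT)."""
    n = len(frame)
    return [
        abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))) ** 2
        for k in range(n // 2)
    ]

def spectral_flatness(frame, eps=1e-12):
    """Geometric mean / arithmetic mean of the power spectrum.

    Close to 1.0 for noise-like frames, close to 0.0 for pure tones --
    a number no human listener consciously perceives.
    """
    spec = power_spectrum(frame)
    log_mean = sum(math.log(p + eps) for p in spec) / len(spec)
    return math.exp(log_mean) / (sum(spec) / len(spec) + eps)

# A pure tone (energy in one DFT bin) versus broadband noise (energy everywhere).
tone = [math.sin(2 * math.pi * 8 * t / 256) for t in range(256)]
rng = random.Random(0)
noise = [rng.uniform(-1.0, 1.0) for _ in range(256)]

flat_tone = spectral_flatness(tone)    # near 0
flat_noise = spectral_flatness(noise)  # well above the tone's score
```

Production systems fuse dozens of such features, machine-learned rather than hand-picked, but the principle is the same: measure what the ear cannot.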

Sardesai presents the case study of Michigan State University Federal Credit Union (MSUFCU), which faced rising call times and a spike in fraud attempts, leading it to realize traditional authentication methods like knowledge-based questions and behavioral cues had become obsolete.

“If a fraudster can convincingly imitate someone else’s voice, then questions like ‘What’s your mother’s maiden name?’ become little more than theater,” he says. “Slow authentication becomes both a friction point and a symptom of systems trying to compensate for signals that no longer carry meaning.”

MSUFCU implemented Pindrop’s voice authentication and real-time analysis to evaluate a multilayered set of security signals. The tool examines acoustic features and patterns, device characteristics such as metadata and behavioral analysis, call data and network insights to provide a robust defense against fake voices.
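Conceptually, a multilayered evaluation like this reduces to fusing independent risk signals into a single decision. The Python sketch below is a hypothetical illustration of that idea only; the signal names, weights, and thresholds are invented for the example and do not describe Pindrop’s product.

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    """Hypothetical per-call telemetry; real products expose different signals."""
    voice_match: float       # 0..1, similarity to the enrolled voiceprint
    liveness: float          # 0..1, acoustic liveness estimate (low = synthetic)
    device_known: bool       # device/network metadata matches the caller's history
    behavior_anomaly: float  # 0..1, deviation from the caller's usual patterns

def risk_score(s: CallSignals) -> float:
    """Weighted fusion of independent signals into one 0..1 risk score.

    Weights are illustrative, not calibrated on any real data.
    """
    score = 0.0
    score += 0.35 * (1.0 - s.voice_match)
    score += 0.30 * (1.0 - s.liveness)
    score += 0.15 * (0.0 if s.device_known else 1.0)
    score += 0.20 * s.behavior_anomaly
    return score

def decision(s: CallSignals) -> str:
    """Map the fused score to an action: pass, step-up verification, or block."""
    r = risk_score(s)
    if r < 0.25:
        return "authenticate"
    if r < 0.6:
        return "step-up"
    return "block"

# A convincing voice clone can still trip the other layers: good voiceprint
# match, but poor liveness, unknown device, and anomalous behavior.
clone_attempt = CallSignals(voice_match=0.9, liveness=0.1,
                            device_known=False, behavior_anomaly=0.9)
outcome = decision(clone_attempt)  # "block"
```

The design point is that no single layer is trusted on its own: even a near-perfect voiceprint match cannot outvote failing liveness, device, and behavior checks.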

In doing so, Sardesai says, “the institution regained control of the one thing fraudsters were trying to claim: credibility.” Efficiency and customer satisfaction improved, and losses came down.

A second case study shows Pindrop deploying its Pindrop Pulse product to help a large U.S. insurer detect deepfakes and synthetic voice activity in its contact center. The Hong Kong incident had spooked the insurer into strengthening its deepfake defenses.

“The insurer had been using Pindrop Protect to detect fraud attempts and Pindrop Passport to authenticate customers in the contact center,” says the study. “To strengthen its contact center authentication strategy, the insurer realized it needed to expand the ecosystem to include an AI-based deepfake detection solution like Pindrop Pulse.”

The story is the same: traditional authentication solutions are no longer reliable. Pindrop’s advice is to get ahead of the problem and implement effective deepfake detection and fraud prevention before an incident leads to major losses. No one wants to be the next canton of Schwyz.
