Audio cloning can take over a phone call in real time without the speakers knowing

Generative AI could be listening to your phone calls and hijacking them with fake biometric audio for fraud or manipulation purposes, according to new research published by Security Intelligence. In the wake of a Hong Kong fraud case that saw an employee transfer US$25 million in funds to five bank accounts after a virtual meeting with what turned out to be audio-video deepfakes of senior management, the biometrics and digital identity world is on high alert, and the threats are growing more sophisticated by the day.

A blog post by Chenta Lee, chief architect of threat intelligence at IBM Security, breaks down how researchers from IBM X-Force successfully intercepted and covertly hijacked a live conversation, using an LLM to understand the conversation and manipulate it for malicious purposes – without the speakers knowing it was happening.

“Alarmingly,” writes Lee, “it was fairly easy to construct this highly intrusive capability, creating a significant concern about its use by an attacker driven by monetary incentives and limited to no lawful boundary.”

Hack used a mix of AI technologies and a focus on keywords

By combining large language models (LLMs), speech-to-text, text-to-speech and voice cloning tactics, X-Force was able to dynamically modify the context and content of a live phone conversation. The method eschewed the use of generative AI to create a wholly fake voice and focused instead on replacing keywords in context – for example, masking a spoken real bank account number with an AI-generated one. The tactic can be deployed through a number of vectors, such as malware or compromised VoIP services. A three-second audio sample is enough to create a convincing voice clone, and the LLM takes care of parsing and semantics.
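The keyword-replacement step described above can be sketched in a few lines. This is a hypothetical illustration, not X-Force's actual code: the function names and the naive account-number pattern are assumptions, and in the real attack an LLM handles parsing while a cloned text-to-speech voice re-synthesizes the modified transcript.

```python
import re

# Naive matcher standing in for the LLM's semantic understanding of
# "this token is a bank account number" (illustrative assumption).
ACCOUNT_PATTERN = re.compile(r"\b\d{8,12}\b")

def hijack_transcript(transcript: str, attacker_account: str) -> str:
    """Swap any spoken account number for the attacker's, leaving the
    rest of the sentence – and so the conversational context – intact."""
    return ACCOUNT_PATTERN.sub(attacker_account, transcript)

# The modified text would then be fed to a TTS model cloned from
# ~3 seconds of the victim's voice and injected back into the call.
original = "Please wire the funds to account 12345678 by Friday."
print(hijack_transcript(original, "99999999"))
```

Because only one token changes, the surrounding conversation stays authentic, which is exactly why Lee notes the manipulation is so hard to detect.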

“It is akin to transforming the people in the conversation into dummy puppets,” writes Lee. “And due to the preservation of the original context, it is difficult to detect.” With advanced social engineering added to the mix, the size of the attack surface only grows. Outside of fraud, Lee also points to the potential for a new kind of real-time censorship, which could have dire implications for political discourse, journalism and the general fabric of reality.

In light of the ease with which they were able to create a successful proof of concept for dynamic voice hijacking, Lee says it is crucial to recognize that “trusted and secure AI is not confined to the AI models themselves. The broader infrastructure must be a defensive mechanism for our AI models and AI-driven attacks.”

Pindrop says software identifies deepfakes more effectively than humans

According to Pindrop, a further complication is that humans are not very good at detecting fake speech. Writing on the firm's blog, Head of Brand and Digital Experience Laura Fitzgerald cites new research from UCL showing that humans could only detect artificially generated speech 73 percent of the time.

“Using generative AI technology, bad actors can inject voice into real-time streams, leading to significant fraud loss, the spread of misinformation, and damaged brand reputation,” writes Fitzgerald. The firm says its biometric voice engine, Pindrop Pulse, can outperform humans at deepfake detection.

“In our lab testing with 11 million sample test data sets, Pindrop Pulse can detect a deepfake 99 percent of the time,” says Fitzgerald. Pindrop's technology processes a call's metadata to generate predictions and risk scores, while its Passport software provides additional risk analysis based on multiple inputs. Risk APIs display liveness scores in real time, and policies can be calibrated to filter deepfake calls.
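The policy layer described above – combining a real-time liveness score with metadata-based risk to filter calls – might look conceptually like the following. This is a minimal sketch under assumed inputs and thresholds, not Pindrop's actual API or scoring model.

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    liveness_score: float  # 0.0 = likely synthetic, 1.0 = likely live human
    metadata_risk: float   # 0.0 = low risk, 1.0 = high risk (hypothetical scale)

def classify_call(signals: CallSignals,
                  liveness_floor: float = 0.6,
                  risk_ceiling: float = 0.8) -> str:
    """Calibrated policy: flag a call when liveness is too low or
    metadata-derived risk is too high. Thresholds are illustrative."""
    if signals.liveness_score < liveness_floor or signals.metadata_risk > risk_ceiling:
        return "flag_for_review"
    return "allow"

print(classify_call(CallSignals(liveness_score=0.2, metadata_risk=0.9)))
print(classify_call(CallSignals(liveness_score=0.95, metadata_risk=0.1)))
```

The point of a tunable policy, rather than a fixed cutoff, is that institutions can trade false positives against fraud exposure as synthetic-speech quality improves.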

The capabilities of AI and LLMs are increasing at speed. “AI performance on benchmark charts can show that it’s surpassed humans at several tasks,” writes Fitzgerald. “And the rate at which humans are being surpassed at new tasks is increasing.” Defenses must be nimble and adaptable, as the curve trends upward into unknown territory.



