Voice replication feature identified as potential threat to voice biometrics
With Apple’s iOS 17 upgrade, iPhone users can now sign up for the Personal Voice beta, a voice generator that Apple categorizes as an accessibility feature for people at risk of speech loss. But with deepfake fraud on the rise, critics are raising concerns that voice replication technology could be used to circumvent voice biometrics.
Using 15 to 25 minutes of randomized voice prompts, Apple’s accessibility tool generates a voice that can repeat anything users type. As a security precaution, Apple stores the voice data on the local device rather than in the cloud.
Still, just by triple-clicking the side button of a locked iPhone, anyone can access the voice stored on the device, found Pratik Navani, a Radio New Zealand (RNZ) staff member who signed up for the beta.
“If a phone is idly sitting somewhere it would be very easy for someone to… get your phone to say anything in your voice, record it on another device and have that recording ready to go,” he says in an interview with RNZ. He was able to use his replicated voice to bypass his bank’s phone voice authentication system.
“If this voice feature is available before biometric authentication, that’s something that Apple should be looking at. And that’s what the beta cycle is for,” said Jonathan Mosen, a disability advocate, also in an RNZ interview.
Mosen says that Navani’s claim that he ‘tricked’ his banking system with the technology “is a serious mischaracterization of this technology… What happened is the system worked exactly as intended. Disabled people are perfectly entitled to access banking technology like anyone else.”
Mosen argues that Navani used the feature as intended “so that his voice could be available at times when maybe his actual physical voice would not be.”
The real concern, Mosen claims, is when individuals replicate other people’s voices.
That concern is bound to increase after Meta announced its Voicebox generator tool and, in the same breath, declined to release it to the public, citing voice fraud concerns.
Opportunities to scam voice authentication systems may also be growing. PayPal is considering authenticating users with voice biometrics, as seen in a patent filing spotted by Pymnts.
Research by the publication also found that more than half of consumers (54 percent) would use voice to complete transactions faster than they can by typing or using a touchscreen.
“If given the choice, opting for facial biometric verification over vocal biometric verification or a one-time (text) code is the safest, most effective way to protect you from fraudsters utilizing deep fakes to hack into accounts,” says Ajay Amlani, president, head of Americas at iProov, in an interview with U.S. News.
Article Topics
biometrics | deepfakes | financial services | fraud prevention | iProov | patents | Paypal | voice biometrics