Voice cloning tools give rise to cacophony of impersonation fraud

Consumer Reports petition urges FTC to crack down on voice cloning companies

In the Hunger Games franchise, engineered mutant birds called jabberjays drive people to madness by mimicking the voices of their loved ones in pain. This is an apt metaphor for the wave of AI voice cloning fraud plaguing U.S. consumers, who collectively lost nearly $3 billion to imposter scams in 2023 alone. Many of these target the elderly, sometimes posing as relatives in need.

In response, more than 75,000 people have signed a petition, delivered by Consumer Reports, urging the Federal Trade Commission (FTC) to hold accountable companies whose biometric AI voice cloning products enable scams and impersonation, and to “protect all Americans from these kinds of deepfakes.”

“AI voice cloning tools are making it easier than ever for scammers to impersonate someone’s voice,” says Grace Gedye, policy analyst for AI issues at Consumer Reports. “These AI-enabled scams are increasingly difficult to detect, are costing consumers real money, and can present a threat to our national security as we recently saw when someone impersonated Secretary of State Marco Rubio. We urgently need proper oversight and guardrails for this technology.”

“We are calling on the FTC, as well as national and state policymakers, to investigate AI voice cloning companies with insufficient guardrails and address the dangers this emerging technology presents to consumers.”

The petition’s specific asks include a call to “use Section 5 powers to investigate companies that facilitate voice-cloning scams and hold them accountable,” to “recommence work on the Individual Impersonation rulemaking (SNPRM, R207000),” and a request that state Attorneys General “use their laws and enforcement tools to investigate these voice cloning apps, and to hold companies accountable if they are not doing enough to protect consumers.”

Microsoft further enhances Azure AI Speech

Presumably, the best thing companies could do to protect customers would be to slow the development of cheap, freely available voice cloning tools. Alas, the AI gold rush continues, and large companies continue to develop more sophisticated biometric voice technology.

The Register reports that new capabilities in Microsoft’s Azure AI Speech allow users to rapidly generate a voice replica with just a few seconds of sampled speech.

The new zero-shot text-to-speech model, named “DragonV2.1Neural,” produces more natural-sounding and expressive voices, and will generate audio in more than 100 supported languages.

Microsoft says it “unlocks a wide range of applications, from customizing chatbot voices to dubbing video content in an actor’s original voice across multiple languages, enabling truly immersive and individualized audio experiences.”

It says its policies require anyone whose voice is reproduced to have given consent. How it intends to enforce that is an open question.

As to what Silicon Valley’s giants think about the potential for easy voice cloning or synthesis to supercharge fraud, one need look no further than OpenAI’s Sam Altman, who recently threw some dirt at voice authentication efforts, saying “apparently there are some financial institutions that will accept the voiceprint as authentication. That is a crazy thing to still be doing. Like, AI has fully defeated that.”
