Maker confesses to Biden audio deepfakes flagged by Pindrop

The source of the fake Joe Biden robocalls that have rattled U.S. politics has been revealed, as lawmakers continue trying to shore up legal defenses against a threat looming large ahead of a contentious U.S. federal election.

Pindrop teams up with voice cloning firm Respeecher on ethical use

With the distinction of having identified the original AI engine used to create the Joe Biden deepfake audio, Pindrop has partnered with Respeecher, a voice cloning firm that services the creative content industry, to promote the ethical use of generative AI.

A blog post by Pindrop’s Chief Product Officer Rahul Sood says the partnership will allow the companies to share research tools and data in the interest of optimizing deepfake detection technology by working closely with voice cloning systems.

“Voice clones, used ethically, can create more social and consumer engagement, help patients with speech disabilities recover their voice, dub content in a different language or voice a new character,” writes Sood. “However, the same voice cloning technology, in the hands of bad actors, can be used for nefarious purposes such as financial fraud, impersonating family members, or audiojacking live conversations.

“With the increasing sophistication of AI, the risk of more realistic voice clones and scaled-up organized attacks poses a major challenge. It is imperative that voice cloning providers design their solutions to avoid causing harm, upholding their high standards for the ethical use of AI and avoiding cloning voices without permission.”

The tech will only continue to improve, as fraudsters develop real-time voice conversion tactics that reduce artifacts and pauses to create a more human-sounding flow – and defenses must improve with it, says Sood. Pindrop’s real-time detection analyzes a voice 8000 times a second for artifacts that reflect the unique speech qualities of both human and machine vocal mechanics.
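Pindrop's actual pipeline is proprietary, but the general idea of slicing audio into short frames and scoring each for spectral oddities can be illustrated with a toy sketch. Everything here (the function name, the 8 kHz frame rate, and the spectral-flatness feature) is an illustrative assumption, not Pindrop's method; production detectors use far richer features and trained models.

```python
import numpy as np

def artifact_scores(samples: np.ndarray, frame: int = 160) -> np.ndarray:
    """Toy per-frame 'artifact' score over 8 kHz audio.

    Uses spectral flatness (geometric mean / arithmetic mean of the
    magnitude spectrum) as a stand-in feature: tonal, speech-like frames
    score low, noise-like frames score near 1.
    """
    scores = []
    for start in range(0, len(samples) - frame + 1, frame):
        # Window the frame to reduce spectral leakage before the FFT.
        window = samples[start:start + frame] * np.hanning(frame)
        mag = np.abs(np.fft.rfft(window)) + 1e-12
        flatness = np.exp(np.mean(np.log(mag))) / np.mean(mag)
        scores.append(flatness)
    return np.array(scores)
```

A real detector would feed hundreds of such frame-level features per second into a classifier trained on known human and machine speech, rather than thresholding a single statistic.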

“Pindrop is excited about partnering with Respeecher so deepfake detection can stay ahead of the curve to protect against voice clones,” says Sood. “This partnership will help Pindrop in our mission to promote trust in every call for businesses like banks, insurance firms, or healthcare providers.”

Or – in at least one case – governments.

Creator of robocall deepfakes confesses

CNN reports that Steve Kramer, a former political consultant for Democratic Rep. Dean Phillips, has admitted to hiring a New Orleans street magician to create fake audio of Joe Biden advising New Hampshire residents not to vote in the state's primary election. Kramer, who says the voice deepfake was created with "easy to use online technology," then sent it out in an automated robocall to thousands of voters.

The easy-to-use technology in question is ElevenLabs, which Pindrop’s audio deepfake detection software identified as the text-to-speech (TTS) AI engine used to generate the Biden robocall audio. Kramer claims it took less than half an hour to create the fake voice content with ElevenLabs. The precise role of the magician remains unclear.

Pindrop subsequently explained how its voice biometrics engine identifies fakes like the Biden spoof.

Coverage of Pindrop's success in confirming the deepfake in the consumer press is a feather in the company's public relations cap.

Oregon passes bill requiring disclosure of AI campaign materials

Considered the first major instance of AI being used to attempt voter suppression, the Biden robocall has set off a cascade of legislation.

This week, Oregon became the latest state to advance a bill regulating deepfake AI technology and protecting against its misuse. According to a report from Oregon Capital Chronicle, Senate Bill 1571 would require campaigns to disclose whether their campaign materials use AI or other digital technology to manipulate an image, audio or video in an attempt to influence voters. If passed, the bill would also enable the secretary of state or attorney general to block the use of AI-generated campaign materials that have not been disclosed as such, and to impose a fine of up to $10,000 per violation.

“These synthetic media can be used to create false narratives, impersonate public figures and manipulate public opinion in ways that aren’t immediately discernible to the average viewer,” says Aaron Woods, an Oregon state senator, about AI.

While some agree with Woods but caution against concentrating the power to rule on such matters in a single authority, others are leaning into anxiety: Senator Dennis Linthicum voted against the bill on the grounds that AI is being used by "the ruling elite" to control people around the world. "That may make everyone think I have a tin foil hat on my head, but I don't," Linthicum says. "I have a well-reasoned, logical argument."

Is this tinfoil hat real, or fake: take an audio deepfake test

Public awareness of deepfakes is growing, with each critical incident driving media coverage on top of legislative action. To gauge how convincing voice deepfake technology has become, The Guardian has turned to one of online tech's bygone trends: the online quiz. The paper's journalists used Parrot AI, "an app with audio renditions of public figures that users can input words into," to create a number of phrases spoken by fake and/or real Trumps and Bidens; users can guess which clips are real and which are generated. (For the record, this reporter failed at number two.)
