Startup develops voice imitation software
Montreal-based technology startup Lyrebird has developed software that the company says can imitate any individual’s voice and make it say anything, as well as reflect the desired emotion, stress and intonation, according to a report by Scientific American.
From a 60-second recording of a person’s voice, Lyrebird’s “voice imitation algorithm” can extract all the information it needs to impersonate the speaker.
The company’s website offers three samples of audio recordings generated by the software impersonating Donald Trump, Barack Obama and Hillary Clinton.
The technology relies on “artificial neural networks – which use algorithms designed to help them function like a human brain – that rely on deep-learning techniques to transform bits of sound into speech,” according to Scientific American.
Developed by researchers at the Montreal Institute for Learning Algorithms (MILA), a laboratory affiliated with the University of Montreal, Lyrebird allows users to create entire dialogues in the desired voice or even design a voice completely from scratch.
The company also says on its website that it offers “a large catalog of different voices” and lets users “design their own unique voices tailored for their needs.”
“We can generate thousands of sentences in one second, which is crucial for real-time applications,” said Alexandre de Brébisson, Lyrebird co-founder. “Lyrebird also adds the possibility of copying a voice very fast and is language agnostic.”
The technology raises obvious privacy and security concerns, especially since many banks and other financial institutions use voice-recognition systems to authenticate customers for online and phone banking.
Lyrebird addresses these issues in the Ethics section of its website, stating that its technology “could potentially have dangerous consequences such as misleading diplomats, fraud and more generally any other problem caused by stealing the identity of someone else.”
“We hope that everyone will soon be aware that such technology exists and that copying the voice of someone else is possible… and by releasing our technology publicly and making it available to anyone, we want to ensure that there will be no such risks,” Lyrebird’s website reads.
The technology has been met with mixed feedback from the security community. IBM Resilient CTO Bruce Schneier said that fake audio clips have become “the new reality.”
“Imagine the social engineering implications of an attacker on the telephone being able to impersonate someone the victim knows,” Schneier wrote on his blog. “I don’t think we’re ready for this.”