‘Grammy said I could stay up late!’ Amazon hints at deepfake voices as family bonding
Amazon executives say they want to give their Alexa voice assistant the ability to mimic any voice after training on less than a minute of audio.
The company is leaning hard on emotion, according to reporting by Reuters. A senior vice president is quoted as saying Amazon was inspired to build the feature because “so many of us have lost someone we love” to the pandemic.
Too much? The company that now wants to “make the memories last” showed a promotional video at its re:Mars conference in which a child says, “Alexa, can Grandma finish reading me The Wizard of Oz?”
Sadly, no video evidence of that heart-warmer could be found online, maybe because more than one news site covering the conference found the development “creepy.”
This is not the first time someone has made news with a voice deepfake.
In 2020, fraudsters used voice cloning to dupe a bank manager in the United Arab Emirates and steal $35 million.
Then there was the fraudster who, in 2019, swindled a British energy firm out of almost $250,000 the same way.
There is also a startup in this field, Descript, which claims it can produce “ultra-realistic voice cloning.” In one example, the voice of a young woman? Ends phrases like questions? So we can count on hearing? Even more of that annoying affected style?
Interest in being able to call out a faked voice is growing, too.
Researchers from Ruhr University Bochum in Germany published a report in January with suggestions on how to tackle voice deepfakes, built around a novel dataset.