
Baidu researchers compare voice cloning methods

Scientists with Baidu Research’s Deep Voice project have published a new study on the relative merits of “speaker adaptation” and “speaker encoding” as voice cloning methods.

“Neural Voice Cloning with a Few Samples” (PDF) suggests that the different strengths of the two methods make each one appropriate for certain applications.

In speaker adaptation, a multi-speaker generative model is fine-tuned by applying backpropagation-based optimization to several cloning samples. This method enables speaker representation with a lower number of parameters, with the trade-offs of longer cloning time and lower audio quality.
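For illustration, the following is a minimal sketch of embedding-only speaker adaptation in PyTorch, assuming a pretrained multi-speaker model conditioned on a per-speaker embedding. The toy model, dimensions, and loss are illustrative stand-ins, not the architecture or training setup described in the paper.

```python
import torch
import torch.nn as nn

class ToyMultiSpeakerTTS(nn.Module):
    """Stand-in for a pretrained multi-speaker generative model."""
    def __init__(self, embed_dim=16, mel_dim=80):
        super().__init__()
        # Placeholder for the real decoder; conditions output frames on the embedding.
        self.decoder = nn.Linear(embed_dim, mel_dim)

    def forward(self, speaker_embedding, num_frames):
        frame = self.decoder(speaker_embedding)           # (mel_dim,)
        return frame.unsqueeze(0).expand(num_frames, -1)  # (num_frames, mel_dim)

model = ToyMultiSpeakerTTS()
for p in model.parameters():
    p.requires_grad_(False)  # embedding-only adaptation: the base model stays frozen

# New speaker's embedding, initialized randomly and fitted to the cloning samples.
speaker_embedding = torch.randn(16, requires_grad=True)
optimizer = torch.optim.Adam([speaker_embedding], lr=1e-2)

# A handful of "cloning samples": random mel-spectrogram targets of varying length.
cloning_samples = [torch.randn(n, 80) for n in (120, 90, 150)]

for step in range(200):
    optimizer.zero_grad()
    loss = sum(
        nn.functional.l1_loss(model(speaker_embedding, target.shape[0]), target)
        for target in cloning_samples
    )
    loss.backward()  # backpropagation-based optimization on the cloning samples
    optimizer.step()
```

The key point of the method is that only a small set of per-speaker parameters (here, the embedding alone) is optimized for each new voice, which is why cloning takes iterative training time.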

Speaker encoding, in which a separate model is trained to directly infer a new speaker embedding, retrieves speaker identity information from each audio sample with “time-and-frequency-domain processing blocks.” This approach enables fast cloning with a small number of per-speaker parameters, which the researchers say makes it favorable for low-resource deployments.
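A similarly hedged sketch of the speaker-encoding idea follows: a separate network maps a new speaker’s cloning samples directly to an embedding, with no per-speaker optimization. The layers below only gesture at the paper’s “time-and-frequency-domain processing blocks”; the published architecture differs.

```python
import torch
import torch.nn as nn

class ToySpeakerEncoder(nn.Module):
    """Stand-in encoder that infers a speaker embedding from mel-spectrograms."""
    def __init__(self, mel_dim=80, hidden=128, embed_dim=16):
        super().__init__()
        self.freq_proc = nn.Linear(mel_dim, hidden)                 # per-frame (frequency-domain) processing
        self.time_proc = nn.GRU(hidden, hidden, batch_first=True)   # temporal processing
        self.to_embedding = nn.Linear(hidden, embed_dim)

    def forward(self, mels):
        # mels: (num_samples, frames, mel_dim), one row per cloning sample
        h = torch.relu(self.freq_proc(mels))
        _, last = self.time_proc(h)                # summarize each sample over time
        per_sample = self.to_embedding(last.squeeze(0))
        return per_sample.mean(dim=0)              # average over the speaker's samples

encoder = ToySpeakerEncoder()

# Three cloning samples from the same (hypothetical) speaker, padded to equal length.
samples = torch.randn(3, 120, 80)
new_speaker_embedding = encoder(samples)  # inferred in one forward pass, no fine-tuning
print(new_speaker_embedding.shape)        # torch.Size([16])
```

Because the embedding is produced by a single forward pass through a shared encoder, cloning a new voice requires no gradient updates, which is the source of the speed and memory advantage the researchers describe.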

The researchers expect voice cloning to be used to personalize human-machine interactions. As voice authentication applications grow in number and scale, voice cloning could also force them to supplement voice recognition with other methods and modalities, such as behavioral biometrics.
