Training dataset Tower of Babel collected for voice AI development
A Chinese AI data services vendor claims to have built speech training datasets in at least 30 languages, an achievement that could make rolling out a multi-language voice biometrics product more efficient.
Datatang executives say their speech recognition datasets are created with native speakers and surpass data quality standards. The company says it gathered signed authorization agreements from participants before collecting the data.
Failure to obtain consent from subjects for inclusion in datasets used to train biometrics and other algorithms has long been seen as a point of ethical failure within the AI community.
Among the languages covered are German, Spanish, Korean, French, Hindi and Japanese.
The Japanese set is just shy of 1,000 hours of spoken language, pitched at in-vehicle and smart home devices.
The Spanish set holds 3,000 hours spoken by natives of Spain, Mexico, Colombia, Venezuela and other nations. It, too, is aimed at vehicle and home use.
The Korean dataset, at about 2,000 hours, instead covers speech relevant to economics, news and entertainment.
Last fall, Microsoft and Nvidia said they had trained the Megatron-Turing Natural Language Generation (MT-NLG) model, which performs language tasks including natural language inference.