
ByteDance releases new generative AI model OmniHuman

Chinese tech company ByteDance has unveiled OmniHuman-1, a generative AI framework that can create highly realistic videos of a human from a single image and a motion signal such as audio or pose.

ByteDance’s researchers demonstrated the technology by generating several realistic human videos, including of Albert Einstein and Nvidia CEO Jensen Huang. The videos show humans talking and singing in challenging poses, gesturing with their hands, and rendered across different framings and aspect ratios, including portrait, half-body and full-body shots. The system can also animate cartoons.

The company behind TikTok says the framework surpasses existing technology, which still struggles to scale beyond animating faces or upper bodies, limiting its usefulness in real applications. According to a research paper published by the company, OmniHuman outperforms existing methods because it can generate extremely realistic human videos from weak signal inputs, especially audio.

“In OmniHuman, we introduce a multimodality motion conditioning mixed training strategy, allowing the model to benefit from data scaling up of mixed conditioning,” the researchers write. “This overcomes the issue that previous end-to-end approaches faced due to the scarcity of high-quality data.”

The researchers relied on more than 18,000 hours of human-related data for training the framework, allowing it to learn from text, audio, and body movements. This resulted in more natural-looking human videos.

“Our key insight is that incorporating multiple conditioning signals, such as text, audio and pose, during training can significantly reduce data wastage,” says the paper.

The system initially handles each input type independently, condensing movement details from text descriptions, reference images, audio signals and movement data into a compact format. It then progressively enhances this data into realistic video output, refining motion generation by comparing its results with real videos.
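The two-stage flow described above, separate per-modality encoders feeding a compact conditioning vector that then guides iterative refinement, can be illustrated with a toy sketch. This is not ByteDance's implementation; the random-projection "encoders" and the simple denoising loop are stand-ins for the learned networks the paper describes, shown only to make the condense-then-refine structure concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(signal, dim=8):
    """Condense a variable-length conditioning signal (text, audio, pose)
    into a fixed-size token. A random projection stands in for the
    learned per-modality encoders described in the paper."""
    signal = np.asarray(signal, dtype=float)
    proj = rng.standard_normal((len(signal), dim))
    return signal @ proj / np.sqrt(len(signal))

def fuse(tokens):
    """Combine per-modality tokens into one compact conditioning vector."""
    return np.concatenate(tokens)

def refine(cond, steps=100, lr=0.1):
    """Toy analogue of progressive refinement: start from noise and
    repeatedly pull the state toward the conditioning vector, the way a
    diffusion-style model iteratively denoises toward its conditions."""
    x = rng.standard_normal(cond.shape)
    for _ in range(steps):
        x = x - lr * (x - cond)  # each step reduces the gap to the condition
    return x

# Toy stand-ins for real audio and pose streams of different lengths.
audio = [0.2, -0.5, 0.9]
pose = [1.0, 0.0, -1.0, 0.5]

cond = fuse([encode(audio), encode(pose)])
out = refine(cond)
# After refinement, the output closely tracks the fused condition.
```

The point of the sketch is the shape of the pipeline: heterogeneous inputs are first mapped into a common compact format, so the refinement stage never needs to know which modality a condition came from, which is what lets mixed-condition training scale across data types.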

ByteDance has been investing in AI video generation, rivaling firms such as Meta, Microsoft and Google DeepMind. In January, the company released an upgrade to its AI model Doubao, claiming it outperforms OpenAI’s o1 on the AIME benchmark test.
