
ByteDance releases new generative AI model OmniHuman

Chinese tech company ByteDance has unveiled OmniHuman-1, a generative AI framework that can create highly realistic videos of a human from a single image and a motion signal.

ByteDance’s researchers demonstrated the technology by generating several realistic human videos, including clips of Albert Einstein and Nvidia CEO Jensen Huang. The videos show subjects talking and singing in challenging body poses, including gesturing with their hands, and in different aspect ratios and framings such as portrait, half-body and full-body. The system can also animate cartoons.

The company behind TikTok says the framework beats existing technology, which still struggles to scale beyond animating faces or upper bodies, limiting its potential in real applications. OmniHuman outperforms existing methods because it can generate extremely realistic human videos from weak signal inputs, especially audio, according to a research paper published by the company.

“In OmniHuman, we introduce a multimodality motion conditioning mixed training strategy, allowing the model to benefit from data scaling up of mixed conditioning,” the researchers write. “This overcomes the issue that previous end-to-end approaches faced due to the scarcity of high-quality data.”

The researchers relied on more than 18,000 hours of human-related data for training the framework, allowing it to learn from text, audio, and body movements. This resulted in more natural-looking human videos.

“Our key insight is that incorporating multiple conditioning signals, such as text, audio and pose, during training can significantly reduce data wastage,” says the paper.
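To make that idea concrete, the sketch below shows one way such mixed-condition training could be set up in PyTorch: text, audio and pose features are projected into a shared token space and randomly dropped per sample, so clips that only carry weaker signals still contribute to training. The module names, dimensions and keep probabilities are illustrative assumptions, not ByteDance's actual implementation.

# Illustrative sketch only: mixed multimodal condition training as described
# in the quote above. Dimensions and drop ratios are assumed for illustration.
import torch
import torch.nn as nn

class MixedConditionEncoder(nn.Module):
    """Projects text, audio and pose features into one shared token space,
    randomly dropping conditions per sample so clips that lack a given
    signal (or only carry weaker ones) still contribute to training."""

    def __init__(self, dim=256, text_dim=512, audio_dim=128, pose_dim=64):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, dim)
        self.audio_proj = nn.Linear(audio_dim, dim)
        self.pose_proj = nn.Linear(pose_dim, dim)
        # Per-condition keep probabilities (assumed values): stronger signals
        # such as pose are dropped more often, so the model also learns to
        # follow weaker signals such as audio.
        self.keep_prob = {"text": 0.9, "audio": 0.7, "pose": 0.5}

    def forward(self, text, audio, pose):
        conds = {
            "text": self.text_proj(text),
            "audio": self.audio_proj(audio),
            "pose": self.pose_proj(pose),
        }
        fused = []
        for name, tokens in conds.items():
            if self.training:
                # Bernoulli mask per sample: a dropped condition contributes zeros.
                keep = (torch.rand(tokens.shape[0], 1, 1) < self.keep_prob[name]).float()
                tokens = tokens * keep
            fused.append(tokens)
        # Concatenate along the token axis into one compact conditioning sequence.
        return torch.cat(fused, dim=1)

# Usage: a batch of 4 clips, each condition given as a short token sequence.
enc = MixedConditionEncoder()
cond = enc(torch.randn(4, 8, 512), torch.randn(4, 16, 128), torch.randn(4, 12, 64))
print(cond.shape)  # torch.Size([4, 36, 256])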

The system initially handles each input type independently, condensing movement details from text descriptions, reference images, audio signals and movement data into a compact format. It then progressively enhances this data into realistic video output, refining motion generation by comparing its results with real videos.
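That refinement stage can be pictured as a diffusion-style denoising loop: starting from noise, a network conditioned on the fused signal repeatedly predicts and removes noise until a clean latent video remains, and during training its predictions are checked against real videos. The minimal sketch below illustrates such a loop; the network, shapes and update rule are simplified assumptions rather than the OmniHuman architecture.

# Illustrative sketch only: progressive refinement written as a generic
# diffusion-style denoising loop. All components are assumptions.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Predicts the noise to remove from a latent video clip, given the
    compact conditioning sequence produced from text/image/audio/pose."""
    def __init__(self, latent_dim=64, cond_dim=256):
        super().__init__()
        self.cond_proj = nn.Linear(cond_dim, latent_dim)
        self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.GELU(),
                                 nn.Linear(128, latent_dim))

    def forward(self, latents, cond):
        # Condition by adding a pooled summary of the conditioning tokens.
        summary = self.cond_proj(cond.mean(dim=1, keepdim=True))
        return self.net(latents + summary)

@torch.no_grad()
def generate(model, cond, frames=16, latent_dim=64, steps=30):
    """Start from noise and progressively refine toward a clean latent video."""
    x = torch.randn(cond.shape[0], frames, latent_dim)
    for step in range(steps):
        predicted_noise = model(x, cond)
        # Remove a fraction of the predicted noise each step (simplified update;
        # a real sampler would follow a proper noise schedule).
        x = x - predicted_noise / (steps - step)
    return x

model = TinyDenoiser()
cond = torch.randn(2, 36, 256)          # compact conditioning from the fusion stage
video_latents = generate(model, cond)   # (2 clips, 16 frames, 64-dim latents)
print(video_latents.shape)              # torch.Size([2, 16, 64])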

ByteDance has been investing in AI video generation, rivaling firms such as Meta, Microsoft and Google DeepMind. In January, the company released an upgrade to its AI model Doubao, claiming it outperforms OpenAI’s o1 on the AIME benchmark test.
