ByteDance releases new generative AI model OmniHuman

Chinese tech company ByteDance has come up with OmniHuman-1, a generative AI framework that can create highly realistic videos of a human from a single image and a motion signal.

ByteDance’s researchers demonstrated the technology by generating several realistic human videos, including clips of Albert Einstein and Nvidia CEO Jensen Huang. The videos show people talking and singing in challenging body positions, complete with hand gestures, and in different aspect ratios such as portrait, half-body and full-body. The system can also animate cartoons.

The company behind TikTok says the framework beats existing technology, which still struggles to scale beyond animating faces or upper bodies, limiting its potential in real applications. According to a research paper published by the company, OmniHuman outperforms existing methods because it can generate extremely realistic human videos from weak signal inputs, especially audio.

“In OmniHuman, we introduce a multimodality motion conditioning mixed training strategy, allowing the model to benefit from data scaling up of mixed conditioning,” the researchers write. “This overcomes the issue that previous end-to-end approaches faced due to the scarcity of high-quality data.”

The researchers relied on more than 18,000 hours of human-related data for training the framework, allowing it to learn from text, audio, and body movements. This resulted in more natural-looking human videos.

“Our key insight is that incorporating multiple conditioning signals, such as text, audio and pose, during training can significantly reduce data wastage,” says the paper.

The system initially handles each input type independently, condensing movement details from text descriptions, reference images, audio signals and movement data into a compact format. It then progressively enhances this data into realistic video output, refining motion generation by comparing its results with real videos.
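The mixed-conditioning idea described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not ByteDance's implementation: the `encode` stub stands in for the learned per-modality encoders, and the per-modality keep probabilities (`keep_probs`) are a hypothetical way of expressing the paper's strategy of training on stronger and weaker condition combinations (e.g. audio-only) within the same run.

```python
import random

# Hypothetical stand-in for the learned per-modality encoders: in the real
# system these are neural networks; here we just map a raw signal to a
# fixed-size list of numbers representing its compact "motion token" form.
def encode(signal, dim=4):
    rng = random.Random(hash(signal) % (2**32))
    return [rng.random() for _ in range(dim)]

def mix_conditions(sample, keep_probs, rng=random):
    """Mixed-conditioning step: independently keep or drop each modality,
    so the model also sees weaker signal combinations during training.

    sample:     dict mapping modality name ("text", "audio", "pose", ...)
                to its raw conditioning signal.
    keep_probs: dict mapping modality name to the probability of keeping
                that modality on this training step (assumed values).
    Returns the encoded conditions that survived the random drop.
    """
    kept = {}
    for modality, signal in sample.items():
        if rng.random() < keep_probs.get(modality, 1.0):
            kept[modality] = encode(signal)
    return kept
```

With keep probabilities below 1.0, successive calls yield different subsets of conditions (sometimes audio-only, sometimes all three), which is one plausible reading of how a single model can "benefit from data scaling up of mixed conditioning" rather than requiring complete high-quality annotations for every clip.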

ByteDance has been investing in AI video generation, rivaling firms such as Meta, Microsoft and Google DeepMind. In January, the company released an upgrade to its AI model Doubao, claiming it outperforms OpenAI’s o1 on the AIME benchmark test.
