- Published on April 4, 2025
- In AI News
The framework aims to overcome the limitations of current image-based human animation methods.

ByteDance has introduced DreamActor-M1, a new framework designed to generate realistic human animations from reference images. This framework addresses key issues in current animation models to achieve finer control, greater adaptability, and better consistency.
The announcement follows closely on ByteDance's recent releases of the Goku and InfiniteYou AI models.
DreamActor-M1 is based on a Diffusion Transformer (DiT) architecture and uses a hybrid guidance approach to achieve its results. The model employs a combination of implicit facial representations, 3D head spheres, and 3D body skeletons to control facial expressions and body movements with greater precision.
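Conceptually, the hybrid guidance amounts to fusing three control signals into a single conditioning input for the diffusion backbone. The sketch below is purely illustrative, assuming NumPy arrays as stand-ins for the model's tensors; the function name `fuse_guidance` and the simple channel-stacking scheme are assumptions, not the paper's actual method.

```python
import numpy as np

def fuse_guidance(face_latent, head_sphere_map, skeleton_map):
    """Fuse three hypothetical control signals into one conditioning tensor.

    face_latent:     (d,)   implicit facial representation vector
    head_sphere_map: (H, W) rendered 3D head-sphere mask
    skeleton_map:    (H, W) rendered 3D body-skeleton map
    """
    h, w = head_sphere_map.shape
    # Broadcast a summary of the facial latent to a spatial plane
    # so all three signals share one spatial grid (illustrative only).
    face_plane = np.broadcast_to(face_latent.mean(), (h, w))
    # Stack into a 3-channel conditioning map for the diffusion backbone.
    return np.stack([face_plane, head_sphere_map, skeleton_map], axis=0)

cond = fuse_guidance(np.random.rand(64), np.zeros((32, 32)), np.ones((32, 32)))
print(cond.shape)  # (3, 32, 32)
```

In the real model, each signal would be encoded by its own network and injected via attention or feature modulation rather than naive stacking; the point here is only that face, head, and body are controlled by separate, complementary signals.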
To handle various body poses and image scales, the model is trained using a progressive strategy on a dataset with varying resolutions and scales. DreamActor-M1 integrates motion patterns from sequential frames with complementary visual references to ensure consistency over extended periods, addressing challenges with unseen regions during complex movements.
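A progressive training strategy of this kind typically means training coarse-to-fine over a schedule of increasing resolutions. The snippet below sketches that idea; the specific resolutions and step counts are made up for illustration and do not come from the paper.

```python
# Hypothetical coarse-to-fine schedule: (resolution, training steps).
# The values are illustrative, not DreamActor-M1's actual configuration.
STAGES = [(256, 1000), (512, 500), (960, 250)]

def progressive_schedule(stages):
    """Yield (resolution, step) pairs, coarse resolutions first."""
    for res, steps in stages:
        for step in range(steps):
            yield res, step

# Count how many steps each stage contributes.
seen = {}
for res, _ in progressive_schedule(STAGES):
    seen[res] = seen.get(res, 0) + 1
print(seen)  # {256: 1000, 512: 500, 960: 250}
```

Training on low resolutions first lets the model learn global pose structure cheaply before fine details, which is the usual rationale for such schedules.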
The research paper compares DreamActor-M1 with several state-of-the-art human image animation models. For body animation, DreamActor-M1 was compared against Animate Anyone, Champ, MimicMotion, and DisPose.

In portrait animation, the model was evaluated alongside LivePortrait, X-Portrait, SkyReels-A1, and Runway Act-One.

In these comparisons, DreamActor-M1 outperformed the existing methods, producing more expressive and temporally consistent animations.
The researchers also acknowledged that these AI models can be misused. They stated, “To reduce these risks, clear ethical rules and responsible usage guidelines are necessary. We will strictly restrict access to our core models and codes to prevent misuse. Images and videos are all from publicly available sources.”
While DreamActor-M1 represents a significant advancement, the researchers acknowledge certain limitations. The model struggles with controlling dynamic camera movements and generating physical interactions with environmental objects. Their future work aims to address these challenges and further enhance the model’s capabilities.
Ankush Das