Note: The project page for this work consists of 33 autoplaying high-res videos totaling half a gigabyte, which destabilized my system on load. For that reason, I won't link to it directly. Readers can find the URL in the paper's abstract or PDF if they choose.
One of the main objectives in current video synthesis research is producing a complete AI-driven video performance from a single image. This week a new paper from Bytedance Intelligent Creation outlined what may be the most comprehensive system of this kind to date, capable of producing full- and semi-body animations that combine expressive facial detail with accurate large-scale motion, while also achieving improved identity consistency – an area where even leading commercial systems often fall short.
In the example below, we see a performance driven by an actor (top left) and derived from a single image (top right), which provides a remarkably versatile and dexterous rendering, with none of the usual issues around creating large movements or 'guessing' about occluded areas (i.e., parts of clothing and facial angles that must be inferred or invented because they are not visible in the sole source image):
Click to play (with audio). A performance is born from two sources, including lip-sync, which is usually the preserve of dedicated ancillary systems. This is a reduced version from the source site (see note at the beginning of the article – this applies to all other embedded videos here).
Though we can see some residual challenges regarding persistence of identity as each clip proceeds, this is the first system I have seen that excels in generally (though not always) maintaining ID over a sustained period without the use of LoRAs:
Click to play (with audio). Further examples from the DreamActor project.
The new system, titled DreamActor, uses a three-part hybrid control system that gives dedicated attention to facial expression, head rotation and core skeleton design, thus accommodating AI-driven performances where neither the facial nor the body aspect suffers at the expense of the other – a rare, arguably unknown capability among similar systems.
Below we see one of these facets, head rotation, in action. The colored ball in the corner of each thumbnail towards the right indicates a kind of virtual gimbal that defines head orientation independently of facial movement and expression, which is here driven by an actor (lower left).
Click to play. The multicolored ball visualized here represents the axis of rotation of the avatar's head, while the expression is powered by a separate module and informed by an actor's performance (seen here lower left).
One of the project's most interesting functionalities, which is not even properly covered in the paper's tests, is its capacity to derive lip-sync movement directly from audio – a capability which works unusually well even without a driving actor-video.
The researchers have taken on the best incumbents in this pursuit, including the much-lauded Runway Act-One and LivePortrait, and report that DreamActor was able to achieve better quantitative results.
Since researchers can set their own criteria, quantitative results aren't necessarily an empirical standard; but the accompanying qualitative tests appear to support the authors' conclusions.
Unfortunately this system is not intended for public release, and the only value the community can potentially derive from the work is in reproducing the methodologies outlined in the paper (as was done to notable effect for the similarly closed-source Google DreamBooth in 2022).
The paper states*:
'Human image animation has potential social risks, like being misused to make fake videos. The proposed technology could be used to create fake videos of people, but existing detection tools [Demamba, Dormant] can spot these fakes.
'To reduce these risks, clear ethical guidelines and responsible usage guidelines are needed. We will strictly restrict access to our core models and codes to prevent misuse.'
Naturally, ethical considerations of this kind are convenient from a commercial standpoint, since they provide a rationale for API-only access to the model, which can then be monetized. ByteDance has already done this once in 2025, by making the much-lauded OmniHuman available for paid credits at the Dreamina website. Therefore, since DreamActor is presumably an even stronger product, this seems the likely outcome. What remains to be seen is the extent to which its principles, so far as they are explained in the paper, can help the open source community.
The new paper is titled DreamActor-M1: Holistic, Expressive and Robust Human Image Animation with Hybrid Guidance, and comes from six Bytedance researchers.
Methodology
The DreamActor system proposed in the paper aims to generate human animation from a reference image and a driving video, using a Diffusion Transformer (DiT) framework adapted for latent space (apparently some flavor of Stable Diffusion, though the paper cites only the landmark 2022 release publication).
Rather than relying on external modules to handle reference conditioning, the authors merge appearance and motion features directly inside the DiT backbone, allowing interaction across space and time through attention:

Schema for the new system: DreamActor encodes pose, facial motion, and appearance into separate latents, combining them with noised video latents produced by a 3D VAE. These signals are fused inside a Diffusion Transformer using self- and cross-attention, with shared weights across branches. The model is supervised by comparing denoised outputs to clean video latents. Source: https://arxiv.org/pdf/2504.01724
To do this, the model uses a pretrained 3D variational autoencoder to encode both the input video and the reference image. These latents are patchified, concatenated, and fed into the DiT, which processes them jointly.
This architecture departs from the common practice of attaching a secondary network for reference injection, which was the approach for the influential Animate Anyone and Animate Anyone 2 projects.
Instead, DreamActor builds the fusion into the main model itself, simplifying the design while improving the flow of information between appearance and motion cues. The model is then trained using flow matching rather than the standard diffusion objective (flow matching trains diffusion models by directly predicting velocity fields between data and noise, skipping score estimation).
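To make that last point concrete, here is a minimal flow-matching training step in PyTorch. This is a sketch under stated assumptions, not DreamActor's actual code: `dit` is a placeholder for the Diffusion Transformer, `cond_tokens` stands in for the fused pose, face and appearance latents, and the straight noise-to-data path is simply the most common flow-matching formulation.

```python
import torch
import torch.nn.functional as F

def flow_matching_step(dit, clean_latents, cond_tokens):
    """One illustrative flow-matching update: the model predicts the velocity
    that carries a noisy latent along a straight path toward the clean latent."""
    noise = torch.randn_like(clean_latents)

    # Sample one timestep in [0, 1] per example, broadcastable over the latent dims
    t = torch.rand(clean_latents.shape[0], device=clean_latents.device)
    t_ = t.view(-1, *([1] * (clean_latents.dim() - 1)))

    # Linear interpolation between pure noise (t=0) and clean data (t=1)
    noisy = (1.0 - t_) * noise + t_ * clean_latents

    # Target velocity field for this linear path
    target_velocity = clean_latents - noise

    # The DiT predicts velocity from the noisy latent, timestep and conditioning
    pred_velocity = dit(noisy, t, cond_tokens)

    return F.mse_loss(pred_velocity, target_velocity)
```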
Hybrid Motion Guidance
The Hybrid Motion Guidance method that informs the neural renderings combines pose tokens derived from 3D body skeletons and head spheres; implicit facial representations extracted by a pretrained face encoder; and reference appearance tokens sampled from the source image.
These components are integrated within the Diffusion Transformer using distinct attention mechanisms, allowing the system to coordinate global motion, facial expression, and visual identity throughout the generation process.
For the first of these, rather than relying on facial landmarks, DreamActor uses implicit facial representations to guide expression generation, apparently enabling finer control over facial dynamics while disentangling identity and head pose from expression.
To create these representations, the pipeline first detects and crops the face region in each frame of the driving video, resizing it to 224×224. The cropped faces are processed by a face motion encoder pretrained on PD-FGC, the output of which is then conditioned by an MLP layer.

PD-FGC, employed in DreamActor, generates a talking head from a reference image with disentangled control of lip sync (from audio), head pose, eye movement, and expression (from separate videos), allowing precise, independent manipulation of each. Source: https://arxiv.org/pdf/2211.14506
The result is a sequence of face motion tokens, which are injected into the Diffusion Transformer through a cross-attention layer.
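As a rough illustration of this part of the pipeline, the sketch below crops and resizes each driving frame's face, encodes it, and projects the result into tokens. The `detect_face`, `face_motion_encoder` and `mlp` callables are hypothetical stand-ins; the PD-FGC-style encoder itself is not publicly packaged in this form.

```python
import torch
import torch.nn.functional as F

def extract_face_motion_tokens(frames, detect_face, face_motion_encoder, mlp):
    """Illustrative pipeline: crop the face in each driving frame, resize it to
    224x224, encode the crop, and project the feature into a face motion token."""
    tokens = []
    for frame in frames:                        # frame: (3, H, W) float tensor
        x0, y0, x1, y1 = detect_face(frame)     # hypothetical detector output
        crop = frame[:, y0:y1, x0:x1].unsqueeze(0)
        crop = F.interpolate(crop, size=(224, 224),
                             mode="bilinear", align_corners=False)
        motion_feat = face_motion_encoder(crop)  # expression-focused feature
        tokens.append(mlp(motion_feat))          # project to the DiT token width
    # Stack to (T, D); downstream, these tokens enter the DiT via cross-attention
    return torch.cat(tokens, dim=0)
```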
The same framework also supports an audio-driven variant, in which a separate encoder is trained to map speech input directly to face motion tokens. This makes it possible to generate synchronized facial animation – including lip movements – without a driving video.
Click to play (with audio). Lip-sync derived purely from audio, without a driving actor reference. The sole character input is the static image seen upper right.
Secondly, to control head pose independently of facial expression, the system introduces a 3D head sphere representation (see the video embedded earlier in this article), which decouples facial dynamics from global head movement, improving precision and flexibility during animation.
Head spheres are generated by extracting 3D facial parameters – such as rotation and camera pose – from the driving video using the FaceVerse tracking method.

Schema for the FaceVerse project. Source: https://www.liuyebin.com/faceverse/faceverse.html
These parameters are used to render a color sphere projected onto the 2D image plane, spatially aligned with the driving head. The sphere's size matches the reference head, and its color reflects the head's orientation. This abstraction reduces the complexity of learning 3D head motion, helping to preserve stylized or exaggerated head shapes in characters drawn from animation.

Visualization of the control sphere influencing head orientation.
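The paper does not spell out the exact color coding, but the general idea can be sketched as follows: render a filled disc whose position and size follow the tracked head, with the Euler angles mapped to color channels. The angle-to-color mapping here is an assumption for illustration only.

```python
import numpy as np
import cv2

def render_head_sphere(canvas, center_xy, radius, yaw, pitch, roll):
    """Illustrative head-sphere control frame: a disc aligned with the driving
    head, whose color encodes head orientation (assumed angle-to-RGB mapping)."""
    to_byte = lambda angle: int(255 * (angle + np.pi) / (2 * np.pi))
    color = (to_byte(roll), to_byte(pitch), to_byte(yaw))   # BGR order for OpenCV
    cv2.circle(canvas, center_xy, radius, color, thickness=-1)  # filled disc
    return canvas

# One control frame per driving frame; frames are later stacked into a video signal
canvas = np.zeros((640, 960, 3), dtype=np.uint8)
control_frame = render_head_sphere(canvas, center_xy=(480, 200), radius=40,
                                   yaw=0.3, pitch=-0.1, roll=0.0)
```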
Lastly, to guide full-body motion, the system uses 3D body skeletons with adaptive bone length normalization. Body and hand parameters are estimated using 4DHumans and the hand-focused HaMeR, both of which operate on the SMPL-X body model.

SMPL-X applies a parametric mesh over the entire human body in an image, aligning with estimated pose and expression to enable pose-aware manipulation using the mesh as a volumetric guide. Source: https://arxiv.org/pdf/1904.05866
From these outputs, key joints are selected, projected into 2D, and connected into line-based skeleton maps. Unlike methods such as Champ, which render full-body meshes, this approach avoids imposing predefined shape priors; by relying solely on skeletal structure, the model is encouraged to infer body shape and appearance directly from the reference images, reducing bias toward fixed body types and improving generalization across a range of poses and builds.
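A line-based skeleton map of this kind is easy to picture in code. The sketch below assumes a pinhole camera and a hypothetical list of joint pairs; DreamActor's exact joint selection and rendering style are not published.

```python
import numpy as np
import cv2

# Hypothetical (parent, child) joint pairs; the paper's exact joint set differs
BONES = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5), (1, 6), (6, 7)]

def draw_skeleton_map(joints_3d, intrinsics, size=(640, 960)):
    """Project 3D joints into the image plane with a pinhole camera and connect
    them into a line-based skeleton map (no mesh, hence no shape prior)."""
    height, width = size
    fx, fy, cx, cy = intrinsics
    canvas = np.zeros((height, width, 3), dtype=np.uint8)

    # Perspective projection of each (x, y, z) joint to pixel coordinates
    points_2d = []
    for x, y, z in joints_3d:
        u = int(fx * x / z + cx)
        v = int(fy * y / z + cy)
        points_2d.append((u, v))

    for parent, child in BONES:
        cv2.line(canvas, points_2d[parent], points_2d[child],
                 color=(255, 255, 255), thickness=3)
    return canvas
```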
During training, the 3D body skeletons are concatenated with the head spheres and passed through a pose encoder, which outputs features that are then combined with noised video latents to produce the noise tokens used by the Diffusion Transformer.
At inference time, the system accounts for skeletal differences between subjects by normalizing bone lengths. The SeedEdit pretrained image editing model transforms both reference and driving images into a standard canonical configuration. RTMPose is then used to extract skeletal proportions, which are used to adjust the driving skeleton to match the anatomy of the reference subject.
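The bone-length adjustment can be pictured as a simple retargeting pass over the kinematic tree. The sketch below is a generic version of that idea rather than the paper's implementation: each driving bone keeps its direction but adopts the length measured on the reference subject.

```python
import numpy as np

def normalize_bone_lengths(driving_joints, reference_joints, bones, eps=1e-8):
    """Illustrative bone-length normalization. `bones` is an ordered list of
    (parent, child) index pairs, with parents listed before their children,
    so adjusted parent positions are available when each child is processed."""
    adjusted = driving_joints.copy()
    for parent, child in bones:
        direction = driving_joints[child] - driving_joints[parent]
        direction = direction / (np.linalg.norm(direction) + eps)
        ref_length = np.linalg.norm(reference_joints[child] - reference_joints[parent])
        # Keep the driving bone's direction, adopt the reference bone's length
        adjusted[child] = adjusted[parent] + direction * ref_length
    return adjusted
```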

Overview of the inference pipeline. Pseudo-references may be generated to supplement appearance cues, while hybrid control signals – implicit facial motion and explicit pose from head spheres and body skeletons – are extracted from the driving video. These are then fed into a DiT model to produce the animated output, with facial motion decoupled from body pose, allowing for the use of audio as a driver.
Appearance Guidance
To enhance appearance fidelity, particularly in occluded or rarely seen areas, the system supplements the primary reference image with pseudo-references sampled from the input video.
Click to play. The system anticipates the need to accurately and consistently render occluded areas. This is about as close as I have seen, in a project of this kind, to a CGI-style bitmap-texture approach.
These additional frames are selected for pose diversity using RTMPose, and filtered using CLIP-based similarity to ensure they remain consistent with the subject's identity.
All reference frames (primary and pseudo) are encoded by the same visual encoder and fused through a self-attention mechanism, allowing the model to access complementary appearance cues. This setup improves coverage of details such as profile views or limb textures. Pseudo-references are always used during training and optionally during inference.
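A CLIP-based identity filter of this kind can be approximated with off-the-shelf components. The sketch below uses the Hugging Face CLIP model as a stand-in; the specific checkpoint and the 0.85 similarity threshold are assumptions, not values from the paper.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

def filter_pseudo_references(candidate_frames, primary_reference, threshold=0.85):
    """Keep only candidate frames whose CLIP image embedding stays close to the
    primary reference, as a rough proxy for identity consistency."""
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def embed(images):
        inputs = processor(images=images, return_tensors="pt")
        with torch.no_grad():
            features = model.get_image_features(**inputs)
        return torch.nn.functional.normalize(features, dim=-1)

    reference_feat = embed([primary_reference])          # (1, D)
    candidate_feats = embed(candidate_frames)             # (N, D)
    similarities = (candidate_feats @ reference_feat.T).squeeze(-1)
    return [frame for frame, sim in zip(candidate_frames, similarities)
            if sim.item() >= threshold]
```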
Training
DreamActor was trained in three stages to gradually introduce complexity and improve stability.
In the first stage, only 3D body skeletons and 3D head spheres were used as control signals, excluding facial representations. This allowed the base video generation model, initialized from MMDiT, to adapt to human animation without being overwhelmed by fine-grained controls.
In the second stage, implicit facial representations were added, but all other parameters were frozen. Only the face motion encoder and face attention layers were trained at this point, enabling the model to learn expressive detail in isolation.
In the final stage, all parameters were unfrozen for joint optimization across appearance, pose, and facial dynamics.
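In training-code terms, this staging amounts to toggling which parameter groups receive gradients. The sketch below assumes module names such as `face_motion_encoder` and `face_attention`, which are placeholders rather than DreamActor's real attribute names.

```python
def configure_training_stage(model, stage):
    """Illustrative three-stage schedule: stage 1 trains the whole backbone on
    skeleton and head-sphere control only, stage 2 trains just the face branch,
    and stage 3 unfreezes everything for joint optimization."""
    if stage == 1:
        for param in model.parameters():
            param.requires_grad = True       # face branch simply receives no input yet
    elif stage == 2:
        for param in model.parameters():
            param.requires_grad = False
        for module in (model.face_motion_encoder, model.face_attention):
            for param in module.parameters():
                param.requires_grad = True   # only the face modules learn here
    else:
        for param in model.parameters():
            param.requires_grad = True       # joint fine-tuning of all parameters
```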
Data and Tests
For the testing phase, the model was initialized from a pretrained image-to-video DiT checkpoint† and trained in three stages: 20,000 steps for each of the first two stages and 30,000 steps for the third.
To improve generalization across different durations and resolutions, video clips were randomly sampled with lengths between 25 and 121 frames. These were then resized to 960x640px, while preserving aspect ratio.
Training was carried out on eight (China-focused) NVIDIA H20 GPUs, each with 96GB of VRAM, using the AdamW optimizer with a (fairly conservative) learning rate of 5e-6.
At inference, each video segment contained 73 frames. To maintain consistency across segments, the final latent from one segment was reused as the initial latent for the next, which contextualizes the task as sequential image-to-video generation.
Classifier-free guidance was applied with a weight of 2.5 for both reference images and motion control signals.
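Taken together, the last two details suggest a sampling loop along the following lines. This is a hedged sketch: `dit` and `sample_segment` are placeholders, the unconditional call assumes the model accepts a null condition, and the sampler internals are omitted.

```python
import torch

def guided_velocity(dit, x, t, cond, cfg_scale=2.5):
    """Classifier-free guidance: push the conditional prediction away from the
    unconditional one by the guidance weight (2.5 in the paper)."""
    v_cond = dit(x, t, cond)
    v_uncond = dit(x, t, None)   # assumes None is accepted as the null condition
    return v_uncond + cfg_scale * (v_cond - v_uncond)

def generate_long_video(sample_segment, total_frames, segment_len=73):
    """Sequential image-to-video generation: each 73-frame segment is sampled in
    turn, and its final latent seeds the next segment for temporal consistency."""
    segments, init_latent = [], None
    for _ in range(0, total_frames, segment_len):
        latents = sample_segment(init_latent=init_latent, num_frames=segment_len)
        segments.append(latents)
        init_latent = latents[:, :, -1:]    # last frame latent, shape (B, C, 1, h, w)
    return torch.cat(segments, dim=2)       # concatenate along the time axis
```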
The authors constructed a training dataset (no sources are acknowledged in the paper) comprising 500 hours of video drawn from diverse domains, featuring instances of (among others) dance, sports, film, and public speaking. The dataset was designed to capture a broad spectrum of human motion and expression, with an even distribution between full-body and half-body shots.
To enhance facial synthesis quality, Nersemble was incorporated into the data preparation process.

Examples from the Nersemble dataset, used to augment the data for DreamActor. Source: https://www.youtube.com/watch?v=a-OAWqBzldU
For evaluation, the researchers also used their dataset as a benchmark to assess generalization across diverse scenarios.
The model's performance was measured using standard metrics from prior work: Fréchet Inception Distance (FID); Structural Similarity Index (SSIM); Learned Perceptual Image Patch Similarity (LPIPS); and Peak Signal-to-Noise Ratio (PSNR) for frame-level quality. Fréchet Video Distance (FVD) was used for assessing temporal coherence and overall video fidelity.
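For readers who want to reproduce the frame-level side of this evaluation, these metrics are all available off the shelf, for example via TorchMetrics. The sketch below assumes predictions and targets as float tensors in [0, 1]; FVD is omitted because it requires a separate video feature extractor such as I3D.

```python
import torch
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity

def frame_level_metrics(pred, target):
    """Frame-level quality metrics on (N, 3, H, W) float tensors in [0, 1]."""
    psnr = PeakSignalNoiseRatio(data_range=1.0)(pred, target)
    ssim = StructuralSimilarityIndexMeasure(data_range=1.0)(pred, target)
    lpips = LearnedPerceptualImagePatchSimilarity(normalize=True)(pred, target)

    fid = FrechetInceptionDistance(normalize=True)  # expects a batch per distribution
    fid.update(target, real=True)
    fid.update(pred, real=False)
    return {"psnr": psnr, "ssim": ssim, "lpips": lpips, "fid": fid.compute()}
```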
The authors conducted experiments on both body animation and portrait animation tasks, all using a single (target) reference image.
For body animation, DreamActor-M1 was compared against Animate Anyone, Champ, MimicMotion, and DisPose.

Quantitative comparisons against rival frameworks.
Though the PDF provides a static image as a visual comparison, one of the videos from the project site may highlight the differences more clearly:
Click to play (with audio). A visual comparison across the challenger frameworks. The driving video is seen top-left, and the authors' conclusion that DreamActor produces the best results seems reasonable.
For the portrait animation tests, the model was evaluated against LivePortrait, X-Portrait, SkyReels-A1, and Act-One.

Quantitative comparisons for portrait animation.
The authors note that their method wins out in the quantitative tests, and contend that it is also superior qualitatively.
Click to play (with audio). Examples of portrait animation comparisons.
Arguably the third and final of the clips shown in the video above reveals a less convincing lip-sync compared to a couple of the rival frameworks, though the general quality is remarkably high.
Conclusion
In anticipating the need for textures that are implied but not actually present in the sole target image fueling these recreations, ByteDance has addressed one of the biggest challenges facing diffusion-based video generation – consistent, persistent textures. The next logical step after perfecting such an approach would be to somehow create a reference atlas from the initial generated clip that could be applied to subsequent, different generations, maintaining appearance without LoRAs.
Though such an approach would effectively still be an external reference, this is no different from texture-mapping in traditional CGI techniques, and the quality of realism and plausibility is far higher than those older methods can obtain.
That said, the most impressive aspect of DreamActor is the combined three-part guidance system, which bridges the traditional divide between face-focused and body-focused human synthesis in an ingenious way.
It only remains to be seen whether some of these core principles can be leveraged in more accessible offerings; as it stands, DreamActor seems destined to become yet another synthesis-as-a-service offering, severely bound by restrictions on usage, and by the impracticality of experimenting extensively with a commercial architecture.
* My substitution of hyperlinks for the authors' inline citations.
† As mentioned earlier, it is not clear which flavor of Stable Diffusion was used in this project.
First published Friday, April 4, 2025