Skeleton2Stage: Reward-Guided Fine-Tuning for Physically Plausible Dance Generation

Shanghai Jiao Tong University

TL;DR

Skeleton2Stage bridges the critical yet often overlooked skeleton-to-mesh gap by distilling physics-based motion priors into dance generation models through reward-guided fine-tuning.

It reduces artifacts such as body interpenetration and foot-ground contact anomalies under full-body mesh visualization.

Abstract

Despite advances in dance generation, most methods are trained in the skeletal domain and ignore mesh-level physical constraints. As a result, motions that look plausible as joint trajectories often exhibit body self-penetration and Foot-Ground Contact (FGC) anomalies when visualized with a human body mesh, reducing the aesthetic appeal of generated dances and limiting their real-world applications. We address this skeleton-to-mesh gap by deriving physics-based rewards from the body mesh and applying Reinforcement Learning Fine-Tuning (RLFT) to steer the diffusion model toward physically plausible motion synthesis under mesh visualization. Our reward design combines (i) an imitation reward that measures a motion's general plausibility by its imitability in a physical simulator (penalizing penetration and foot skating), and (ii) a Foot-Ground Deviation (FGD) reward with test-time FGD guidance to better capture the dynamic foot-ground interaction in dance. However, we find that the physics-based rewards tend to push the model toward near-static "freezing" motions, which trivially incur fewer physical anomalies and score higher imitability. To mitigate this, we propose an anti-freezing reward that preserves motion dynamics while maintaining physical plausibility. Experiments on multiple dance datasets consistently demonstrate that our method significantly improves the physical plausibility of generated motions, yielding more realistic and aesthetically pleasing dances.
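To make the reward combination concrete, here is a minimal, illustrative sketch of how the FGD and anti-freezing terms described above might be shaped and combined. All function names, reward forms, and weights are hypothetical stand-ins, not the paper's actual implementation (the real imitation reward comes from rolling out the motion in a physical simulator, which is omitted here):

```python
import numpy as np

def fgd_reward(foot_heights, ground=0.0, tol=0.02):
    """Hypothetical Foot-Ground Deviation reward: feet near the ground
    plane (within a small tolerance) score close to 1; larger deviation
    decays the reward exponentially."""
    dev = np.maximum(np.abs(foot_heights - ground) - tol, 0.0)
    return float(np.exp(-dev.mean()))

def anti_freezing_reward(joints, scale=1e-3):
    """Hypothetical anti-freezing reward: rewards mean frame-to-frame
    joint displacement so physics rewards cannot be gamed by freezing.
    `joints` has shape (frames, num_joints, 3)."""
    vel = np.linalg.norm(np.diff(joints, axis=0), axis=-1).mean()
    return float(np.tanh(vel / scale))

def total_reward(r_imit, r_fgd, r_af, weights=(1.0, 0.5, 0.5)):
    """Weighted sum of imitation, FGD, and anti-freezing rewards.
    r_imit would come from imitability in a physics simulator."""
    w_imit, w_fgd, w_af = weights
    return w_imit * r_imit + w_fgd * r_fgd + w_af * r_af
```

A static motion (all frames identical) gets an anti-freezing reward of 0, while any motion with sustained frame-to-frame movement approaches 1, counteracting the freezing collapse the abstract describes.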

Results

We evaluate Skeleton2Stage from both qualitative and quantitative perspectives, with a focus on mesh-level physical plausibility, aesthetic quality, and motion-condition consistency across different motion generation models and datasets.

BibTeX

@misc{jia2026skeleton2stagerewardguidedfinetuningphysically,
      title={Skeleton2Stage: Reward-Guided Fine-Tuning for Physically Plausible Dance Generation}, 
      author={Jidong Jia and Youjian Zhang and Huan Fu and Dacheng Tao},
      year={2026},
      eprint={2602.13778},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2602.13778}, 
}