Enhanced Fine-Grained Motion Diffusion for Text-Driven Human Motion Synthesis

Cited: 0
Authors
Wei, Dong [1 ]
Sun, Xiaoning [1 ]
Sun, Huaijiang [1 ]
Hu, Shengxiang [1 ]
Li, Bin [2 ]
Li, Weiqing [1 ]
Lu, Jianfeng [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing, Peoples R China
[2] Tianjin AiForward Sci & Technol Co Ltd, Tianjin, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The emergence of text-driven motion synthesis offers animators great potential to create efficiently. In most cases, however, textual expressions contain only general, qualitative motion descriptions and lack fine depiction and sufficient intensity, so the synthesized motions either (a) are semantically compliant but uncontrollable at the level of specific pose details, or (b) deviate from the provided descriptions altogether, leaving animators with undesired results. In this paper, we propose DiffKFC, a conditional diffusion model for text-driven motion synthesis with KeyFrames Collaborated, enabling realistic generation with collaborative and efficient dual-level control: coarse guidance at the semantic level, plus only a few keyframes for direct, fine-grained depiction down to the body-posture level. Unlike existing inference-editing diffusion models that incorporate conditions without training, our conditional diffusion model is explicitly trained and can fully exploit the correlations among texts, keyframes and the diffused target frames. To preserve the control capability of discrete and sparse keyframes, we customize dilated mask attention modules in which only partially valid tokens, indicated by the dilated keyframe mask, participate in local-to-global attention. Additionally, we develop a simple yet effective smoothness prior that steers the generated frames toward seamless keyframe transitions at inference. Extensive experiments show that our model not only achieves state-of-the-art performance in semantic fidelity but, more importantly, satisfies animator requirements through fine-grained guidance without tedious labor.
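The "dilated keyframe mask" mentioned in the abstract can be illustrated with a minimal sketch: sparse keyframe indices are dilated so that frames within a small neighborhood of each keyframe also count as valid tokens for attention. The function name, signature, and the uniform dilation radius below are illustrative assumptions, not the paper's actual implementation.

```python
def dilated_keyframe_mask(num_frames, keyframes, dilation):
    """Return a boolean mask over frame tokens: True where a frame lies
    within `dilation` frames of any keyframe (a 'valid' token that may
    participate in local-to-global attention)."""
    mask = [False] * num_frames
    for k in keyframes:
        lo = max(0, k - dilation)
        hi = min(num_frames - 1, k + dilation)
        for i in range(lo, hi + 1):
            mask[i] = True
    return mask

# Example: a 10-frame sequence with keyframes at frames 2 and 7,
# each dilated by one frame on either side.
mask = dilated_keyframe_mask(num_frames=10, keyframes=[2, 7], dilation=1)
print(mask)  # frames 1-3 and 6-8 are valid
```

In a full model, such a mask would typically be passed to an attention layer so that non-valid (masked-out) tokens are excluded from the keyframe-conditioned attention computation; progressively larger dilation radii would widen the valid region from local to global.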
Pages: 5876-5884 (9 pages)