LaMD: Latent Motion Diffusion for Image-Conditional Video Generation

Times Cited: 0
Authors
Hu, Yaosi [1 ]
Chen, Zhenzhong [1 ]
Luo, Chong [2 ]
Affiliations
[1] Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan, Peoples R China
[2] Microsoft Res Asia, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Video generation; Video prediction; Diffusion model; Motion generation;
DOI
10.1007/s11263-025-02386-7
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
The video generation field has witnessed rapid improvements with the introduction of recent diffusion models. While these models have successfully enhanced appearance quality, they still face challenges in generating coherent and natural movements while efficiently sampling videos. In this paper, we propose to condense video generation into a problem of motion generation, to improve the expressiveness of motion and make video generation more manageable. This can be achieved by breaking down the video generation process into latent motion generation and video reconstruction. Specifically, we present a latent motion diffusion (LaMD) framework, which consists of a motion-decomposed video autoencoder and a diffusion-based motion generator, to implement this idea. Through careful design, the motion-decomposed video autoencoder compresses movement patterns into a concise latent motion representation. Consequently, the diffusion-based motion generator can efficiently generate realistic motion in a continuous latent space under multi-modal conditions, at a cost similar to that of image diffusion models. Results show that LaMD generates high-quality videos on various benchmark datasets, including BAIR, Landscape, NATOPS, MUG and CATER-GEN, which encompass a variety of stochastic dynamics and highly controllable movements across multiple image-conditional video generation tasks, while significantly decreasing sampling time.
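The abstract describes a two-stage design: an autoencoder that factors a clip into appearance (from the conditioning image) and a compact latent motion code, and a diffusion model that generates that motion code given the image. The sketch below only illustrates this decomposition under assumed module names, tensor shapes, and a fixed 16-frame 64x64 clip; it is not the authors' architecture.

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# All module names, layer choices, and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class MotionDecomposedAutoencoder(nn.Module):
    """Stage 1 (assumed): separate a clip into appearance and a compact
    latent motion code, then reconstruct the clip from the two."""

    def __init__(self, motion_dim: int = 256):
        super().__init__()
        # Placeholder encoders/decoder; the paper's actual networks differ.
        self.motion_encoder = nn.Sequential(nn.Flatten(2), nn.LazyLinear(motion_dim))
        self.appearance_encoder = nn.Sequential(nn.Flatten(1), nn.LazyLinear(motion_dim))
        self.decoder = nn.LazyLinear(16 * 3 * 64 * 64)

    def encode_motion(self, video: torch.Tensor) -> torch.Tensor:
        # video: (B, T, C, H, W) -> per-frame features -> pooled motion code (B, motion_dim)
        return self.motion_encoder(video).mean(dim=1)

    def decode(self, first_frame: torch.Tensor, motion_latent: torch.Tensor) -> torch.Tensor:
        # first_frame: (B, C, H, W); motion_latent: (B, motion_dim)
        appearance = self.appearance_encoder(first_frame)
        out = self.decoder(torch.cat([appearance, motion_latent], dim=-1))
        return out.view(-1, 16, 3, 64, 64)  # reconstructed clip


class LatentMotionDiffusion(nn.Module):
    """Stage 2 (assumed): a denoiser over the continuous motion latent,
    conditioned on the input image (and, optionally, other modalities)."""

    def __init__(self, motion_dim: int = 256, cond_dim: int = 256):
        super().__init__()
        self.denoiser = nn.Sequential(
            nn.Linear(motion_dim + cond_dim + 1, 512),
            nn.SiLU(),
            nn.Linear(512, motion_dim),
        )

    def forward(self, noisy_motion: torch.Tensor, t: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # noisy_motion: (B, motion_dim); t: (B,) diffusion step; cond: (B, cond_dim)
        t = t.float().unsqueeze(-1) / 1000.0  # crude timestep normalization
        return self.denoiser(torch.cat([noisy_motion, cond, t], dim=-1))


# Toy usage (shapes only): compress motion, then reconstruct from image + motion.
ae = MotionDecomposedAutoencoder()
video = torch.randn(2, 16, 3, 64, 64)
z_motion = ae.encode_motion(video)        # (2, 256) latent motion code
recon = ae.decode(video[:, 0], z_motion)  # (2, 16, 3, 64, 64)
```

The point of the split is that diffusion runs only over the low-dimensional motion latent rather than over pixels, which is what makes the sampling cost comparable to an image diffusion model.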
Pages: 17
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 2654 - 2666