Text2Performer: Text-Driven Human Video Generation

Cited by: 1
Authors
Jiang, Yuming [1 ]
Yang, Shuai [1 ]
Koh, Tong Liang [1 ]
Wu, Wayne [2 ]
Loy, Chen Change [1 ]
Liu, Ziwei [1 ]
Affiliations
[1] Nanyang Technol Univ, S Lab, Singapore, Singapore
[2] Shanghai AI Lab, Shanghai, Peoples R China
Keywords
DOI
10.1109/ICCV51070.2023.02079
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Text-driven content creation has evolved into a transformative technique that revolutionizes creativity. Here we study the task of text-driven human video generation, where a video sequence is synthesized from texts describing the appearance and motions of a target performer. Compared to general text-driven video generation, human-centric video generation requires maintaining the appearance of the synthesized human while performing complex motions. In this work, we present Text2Performer to generate vivid human videos with articulated motions from texts. Text2Performer has two novel designs: 1) a decomposed human representation and 2) a diffusion-based motion sampler. First, we decompose the VQVAE latent space into human appearance and pose representations in an unsupervised manner by utilizing the nature of human videos. In this way, the appearance is well maintained along the generated frames. Then, we propose a continuous VQ-diffuser to sample a sequence of pose embeddings. Unlike existing VQ-based methods that operate in the discrete space, the continuous VQ-diffuser directly outputs continuous pose embeddings for better motion modeling. Finally, a motion-aware masking strategy is designed to mask the pose embeddings spatio-temporally to enhance temporal coherence. Moreover, to facilitate the task of text-driven human video generation, we contribute a Fashion-Text2Video dataset with manually annotated action labels and text descriptions. Extensive experiments demonstrate that Text2Performer generates high-quality human videos (up to 512×256 resolution) with diverse appearances and flexible motions. Our project page is https://yumingj.github.io/projects/Text2Performer.html.
Pages: 22690 - 22700
Page count: 11
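The abstract describes two designs: a decomposed appearance/pose representation and a sampler that denoises continuous pose embeddings. The sketch below is a minimal, hypothetical PyTorch illustration of those two ideas only (the motion-aware masking strategy is omitted); all module names, shapes, and the toy sampling loop are assumptions for illustration, not the authors' released implementation.

# Hedged sketch of the two designs named in the abstract:
# (1) one appearance embedding shared across frames, per-frame pose embeddings for motion;
# (2) iterative refinement directly in the continuous pose-embedding space.
# Names and shapes are illustrative assumptions, not Text2Performer's actual code.
import torch
import torch.nn as nn

class DecomposedEncoder(nn.Module):
    """Splits per-frame features into an appearance part and a pose part."""
    def __init__(self, in_ch=3, dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_appearance = nn.Linear(dim, dim)  # frame-invariant identity/clothing
        self.to_pose = nn.Linear(dim, dim)        # frame-specific articulation

    def forward(self, frames):                    # frames: (T, 3, H, W)
        feats = self.backbone(frames)             # (T, dim)
        appearance = self.to_appearance(feats).mean(dim=0, keepdim=True)  # (1, dim)
        pose = self.to_pose(feats)                                        # (T, dim)
        return appearance, pose

class ContinuousPoseDenoiser(nn.Module):
    """Predicts refined pose embeddings from noisy ones, conditioned on a text embedding."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim * 2 + 1, 512), nn.GELU(),
                                 nn.Linear(512, dim))

    def forward(self, noisy_pose, text_emb, t):    # (T, dim), (dim,), scalar in [0, 1]
        T = noisy_pose.shape[0]
        cond = text_emb.expand(T, -1)              # broadcast text condition to all frames
        t_col = torch.full((T, 1), float(t))       # timestep as an extra input feature
        return self.net(torch.cat([noisy_pose, cond, t_col], dim=-1))

def sample_pose_sequence(denoiser, text_emb, num_frames=8, dim=256, steps=10):
    """Toy iterative refinement over the continuous pose-embedding sequence."""
    pose = torch.randn(num_frames, dim)            # start from noise
    for t in reversed(range(steps)):
        pose = denoiser(pose, text_emb, t / steps) # replace current estimate with prediction
    return pose

if __name__ == "__main__":
    enc = DecomposedEncoder()
    appearance, pose = enc(torch.randn(8, 3, 64, 32))
    denoiser = ContinuousPoseDenoiser()
    sampled = sample_pose_sequence(denoiser, torch.randn(256))
    print(appearance.shape, pose.shape, sampled.shape)  # (1, 256) (8, 256) (8, 256)

Because the shared appearance embedding is factored out before sampling, only the pose sequence needs to be generated per frame, which is one plausible reading of how the paper keeps appearance consistent across frames.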