Scripted Video Generation With a Bottom-Up Generative Adversarial Network

Times cited: 14
Authors
Chen, Qi [1 ,2 ]
Wu, Qi [3 ]
Chen, Jian [1 ]
Wu, Qingyao [1 ]
van den Hengel, Anton [3 ]
Tan, Mingkui [1 ]
Affiliations
[1] South China Univ Technol, Sch Software Engn, Guangzhou 510640, Peoples R China
[2] Pazhou Lab, Guangzhou 510335, Peoples R China
[3] Univ Adelaide, Sch Comp Sci, Adelaide, SA 5005, Australia
Funding
National Natural Science Foundation of China;
Keywords
Generative adversarial networks; video generation; semantic alignment; temporal coherence;
DOI
10.1109/TIP.2020.3003227
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Generating videos from a text description (such as a script) is non-trivial due to the intrinsic complexity of image frames and the structure of videos. Although Generative Adversarial Networks (GANs) have been successfully applied to generate images conditioned on a natural language description, it remains very challenging to generate realistic videos whose frames must maintain both spatial and temporal coherence. In this paper, we propose a novel Bottom-up GAN (BoGAN) method for generating videos given a text description. To ensure the coherence of the generated frames and to make the whole video match the language description semantically, we design a bottom-up optimisation mechanism to train BoGAN. Specifically, we devise a region-level loss via an attention mechanism to preserve local semantic alignment and to draw details in different sub-regions of the video conditioned on the words most relevant to them. Moreover, to guarantee the matching between text and frame, we introduce a frame-level discriminator, which also maintains the fidelity of each frame and the coherence across frames. Last, to ensure global semantic alignment between the whole video and the given text, we apply a video-level discriminator. We evaluate the effectiveness of the proposed BoGAN on two synthetic datasets (i.e., SBMG and TBMG) and two real-world datasets (i.e., MSVD and KTH).
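The abstract describes a three-level objective (region, frame, video) combined in a bottom-up fashion. The sketch below is a rough, hypothetical PyTorch illustration of how such region-, frame-, and video-level terms could be combined into one generator loss; it is not the authors' released implementation, and all module designs, feature shapes, attention details, and loss weights are assumptions made for illustration.

# Hypothetical sketch of BoGAN-style multi-level generator losses.
# Module designs, shapes and weights are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameDiscriminator(nn.Module):
    # Scores each frame for realism and text-frame matching (assumed design).
    def __init__(self, text_dim=256, ch=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 2 * ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1))
        self.score = nn.Linear(2 * ch + text_dim, 1)

    def forward(self, frame, text_emb):          # frame: (N, 3, H, W)
        h = self.conv(frame).flatten(1)
        return self.score(torch.cat([h, text_emb], dim=1))

class VideoDiscriminator(nn.Module):
    # Scores the whole clip for global text-video alignment (assumed design).
    def __init__(self, text_dim=256, ch=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, ch, (3, 4, 4), (1, 2, 2), (1, 1, 1)), nn.LeakyReLU(0.2),
            nn.Conv3d(ch, 2 * ch, (3, 4, 4), (1, 2, 2), (1, 1, 1)), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1))
        self.score = nn.Linear(2 * ch + text_dim, 1)

    def forward(self, video, text_emb):          # video: (B, 3, T, H, W)
        h = self.conv(video).flatten(1)
        return self.score(torch.cat([h, text_emb], dim=1))

def region_word_loss(region_feats, word_embs, gamma=5.0):
    # Attention-style region-word matching in the spirit of the region-level
    # loss described in the abstract (exact formulation is an assumption).
    # region_feats: (B, R, D) sub-region features; word_embs: (B, W, D).
    sim = torch.bmm(word_embs, region_feats.transpose(1, 2))   # (B, W, R)
    attn = F.softmax(gamma * sim, dim=2)                       # regions attended per word
    context = torch.bmm(attn, region_feats)                    # (B, W, D)
    return (1.0 - F.cosine_similarity(context, word_embs, dim=2)).mean()

def generator_loss(fake_video, text_emb, word_embs, region_feats,
                   frame_d, video_d, weights=(1.0, 1.0, 1.0)):
    # Bottom-up combination: region-level + frame-level + video-level terms.
    b, c, t, h, w = fake_video.shape
    frames = fake_video.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
    f_logits = frame_d(frames, text_emb.repeat_interleave(t, dim=0))
    v_logits = video_d(fake_video, text_emb)
    l_region = region_word_loss(region_feats, word_embs)
    l_frame = F.binary_cross_entropy_with_logits(f_logits, torch.ones_like(f_logits))
    l_video = F.binary_cross_entropy_with_logits(v_logits, torch.ones_like(v_logits))
    return weights[0] * l_region + weights[1] * l_frame + weights[2] * l_video

In a full training loop the two discriminators would also be optimised with the usual real/fake targets and matched/mismatched text pairs; only the generator-side combination is sketched here.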
Pages: 7454-7467
Number of pages: 14
Related Papers
50 records in total
  • [41] Automated Video Generation of Moving Digits from Text Using Deep Deconvolutional Generative Adversarial Network
    Ullah, Anwar
    Yu, Xinguo
    Numan, Muhammad
    CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 77(2): 2359-2383
  • [42] Bottom-up inference of loss rate in sensor network
    School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, China
    Journal of Computational Information Systems, 2008, 4(4): 1429-1434
  • [43] Bottom-Up Construction of an Adaptive Enzymatic Reaction Network
    Helwig, Britta
    van Sluijs, Bob
    Pogodaev, Aleksandr A.
    Postma, Sjoerd G. J.
    Huck, Wilhelm T. S.
    ANGEWANDTE CHEMIE-INTERNATIONAL EDITION, 2018, 57(43): 14065-14069
  • [44] ODD-VGAN: Optimised Dual Discriminator Video Generative Adversarial Network for Text-to-Video Generation with Heuristic Strategy
    Mehmood, Rayeesa
    Bashir, Rumaan
    Giri, Kaiser J.
    JOURNAL OF INFORMATION & KNOWLEDGE MANAGEMENT, 2023,
  • [45] Bottom-up generative up-cycling: a part based design study with genetic algorithms
    Zirek, Seda
    RESULTS IN ENGINEERING, 2023, 18
  • [46] Bottom-up excitonics
    Aspuru-Guzik, Alan
    ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY, 2016, 251
  • [47] A bottom-up review
    Standing, G
    FOREIGN POLICY, 2001, (122): 8+
  • [48] Bottom-Up Management
    [Anonymous]
    HUMAN ORGANIZATION, 1950, 9(1): 38
  • [49] Bottom-Up Management
    Gordon, Paul J.
    INDUSTRIAL & LABOR RELATIONS REVIEW, 1950, 3(4): 620-621
  • [50] BOTTOM-UP TESTING
    Mehta, K. D.
    IEEE SOFTWARE, 1990, 7(5): 4