Unifying Multimodal Transformer for Bi-directional Image and Text Generation

Cited by: 22
Authors
Huang, Yupan [1 ]
Xue, Hongwei [2 ]
Liu, Bei [3 ]
Lu, Yutong [1 ]
Affiliations
[1] Sun Yat Sen Univ, Guangzhou, Guangdong, Peoples R China
[2] Univ Sci & Technol China, Hefei, Peoples R China
[3] Microsoft Res Asia, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021
Keywords
cross-modal generation; image captioning; text-to-image synthesis
DOI
10.1145/3474085.3481540
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We study the joint learning of image-to-text and text-to-image generation, two naturally bi-directional tasks. Existing works typically design a separate task-specific model for each direction, which imposes substantial design effort. In this work, we propose a unified image-and-text generative framework based on a single multimodal model that jointly learns both directions. We adopt the Transformer as our unified architecture for its strong performance and task-agnostic design. Specifically, we formulate both tasks as sequence generation: images and text are represented as unified sequences of tokens, and the Transformer learns multimodal interactions to generate these sequences. We further propose two-level-granularity feature representations and sequence-level training to improve the Transformer-based unified framework. Experiments show that our approach significantly improves the FID of the previous Transformer-based model X-LXMERT from 37.0 to 29.9 (lower is better) for text-to-image generation, and improves the CIDEr-D score from 100.9% to 122.6% for fine-tuned image-to-text generation on the MS-COCO dataset. Our code is available online.
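The abstract's unified-sequence formulation can be sketched as follows. This is an illustrative sketch only, not the authors' released code: the vocabulary sizes, special-token layout, and the helper name `unify` are all assumptions. The idea is that text tokens and discretized image tokens share one vocabulary, so a single autoregressive Transformer trained with next-token prediction covers both generation directions.

```python
# Illustrative sketch (assumed sizes/ids, not the paper's implementation):
# text tokens and discrete image codes (e.g. from a learned visual codebook)
# are packed into one shared index space, with the condition placed before
# the target so either direction reduces to next-token prediction.

TEXT_VOCAB = 30000        # assumed text vocabulary size
IMAGE_CODEBOOK = 8192     # assumed number of discrete image codes
IMG_OFFSET = TEXT_VOCAB   # image codes are shifted past the text ids
BOS = TEXT_VOCAB + IMAGE_CODEBOOK  # special ids placed after both vocabularies
SEP = BOS + 1
EOS = BOS + 2

def unify(text_ids, image_codes, direction):
    """Build one token sequence: condition first, target last, so the same
    autoregressive objective serves both bi-directional tasks."""
    image_ids = [IMG_OFFSET + c for c in image_codes]
    if direction == "text2image":
        return [BOS] + text_ids + [SEP] + image_ids + [EOS]
    if direction == "image2text":
        return [BOS] + image_ids + [SEP] + text_ids + [EOS]
    raise ValueError(f"unknown direction: {direction}")
```

At inference time, the model would be prompted with the condition segment up to `SEP` and decoded until `EOS`; the decoded ids on the image side are mapped back through the codebook to pixels.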
Pages: 1138-1147
Page count: 10
Related Papers (50 in total)
  • [21] Proposal With Alignment: A Bi-Directional Transformer for 360° Video Viewport Proposal
    Guo, Yichen
    Xu, Mai
    Jiang, Lai
    Deng, Xin
    Zhou, Jing
    Chen, Gaoxing
    Sigal, Leonid
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (11) : 11423 - 11437
  • [22] Sentiment Analysis of Text Based on CNN and Bi-directional LSTM Model
    Zhou, Kai
    Long, Fei
    2018 24TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATION AND COMPUTING (ICAC' 18), 2018, : 613 - 617
  • [23] Hierarchical bi-directional conceptual interaction for text-video retrieval
    Han, Wenpeng
    Niu, Guanglin
    Zhou, Mingliang
    Zhang, Xiaowei
    MULTIMEDIA SYSTEMS, 2024, 30 (06)
  • [24] Authorship Attribution on Kannada Text using Bi-Directional LSTM Technique
    Chandrika, C. P.
    Kallimani, Jagadish S.
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2022, 13 (09) : 963 - 971
  • [25] Outpatient blood pressure monitoring using bi-directional text messaging
    Anthony, Chris A.
    Polgreen, Linnea A.
    Chounramany, James
    Foster, Eric D.
    Goerdt, Christopher J.
    Miller, Michelle L.
    Suneja, Manish
    Segre, Alberto M.
    Carter, Barry L.
    Polgreen, Philip M.
    JOURNAL OF THE AMERICAN SOCIETY OF HYPERTENSION, 2015, 9 (05) : 375 - 381
  • [26] IMAGE ENLARGEMENT USING BI-DIRECTIONAL SHIFTED LINEAR INTERPOLATION
    Tamura, Yuta
    Tanaka, Kiyoshi
    2008 INTERNATIONAL SYMPOSIUM ON INTELLIGENT SIGNAL PROCESSING AND COMMUNICATIONS SYSTEMS (ISPACS 2008), 2008, : 290 - 293
  • [27] Facial Image Completion Using Bi-Directional Pixel LSTM
    Yu, Xiulan
    He, Jiahao
    Zhang, Zufan
    IEEE ACCESS, 2020, 8 : 48642 - 48651
  • [28] Bi-Directional Co-Attention Network for Image Captioning
    Jiang, Weitao
    Wang, Weixuan
    Hu, Haifeng
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2021, 17 (04)
  • [29] Bi-directional Relationship Inferring Network for Referring Image Segmentation
    Hu, Zhiwei
    Feng, Guang
    Sun, Jiayu
    Zhang, Lihe
    Lu, Huchuan
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 4423 - 4432
  • [30] Bi-Directional Seed Attention Network for Interactive Image Segmentation
    Song, Gwangmo
    Lee, Kyoung Mu
    IEEE SIGNAL PROCESSING LETTERS, 2020, 27 : 1540 - 1544