Autoencoder-Based Collaborative Attention GAN for Multi-Modal Image Synthesis

Cited by: 9
Authors
Cao, Bing [1 ,2 ]
Cao, Haifang [1 ,3 ]
Liu, Jiaxu [1 ,3 ]
Zhu, Pengfei [1 ,3 ]
Zhang, Changqing [1 ,3 ]
Hu, Qinghua [1 ,3 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300403, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710000, Peoples R China
[3] Tianjin Univ, Haihe Lab Informat Technol Applicat Innovat, Tianjin 300403, Peoples R China
Keywords
Image synthesis; Collaboration; Task analysis; Generative adversarial networks; Feature extraction; Data models; Image reconstruction; Multi-modal image synthesis; collaborative attention; single-modal attention; multi-modal attention; TRANSLATION; NETWORK;
DOI
10.1109/TMM.2023.3274990
CLC number
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
Multi-modal images are required in a wide range of practical scenarios, from clinical diagnosis to public security. However, certain modalities may be incomplete or unavailable because of restricted imaging conditions, which commonly leads to decision bias in many real-world applications. Despite the significant advancement of existing image synthesis techniques, learning complementary information from multi-modal inputs remains challenging. To address this problem, we propose an autoencoder-based collaborative attention generative adversarial network (ACA-GAN) that uses the available multi-modal images to generate the missing ones. The collaborative attention mechanism deploys a single-modal attention module and a multi-modal attention module to effectively extract complementary information from the multiple available modalities. Considering the significant modal gap, we further develop an autoencoder network to extract the self-representation of the target modality, guiding the generative model to fuse target-specific information from multiple modalities. This considerably improves cross-modal consistency with the desired modality, thereby greatly enhancing image synthesis performance. Quantitative and qualitative comparisons on various multi-modal image synthesis tasks demonstrate that our approach produces more precise and realistic results than several prior methods.
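The abstract only outlines the architecture at a high level. The following PyTorch-style sketch is a minimal illustration of that idea, not the authors' implementation: all module names, layer choices, the softmax fusion rule, and the hyperparameters are assumptions made for illustration, and the adversarial (GAN) training loop and losses are omitted.

```python
import torch
import torch.nn as nn


class SingleModalAttention(nn.Module):
    """Channel attention applied to each available modality independently."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight the modality's own feature maps with a learned channel gate.
        return x * self.gate(x)


class MultiModalAttention(nn.Module):
    """Fuse per-modality features with spatial weights conditioned on the target code."""

    def __init__(self, channels, num_modalities):
        super().__init__()
        self.mix = nn.Conv2d(channels * (num_modalities + 1), num_modalities, kernel_size=1)

    def forward(self, feats, target_code):
        # feats: list of (B, C, H, W) tensors; target_code: (B, C, H, W) latent of the target modality.
        weights = torch.softmax(self.mix(torch.cat(feats + [target_code], dim=1)), dim=1)
        return sum(weights[:, m:m + 1] * f for m, f in enumerate(feats))


class TargetAutoencoder(nn.Module):
    """Learns a self-representation of the target modality (available during training)."""

    def __init__(self, in_ch, channels):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(in_ch, channels, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(channels, in_ch, 3, padding=1)

    def forward(self, x):
        code = self.encoder(x)
        return code, self.decoder(code)  # latent code + reconstruction


class ACAGeneratorSketch(nn.Module):
    """Per-modality encoders -> single-modal attention -> target-guided fusion -> decoder."""

    def __init__(self, num_modalities=2, in_ch=1, channels=32):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Conv2d(in_ch, channels, 3, padding=1), nn.ReLU())
            for _ in range(num_modalities)
        )
        self.single_att = nn.ModuleList(SingleModalAttention(channels) for _ in range(num_modalities))
        self.multi_att = MultiModalAttention(channels, num_modalities)
        self.decoder = nn.Conv2d(channels, in_ch, 3, padding=1)

    def forward(self, inputs, target_code):
        feats = [att(enc(x)) for enc, att, x in zip(self.encoders, self.single_att, inputs)]
        return self.decoder(self.multi_att(feats, target_code))


if __name__ == "__main__":
    ae = TargetAutoencoder(in_ch=1, channels=32)
    gen = ACAGeneratorSketch(num_modalities=2, in_ch=1, channels=32)
    t1, t2, target = (torch.randn(1, 1, 64, 64) for _ in range(3))
    code, recon = ae(target)    # self-representation used as guidance + reconstruction term
    fake = gen([t1, t2], code)  # synthesize the missing modality from the two available inputs
    print(fake.shape, recon.shape)
```

In the setting described by the abstract, the autoencoder's latent code of the ground-truth target modality would only be available at training time, where it guides the generator to fuse target-specific information from the available modalities.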
Pages: 995-1010
Number of pages: 16
Related Papers
50 records in total
  • [41] Dual Autoencoder-based Framework for Image Compression and Decompression. Patel, Bhargav. FIFTEENTH INTERNATIONAL CONFERENCE ON MACHINE VISION, ICMV 2022, 2023, 12701.
  • [42] MixFuse: An iterative mix-attention transformer for multi-modal image fusion. Li, Jinfu; Song, Hong; Liu, Lei; Li, Yanan; Xia, Jianghan; Huang, Yuqi; Fan, Jingfan; Lin, Yucong; Yang, Jian. EXPERT SYSTEMS WITH APPLICATIONS, 2025, 261.
  • [43] Autoencoder-based image compression for wireless sensor networks. Lungisani, Bose Alex; Zungeru, Adamu Murtala; Lebekwe, Caspar; Yahya, Abid. SCIENTIFIC AFRICAN, 2024, 24.
  • [44] Deep Convolutional AutoEncoder-based Lossy Image Compression. Cheng, Zhengxue; Sun, Heming; Takeuchi, Masaru; Katto, Jiro. 2018 PICTURE CODING SYMPOSIUM (PCS 2018), 2018: 253-257.
  • [45] An Autoencoder-Based Image Reconstruction for Electrical Capacitance Tomography. Zheng, Jin; Peng, Lihui. IEEE SENSORS JOURNAL, 2018, 18(13): 5464-5474.
  • [46] MIA-Net: Multi-Modal Interactive Attention Network for Multi-Modal Affective Analysis. Li, Shuzhen; Zhang, Tong; Chen, Bianna; Chen, C. L. Philip. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14(04): 2796-2809.
  • [47] Multi-modal Remote Sensing Image Description Based on Word Embedding and Self-Attention Mechanism. Wang, Yuan; Alifu, Kuerban; Ma, Hongbing; Li, Junli; Halik, Umut; Lv, Yalong. 2019 3RD INTERNATIONAL SYMPOSIUM ON AUTONOMOUS SYSTEMS (ISAS 2019), 2019: 358-363.
  • [48] Multi-modal Remote Sensing Image Description Based on Word Embedding and Self-Attention Mechanism. Wang, Yuan; Alifu, Kuerban; Ma, Hongbing; Li, Junli; Halik, Umut; Lv, Yalong. 3rd International Symposium on Autonomous Systems, ISAS 2019, 2019: 358-363.
  • [49] Unsupervised multi-modal image translation based on the squeeze-and-excitation mechanism and feature attention module. HU Zhentao; HU Chonghao; YANG Haoran; SHUAI Weiwei. High Technology Letters, 2024, 30(01): 23-30.
  • [50] Unsupervised multi-modal image translation based on the squeeze-and-excitation mechanism and feature attention module. Hu Z.; Hu C.; Yang H.; Shuai W. High Technology Letters, 2024, 30(01): 23-30.