Infrared and Visible Image Fusion via Interactive Compensatory Attention Adversarial Learning

Citations: 16
Authors
Wang, Zhishe [1 ]
Shao, Wenyu [1 ]
Chen, Yanlin [1 ]
Xu, Jiawei [2 ]
Zhang, Xiaoqin [2 ]
Affiliations
[1] Taiyuan Univ Sci & Technol, Sch Appl Sci, Taiyuan 030024, Peoples R China
[2] Wenzhou Univ, Key Lab Intelligent Informat Safety & Emergency Z, Wenzhou 325035, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Image fusion; attention interaction; attention compensation; dual discriminators; adversarial learning; NETWORK; NEST;
DOI
10.1109/TMM.2022.3228685
CLC number
TP [automation and computer technology]
Discipline classification code
0812
Abstract
Existing generative adversarial fusion methods generally concatenate source images or deep features and extract local features through convolutional operations without considering their global characteristics, which tends to limit fusion performance. To address this, we propose a novel interactive compensatory attention fusion network, termed ICAFusion. In the generator, we construct a multi-level encoder-decoder network with a triple path, in which dedicated infrared and visible paths provide additional intensity and gradient information to the concatenating path. Moreover, we develop interactive and compensatory attention modules that exchange information between paths and model long-range dependencies through a cascaded channel-spatial attention model. The generated attention maps focus more strongly on infrared target perception and visible detail characterization, and are used to reconstruct the fused image. The generator thus exploits both local and global features to increase the representation ability of feature extraction and reconstruction. Extensive experiments show that ICAFusion achieves superior fusion performance and better generalization, outperforming other state-of-the-art methods in both subjective visual assessment and objective metric evaluation.
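The abstract's central mechanism is a cascaded channel-spatial attention model for capturing long-range dependencies in each path. Below is a minimal PyTorch sketch of such a cascade, assuming a CBAM-style design (global pooling plus a shared MLP for channel attention, followed by a convolution over pooled maps for spatial attention); all module names, the reduction ratio, and the kernel size are illustrative assumptions, not the authors' ICAFusion implementation.

    # Sketch of a cascaded channel-spatial attention block.
    # CBAM-style design assumed; not the authors' exact ICAFusion module.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Squeeze global context per channel, then reweight channels."""
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
            mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
            w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
            return x * w

    class SpatialAttention(nn.Module):
        """Weight spatial positions from channel-pooled saliency maps."""
        def __init__(self, kernel_size: int = 7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            avg = x.mean(dim=1, keepdim=True)
            mx = x.amax(dim=1, keepdim=True)
            w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
            return x * w

    class CascadedChannelSpatialAttention(nn.Module):
        """Channel attention followed by spatial attention, as the
        abstract's 'cascaded channel-spatial model' suggests."""
        def __init__(self, channels: int):
            super().__init__()
            self.ca = ChannelAttention(channels)
            self.sa = SpatialAttention()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.sa(self.ca(x))

    if __name__ == "__main__":
        feats = torch.randn(1, 64, 128, 128)  # e.g., features from one path
        print(CascadedChannelSpatialAttention(64)(feats).shape)

In ICAFusion, the resulting attention maps would additionally be exchanged and compensated across the infrared, visible, and concatenating paths; the sketch covers only the channel-spatial cascade itself.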
Pages: 7800-7813
Number of pages: 14
Related Papers (50 records in total; entries [31]-[40] shown)
• [31] Ma, Jiayi; Yu, Wei; Liang, Pengwei; Li, Chang; Jiang, Junjun. FusionGAN: A generative adversarial network for infrared and visible image fusion. INFORMATION FUSION, 2019, 48: 11-26.
• [32] Liu, Xiaowen; Wang, Renhua; Huo, Hongtao; Yang, Xin; Li, Jing. An attention-guided and wavelet-constrained generative adversarial network for infrared and visible image fusion. INFRARED PHYSICS & TECHNOLOGY, 2023, 129.
• [33] Liu, Jinyang; Li, Shutao; Tan, Lishan; Dian, Renwei. Denoiser Learning for Infrared and Visible Image Fusion. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024.
• [34] Li, Hang; Guan, Zheng; Wang, Xue; Shao, Qiuhan. Unpaired high-quality image-guided infrared and visible image fusion via adversarial network. COMPUTER AIDED GEOMETRIC DESIGN, 2024, 111.
• [35] Min, L.; Cao, S.; Zhao, H.; Liu, P. Infrared and visible image fusion using improved generative adversarial networks. Hongwai yu Jiguang Gongcheng/Infrared and Laser Engineering, 2022, 51(4).
• [36] Yin, Haitao; Xiao, Jinghu. Laplacian Pyramid Generative Adversarial Network for Infrared and Visible Image Fusion. IEEE SIGNAL PROCESSING LETTERS, 2022, 29: 1988-1992.
• [37] Huang, Shuying; Song, Zixiang; Yang, Yong; Wan, Weiguo; Kong, Xiangkai. MAGAN: Multiattention Generative Adversarial Network for Infrared and Visible Image Fusion. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72.
• [38] Xu, Dongdong; Wang, Yongcheng; Xu, Shuyan; Zhu, Kaiguang; Zhang, Ning; Zhang, Xin. Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network. APPLIED SCIENCES-BASEL, 2020, 10(2).
• [39] Li, Xiaoling; Chen, Houjin; Li, Yanfeng; Peng, Yahui. MAFusion: Multiscale Attention Network for Infrared and Visible Image Fusion. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71.
• [40] Li, Yang; Wang, Jixiao; Miao, Zhuang; Wang, Jiabao. Unsupervised densely attention network for infrared and visible image fusion. MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79(45-46): 34685-34696.