Infrared and Visible Image Fusion Using Dual-stream Generative Adversarial Network with Multiple Discriminators

Cited by: 0
Authors
Wu L. [1 ]
Kang J. [1 ]
Ji Y. [1 ]
Ma H. [1 ]
Affiliations
[1] School of Electronic Engineering, Jiangsu Ocean University, Jiangsu, Lianyungang
Source
Binggong Xuebao/Acta Armamentarii | 2024, Vol. 45, No. 06
Keywords
attention mechanism; differential image; generative adversarial network; image fusion; infrared image;
DOI
10.12382/bgxb.2023.0130
Abstract
To address the problem that existing infrared and visible image fusion methods insufficiently retain source information, an improved fusion algorithm based on a dual-stream generative adversarial network (GAN) with multiple discriminators is proposed. The improved GAN-based fusion framework consists of one generator and four discriminators, and uses differential images as auxiliary information to further improve the performance of the fusion network. The differential images not only serve as auxiliary information for the source images, guiding the generator to focus on the information unique to each modality, but also act as real data distributions that allow the differential discriminators to train adversarially against the generator. In the improved network model, the generator adopts a dual encoder-single decoder structure: each encoder extracts features from one modality mainly through a densely connected structure combined with an attention module, and the decoder reconstructs the fused image from the concatenated high-dimensional features. Each discriminator judges whether its input is a real image or the fused image, and the evaluation results constrain the optimization of the generator. Experimental results show that, compared with other algorithms, the improved algorithm achieves better fusion results in both subjective assessments and objective evaluation by quantitative metrics. © 2024 China Ordnance Industry Corporation. All rights reserved.
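
The architecture described in the abstract can be sketched in code. The following is a minimal PyTorch sketch, assuming illustrative layer sizes and module names (DenseBlock, ChannelAttention, Encoder, Generator, Discriminator) and a simple definition of the differential images as pixel-wise differences between the two source images; it is not the authors' implementation, only an illustration of the dual encoder-single decoder generator and the four-discriminator layout described above.

# Hedged sketch of the dual-stream generator / multi-discriminator layout.
# All layer counts, channel widths, and names are assumptions for illustration.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    # Densely connected conv block: each layer sees all earlier feature maps.
    def __init__(self, in_ch: int, growth: int = 16, layers: int = 3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1), nn.LeakyReLU(0.2)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style attention to reweight encoder features.
    def __init__(self, ch: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(x)

class Encoder(nn.Module):
    # One stream: dense feature extraction followed by attention.
    # Input channels: source image plus its differential image (assumed pairing).
    def __init__(self, in_ch: int = 2):
        super().__init__()
        self.dense = DenseBlock(in_ch)
        self.attn = ChannelAttention(self.dense.out_ch)

    def forward(self, x):
        return self.attn(self.dense(x))

class Generator(nn.Module):
    # Dual encoder-single decoder generator producing the fused image.
    def __init__(self):
        super().__init__()
        self.enc_ir, self.enc_vis = Encoder(), Encoder()
        fused_ch = self.enc_ir.dense.out_ch * 2
        self.decoder = nn.Sequential(
            nn.Conv2d(fused_ch, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

    def forward(self, ir, vis, diff_ir, diff_vis):
        f_ir = self.enc_ir(torch.cat([ir, diff_ir], dim=1))
        f_vis = self.enc_vis(torch.cat([vis, diff_vis], dim=1))
        return self.decoder(torch.cat([f_ir, f_vis], dim=1))

class Discriminator(nn.Module):
    # PatchGAN-style critic; assumed one instance per real-data distribution
    # (infrared, visible, and the two differential images -> four in total).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1))

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    g = Generator()
    d_ir, d_vis, d_diff_ir, d_diff_vis = (Discriminator() for _ in range(4))
    ir, vis = torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128)
    diff_ir, diff_vis = vis - ir, ir - vis  # assumed definition of differential images
    fused = g(ir, vis, diff_ir, diff_vis)
    print(fused.shape)  # torch.Size([1, 1, 128, 128])

In training, the generator would be optimized against the sum of the four discriminators' adversarial losses plus content terms, while each discriminator is trained to separate its real distribution from the fused output; the exact loss weighting is not specified in the abstract and is left out of this sketch.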
Pages: 1799-1812
Page count: 13