Infrared and visible image fusion using improved generative adversarial networks

Cited: 0
Authors
Min L. [1 ]
Cao S. [1 ]
Zhao H. [2 ]
Liu P. [2 ]
Affiliations
[1] School of Mechanical Engineering, Shenyang Jianzhu University, Shenyang
[2] Key Laboratory of Optical-Electronics Information Processing, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang
Keywords
Generative adversarial network; Image fusion; Semantic information; Spectral normalization;
DOI
10.3788/IRLA20210291
Abstract
Infrared and visible image fusion technology can provide both the thermal radiation information of infrared images and the texture detail information of visible images, and it has a wide range of applications in intelligent monitoring, target detection, and tracking. Because the two types of images are based on different imaging principles, the key to fusion technology is how to integrate the advantages of each type of image while ensuring that the fused image is not distorted. Traditional fusion methods only superimpose image information and ignore the semantic information of the images. To solve this problem, an improved generative adversarial network was proposed. The generator was designed with two branches, a local detail feature branch and a global semantic feature branch, to capture the detail and semantic information of the source images; a spectral normalization module was introduced into the discriminator, which addresses the difficulty of training traditional generative adversarial networks and accelerates network convergence; and a perceptual loss was introduced to maintain the structural similarity between the fused image and the source images and to further improve fusion accuracy. The experimental results show that the proposed method is superior to other representative methods in both subjective evaluation and objective indicators. Compared with the method based on the total variation model, the average gradient and spatial frequency are increased by 55.84% and 49.95%, respectively. Copyright ©2022 Infrared and Laser Engineering. All rights reserved.
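The objective indicators quoted in the abstract, average gradient (AG) and spatial frequency (SF), follow standard definitions in the image fusion literature. Below is a minimal pure-Python sketch of both metrics on a grayscale image represented as a 2-D list; the function names and the toy input are illustrative, not from the paper:

```python
import math

def average_gradient(img):
    """Average gradient (AG): mean magnitude of local intensity change.
    Higher AG indicates richer detail and texture in the fused image."""
    h, w = len(img), len(img[0])
    total = 0.0
    for i in range(h - 1):
        for j in range(w - 1):
            dx = img[i][j + 1] - img[i][j]   # horizontal difference
            dy = img[i + 1][j] - img[i][j]   # vertical difference
            total += math.sqrt((dx * dx + dy * dy) / 2.0)
    return total / ((h - 1) * (w - 1))

def spatial_frequency(img):
    """Spatial frequency (SF): combines row-wise (RF) and column-wise (CF)
    gradient energy, SF = sqrt(RF^2 + CF^2)."""
    h, w = len(img), len(img[0])
    rf = sum((img[i][j] - img[i][j - 1]) ** 2
             for i in range(h) for j in range(1, w)) / (h * w)
    cf = sum((img[i][j] - img[i - 1][j]) ** 2
             for i in range(1, h) for j in range(w)) / (h * w)
    return math.sqrt(rf + cf)

# Toy 3x3 "fused image" with a uniform gradient of 10 per pixel
img = [[0, 10, 20], [10, 20, 30], [20, 30, 40]]
print(round(average_gradient(img), 3))   # → 10.0
print(round(spatial_frequency(img), 3))  # → 11.547
```

Both metrics are reference-free: they measure sharpness of the fused result itself rather than similarity to the source images, which is why the paper pairs them with subjective evaluation.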
References (17 total)
  • [1] Shen Ying, Huang Chunhong, Huang Feng, et al., Infrared and visible image fusion: review of key technologies, Infrared and Laser Engineering, 50, 9, (2021)
  • [2] Shen Yali, RGBT dual-model Siamese tracking network with feature fusion, Infrared and Laser Engineering, 50, 3, (2021)
  • [3] Chen J, Wu K, Cheng Z, et al., A saliency-based multiscale approach for infrared and visible image fusion, Signal Processing, 182, 4, (2021)
  • [4] Huan Kewei, Li Xiangyang, Cao Yutong, et al., Infrared and visible image fusion with convolutional neural network and NSST, Infrared and Laser Engineering, 51, 3, (2022)
  • [5] An W B, Wang H M., Infrared and visible image fusion with supervised convolutional neural network, Optik - International Journal for Light and Electron Optics, 219, 17, (2020)
  • [6] Pan Y, Pi D, Khan I A, et al., DenseNetFuse: A study of deep unsupervised DenseNet to infrared and visual image fusion, Journal of Ambient Intelligence and Humanized Computing, 3, (2021)
  • [7] Goodfellow I J, Pouget-Abadie J, Mirza M, et al., Generative adversarial networks, Advances in Neural Information Processing Systems, 3, pp. 2672-2680, (2014)
  • [8] Ma J, Wei Y, Liang P, et al., FusionGAN: A generative adversarial network for infrared and visible image fusion, Information Fusion, 48, pp. 11-26, (2019)
  • [9] Ma J, Xu H, Jiang J, et al., DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion, IEEE Transactions on Image Processing, 29, pp. 4980-4995, (2020)
  • [10] Arjovsky M, Chintala S, Bottou L., Wasserstein GAN, (2017)