AFSFusion: An Adjacent Feature Shuffle Combination Network for Infrared and Visible Image Fusion

Cited by: 1
Authors
Hu, Yufeng [1 ]
Xu, Shaoping [2 ]
Cheng, Xiaohui [2 ]
Zhou, Changfei [2 ]
Xiong, Minghai [2 ]
Affiliations
[1] Nanchang Univ, Sch Qianhu, Nanchang 330031, Peoples R China
[2] Nanchang Univ, Sch Math & Comp Sci, Nanchang 330031, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 9
Keywords
infrared and visible image fusion; adjacent feature shuffle fusion; adaptive weight adjustment strategy; subjective and objective evaluation; INFORMATION;
DOI
10.3390/app13095640
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703
Abstract
To obtain fused images with excellent contrast, distinct target edges, and well-preserved details, we propose an adaptive image fusion network called the adjacent feature shuffle-fusion network (AFSFusion). The proposed network adopts a UNet-like architecture and incorporates key refinements to both the network architecture and the loss functions. Regarding the architecture, the proposed two-branch adjacent feature shuffle-fusion module (AFSF) expands the number of channels to fuse the feature channels of several adjacent convolutional layers in the first half of AFSFusion, enhancing its ability to extract, transmit, and modulate feature information. We replace the original rectified linear unit (ReLU) with leaky ReLU to alleviate the vanishing-gradient problem and add a channel shuffle operation at the end of the AFSF module to facilitate information interaction between features. Concerning the loss functions, we propose an adaptive weight adjustment (AWA) strategy that assigns weight values to the corresponding pixels of the infrared (IR) and visible images in the fused image, according to the VGG16 gradient feature responses of the IR and visible images. This strategy efficiently handles different scene contents. After normalization, the weight values are used as weighting coefficients for the two sets of images. The weighting coefficients are applied simultaneously to three loss terms: mean squared error (MSE), structural similarity (SSIM), and total variation (TV), resulting in clearer objects and richer texture detail in the fused images. We conducted a series of experiments on several benchmark databases, and the results demonstrate the effectiveness of the proposed network architecture and its superiority over other state-of-the-art fusion methods.
AFSFusion ranks first on several objective metrics, showing the best overall performance and exhibiting sharper, richer edges of specific targets, which is more in line with human visual perception. The remarkable performance gain is ascribed to the proposed AFSF module and AWA strategy, which enable balanced extraction, fusion, and modulation of image features throughout the pipeline.
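The channel shuffle and leaky ReLU operations mentioned in the abstract are standard building blocks; a minimal pure-Python sketch (representing a feature map as a flat list of channels, with hypothetical helper names not taken from the paper) might look like:

```python
def channel_shuffle(channels, groups):
    """Interleave channels across groups (ShuffleNet-style shuffling).

    `channels` is a flat list of per-channel feature maps. The list is
    split into `groups` contiguous groups, which are then interleaved,
    so features from different branches can mix in subsequent layers.
    Assumes len(channels) is divisible by `groups`.
    """
    n = len(channels) // groups  # channels per group
    grouped = [channels[g * n:(g + 1) * n] for g in range(groups)]
    return [grouped[g][i] for i in range(n) for g in range(groups)]


def leaky_relu(x, negative_slope=0.01):
    """Leaky ReLU: keeps a small slope for negative inputs, which
    alleviates the vanishing-gradient problem noted in the abstract."""
    return x if x >= 0.0 else negative_slope * x
```

For example, `channel_shuffle([0, 1, 2, 3, 4, 5], groups=2)` yields `[0, 3, 1, 4, 2, 5]`: the two branch groups `[0, 1, 2]` and `[3, 4, 5]` are interleaved, so each position in the output alternates between branches.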
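At each pixel, the AWA strategy reduces to normalizing the two gradient feature responses into weights and applying them to the loss terms. Below is a hedged pure-Python sketch of the weighted MSE term only (SSIM and TV are weighted analogously per the abstract); the VGG16 gradient responses are assumed precomputed, and the sum-to-one normalization is our assumption, since the abstract only states "after normalization":

```python
def awa_weights(resp_ir, resp_vis):
    """Normalize per-pixel gradient responses into (w_ir, w_vis) pairs.

    `resp_ir` / `resp_vis` are flat lists of non-negative gradient
    feature responses for the IR and visible images (assumed to come
    from a VGG16 feature extractor). Each pair is normalized so that
    w_ir + w_vis == 1; equal weights are used where both responses
    are zero (flat regions).
    """
    weights = []
    for r_ir, r_vis in zip(resp_ir, resp_vis):
        s = r_ir + r_vis
        weights.append((0.5, 0.5) if s == 0 else (r_ir / s, r_vis / s))
    return weights


def weighted_mse(fused, ir, vis, weights):
    """Pixel-wise weighted MSE between the fused image and both sources:
    stronger-response pixels pull the fused result toward that source."""
    total = 0.0
    for f, i, v, (w_ir, w_vis) in zip(fused, ir, vis, weights):
        total += w_ir * (f - i) ** 2 + w_vis * (f - v) ** 2
    return total / len(fused)
```

The design intent is that pixels where the IR image has the stronger gradient response (e.g., a warm target against a cold background) are fitted more closely to the IR intensity, while texture-rich visible regions dominate elsewhere.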
Pages: 20
Related Papers (50 total)
  • [41] MFTCFNet: infrared and visible image fusion network based on multi-layer feature tightly coupled
    Hao, Shuai
    Li, Tong
    Ma, Xu
    Li, Tian-Qi
    Qi, Tian-Rui
    Li, Jia-Hao
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (11) : 8217 - 8228
  • [42] SFINet: A semantic feature interactive learning network for full-time infrared and visible image fusion
    Song, Wenhao
    Li, Qilei
    Gao, Mingliang
    Chehri, Abdellah
    Jeon, Gwanggil
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 261
  • [43] A dual-branch infrared and visible image fusion network using progressive image-wise feature transfer
    Xu, Shaoping
    Zhou, Changfei
    Xiao, Jian
    Tao, Wuyong
    Dai, Tianyu
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 102
  • [44] VMDM-fusion: a saliency feature representation method for infrared and visible image fusion
    Yang, Yong
    Liu, Jia-Xiang
    Huang, Shu-Ying
    Lu, Hang-Yuan
    Wen, Wen-Ying
    SIGNAL IMAGE AND VIDEO PROCESSING, 2021, 15 (06) : 1221 - 1229
  • [46] Infrared and Visible Image Fusion via Attention-Based Adaptive Feature Fusion
    Wang, Lei
    Hu, Ziming
    Kong, Quan
    Qi, Qian
    Liao, Qing
    ENTROPY, 2023, 25 (03)
  • [47] Multi-feature decomposition and transformer-fusion: an infrared and visible image fusion network based on multi-feature decomposition and transformer
    Li, Xujun
    Duan, Zhicheng
    Chang, Jia
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (06)
  • [48] Infrared and visible image fusion based on global context network
    Li, Yonghong
    Shi, Yu
    Pu, Xingcheng
    Zhang, Suqiang
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (05)
  • [49] Infrared and visible image fusion with supervised convolutional neural network
    An, Wen-Bo
    Wang, Hong-Mei
    OPTIK, 2020, 219
  • [50] A Dual-branch Network for Infrared and Visible Image Fusion
    Fu, Yu
    Wu, Xiao-Jun
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 10675 - 10680