SDRSwin: A Residual Swin Transformer Network with Saliency Detection for Infrared and Visible Image Fusion

Times Cited: 0
Authors
Li, Shengshi [1 ]
Wang, Guanjun [1 ,2 ]
Zhang, Hui [3 ]
Zou, Yonghua [1 ,2 ]
Affiliations
[1] Hainan Univ, Sch Informat & Commun Engn, Haikou 570228, Peoples R China
[2] Hainan Univ, State Key Lab Marine Resource Utilizat South China, Haikou 570228, Peoples R China
[3] Hainan Univ, Sch Forestry, Key Lab Genet & Germplasm Innovat Trop Special For, Minist Educ, Haikou 570228, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
image fusion; saliency detection; residual Swin Transformer; infrared image; Hainan gibbon; INFORMATION MEASURE; PERFORMANCE; CLASSIFICATION;
DOI
10.3390/rs15184467
CLC Classification Number
X [Environmental Science, Safety Science];
Discipline Code
08 ; 0830 ;
Abstract
Infrared and visible image fusion generates a single information-rich image by combining complementary modal information from images captured by different sensors. Saliency detection can better emphasize the targets of interest. We propose a residual Swin Transformer fusion network based on saliency detection, termed SDRSwin, which aims to highlight the salient thermal targets in the infrared image while preserving the texture details in the visible image. The SDRSwin network is trained with a two-stage approach. In the first stage, we train an encoder-decoder network based on residual Swin Transformers to achieve powerful feature extraction and reconstruction capabilities. In the second stage, we develop a novel salient loss function that guides the network to fuse the salient targets from the infrared image with the background detail regions from the visible image. Extensive results indicate that our method produces abundant texture details with clear, bright infrared targets, and that it outperforms twenty-one state-of-the-art methods in both subjective and objective evaluation.
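The second-stage idea described above (salient regions pulled toward the infrared image, background pulled toward the visible image) can be illustrated with a minimal sketch. The paper's actual saliency-detection module and loss are more elaborate; here the saliency weight is a hypothetical stand-in based on normalized infrared intensity, and the function names are illustrative only.

```python
import numpy as np

def saliency_weight(ir: np.ndarray) -> np.ndarray:
    """Toy saliency proxy: normalized infrared intensity in [0, 1].

    A hypothetical stand-in for the paper's saliency-detection step,
    under the assumption that hot (bright) infrared pixels are salient.
    """
    s = ir - ir.min()
    rng = s.max()
    return s / rng if rng > 0 else np.zeros_like(s)

def salient_loss(fused: np.ndarray, ir: np.ndarray, vis: np.ndarray) -> float:
    """Saliency-weighted pixel loss (illustrative sketch).

    Salient pixels (w close to 1) are penalized for deviating from the
    infrared image; background pixels (w close to 0) are penalized for
    deviating from the visible image.
    """
    w = saliency_weight(ir)
    return float(np.mean(w * (fused - ir) ** 2 + (1.0 - w) * (fused - vis) ** 2))
```

In a real fusion network this scalar would be computed on the network's output and minimized by gradient descent; the weighting is what steers salient and background regions toward different source images.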
Pages: 29
Related Papers
50 records
  • [31] Infrared and visible image fusion with improved residual dense generative adversarial network
    Min L.
    Cao S.-J.
    Zhao H.-C.
    Liu P.-F.
    Tai B.-C.
    Kongzhi yu Juece/Control and Decision, 2023, 38 (03): : 721 - 728
  • [32] Infrared and Visible Image Fusion Based on Dual Channel Residual Dense Network
    Feng Xin
    Yang Jieming
    Zhang Hongde
    Qiu Guohang
    ACTA PHOTONICA SINICA, 2023, 52 (11)
  • [33] Semantic perceptive infrared and visible image fusion Transformer
    Yang, Xin
    Huo, Hongtao
    Li, Chang
    Liu, Xiaowen
    Wang, Wenxi
    Wang, Cheng
    PATTERN RECOGNITION, 2024, 149
  • [34] ITFuse: An interactive transformer for infrared and visible image fusion
    Tang, Wei
    He, Fazhi
    Liu, Yu
    PATTERN RECOGNITION, 2024, 156
  • [35] TBRAFusion: Infrared and visible image fusion based on two-branch residual attention Transformer
    Zhang, Wangwei
    Sun, Hao
    Zhou, Bin
    ELECTRONIC RESEARCH ARCHIVE, 2024, 33 (01): : 158 - 180
  • [36] Infrared and Visible Image Fusion Based on Adaptive Dual-channel PCNN and Saliency Detection
    An, Ying
    Fan, Xunli
    Chen, Li
    Liu, Pei
    2019 15TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND SECURITY (CIS 2019), 2019, : 20 - 25
  • [37] An interactively reinforced paradigm for joint infrared-visible image fusion and saliency object detection
    Wang, Di
    Liu, Jinyuan
    Liu, Risheng
    Fan, Xin
    INFORMATION FUSION, 2023, 98
  • [38] Infrared and visible image fusion based on saliency detection and two-scale transform decomposition
    Zhang, Siqi
    Li, Xiongfei
    Zhang, Xiaoli
    Zhang, Shuhan
    INFRARED PHYSICS & TECHNOLOGY, 2021, 114
  • [39] Adaptive infrared and visible image fusion method by using rolling guidance filter and saliency detection
    Lin, Yingcheng
    Cao, Dingxin
    Zhou, Xichuan
    OPTIK, 2022, 262
  • [40] THFuse: An infrared and visible image fusion network using transformer and hybrid feature extractor
    Chen, Jun
    Ding, Jianfeng
    Yu, Yang
    Gong, Wenping
    NEUROCOMPUTING, 2023, 527 : 71 - 82