SDRSwin: A Residual Swin Transformer Network with Saliency Detection for Infrared and Visible Image Fusion

Times Cited: 0
Authors
Li, Shengshi [1 ]
Wang, Guanjun [1 ,2 ]
Zhang, Hui [3 ]
Zou, Yonghua [1 ,2 ]
Affiliations
[1] Hainan Univ, Sch Informat & Commun Engn, Haikou 570228, Peoples R China
[2] Hainan Univ, State Key Lab Marine Resource Utilizat South China, Haikou 570228, Peoples R China
[3] Hainan Univ, Sch Forestry, Key Lab Genet & Germplasm Innovat Trop Special For, Minist Educ, Haikou 570228, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
image fusion; saliency detection; residual Swin Transformer; infrared image; Hainan gibbon; INFORMATION MEASURE; PERFORMANCE; CLASSIFICATION;
DOI
10.3390/rs15184467
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science];
Discipline Classification Code
08; 0830;
Abstract
Infrared and visible image fusion generates a single information-rich image by combining the complementary modal information captured by different sensors. Saliency detection helps emphasize the targets of concern. We propose a residual Swin Transformer fusion network based on saliency detection, termed SDRSwin, which aims to highlight the salient thermal targets in the infrared image while preserving the texture details of the visible image. The SDRSwin network is trained in two stages. In the first stage, we train an encoder-decoder network built from residual Swin Transformers to obtain strong feature extraction and reconstruction capabilities. In the second stage, we develop a novel salient loss function that guides the network to fuse the salient targets of the infrared image with the background detail regions of the visible image. Extensive results show that our method produces abundant texture details with clear, bright infrared targets and outperforms twenty-one state-of-the-art methods in both subjective and objective evaluation.
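The abstract does not specify the salient loss, but the general idea it describes can be sketched as a saliency map weighting two reconstruction terms: salient pixels pull the fused result toward the infrared image, while the remaining pixels pull it toward the visible image. Everything below (the function name, the flat-list pixel representation, the quadratic penalty, and the `alpha` balance weight) is an assumption for illustration, not the paper's actual formulation:

```python
def salient_fusion_loss(fused, ir, vis, saliency, alpha=1.0):
    """Hypothetical saliency-weighted fusion loss (illustration only).

    All images are flat lists of pixel intensities; `saliency` holds
    weights in [0, 1] derived from the infrared image. Salient pixels
    penalize deviation from the infrared image, while background
    pixels penalize deviation from the visible image.
    """
    n = len(fused)
    # Salient regions: pull the fused image toward infrared intensities.
    target_term = sum(s * (f - i) ** 2
                      for s, f, i in zip(saliency, fused, ir)) / n
    # Background regions: pull the fused image toward visible-image detail.
    background_term = sum((1.0 - s) * (f - v) ** 2
                          for s, f, v in zip(saliency, fused, vis)) / n
    return target_term + alpha * background_term
```

When the fused image matches the infrared image on salient pixels and the visible image elsewhere, the loss is zero, which is the behavior the abstract attributes to the second training stage.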
Pages: 29
Related Papers
50 records
  • [1] SwinFuse: A Residual Swin Transformer Fusion Network for Infrared and Visible Images
    Wang, Zhishe
    Chen, Yanlin
    Shao, Wenyu
    Li, Hui
    Zhang, Lei
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [2] Infrared and Visible Image Fusion Algorithm Based on Improved Residual Swin Transformer and Sobel Operators
    Luo, Yongyu
    Luo, Zhongqiang
    IEEE ACCESS, 2024, 12 : 82134 - 82145
  • [3] GANSD: A generative adversarial network based on saliency detection for infrared and visible image fusion
    Fu, Yinghua
    Liu, Zhaofeng
    Peng, Jiansheng
    Gupta, Rohit
    Zhang, Dawei
    IMAGE AND VISION COMPUTING, 2025, 154
  • [4] Visible and infrared image fusion based on visual saliency detection
    Tan, Xizi
    Guo, Liqiang
    2020 19TH INTERNATIONAL SYMPOSIUM ON DISTRIBUTED COMPUTING AND APPLICATIONS FOR BUSINESS ENGINEERING AND SCIENCE (DCABES 2020), 2020, : 134 - 137
  • [5] SICFuse: Swin Transformer integrated with invertible neural network and correlation coefficient assistance for infrared and visible image fusion
    Guo, Xin
    Lu, Tongwei
    Chen, Lei
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (06)
  • [6] Infrared and Visible Image Fusion based on Saliency Detection and Infrared Target Segment
    Li, Jun
    Song, Minghui
    Peng, Yuanxi
    2ND INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING, INFORMATION SCIENCE AND INTERNET TECHNOLOGY, CII 2017, 2017, : 21 - 30
  • [7] A Generative Adversarial Network with Dual Discriminators for Infrared and Visible Image Fusion Based on Saliency Detection
    Zhang, Dazhi
    Hou, Jilei
    Wu, Wei
    Lu, Tao
    Zhou, Huabing
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2021, 2021
  • [8] Infrared and Visible Image Fusion with Convolutional Neural Network and Transformer
    Yang, Yang
    Ren, Zhennan
    Li, Beichen
    LASER & OPTOELECTRONICS PROGRESS, 2023, 60 (16)
  • [9] An infrared and visible image fusion using knowledge measures for intuitionistic fuzzy sets and Swin Transformer
    Khan, Muhammad Jabir
    Jiang, Shu
    Ding, Weiping
    Huang, Jiashuang
    Wang, Haipeng
    INFORMATION SCIENCES, 2024, 648
  • [10] Swin Transformer Fusion Network for Image Quality Assessment
    Kim, Hyeongmyeon
    Yim, Changhoon
    IEEE ACCESS, 2024, 12 : 57741 - 57754