CFRNet: Cross-Attention-Based Fusion and Refinement Network for Enhanced RGB-T Salient Object Detection

Cited: 0
Authors
Deng, Biao [1 ,2 ]
Liu, Di [2 ]
Cao, Yang [2 ]
Liu, Hong [2 ]
Yan, Zhiguo [1 ]
Chen, Hu [2 ]
Affiliations
[1] Dongfang Electric Autocontrol Engineering Co., Ltd., Deyang 618000, China
[2] Sichuan University, College of Computer Science, Chengdu 610000, China
Funding
National Natural Science Foundation of China
Keywords
RGB-T salient object detection; RGB-thermal fusion; cross-attention; fusion and refinement;
DOI
10.3390/s24227146
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Existing deep learning-based RGB-T salient object detection methods often struggle with effectively fusing RGB and thermal features. Therefore, obtaining high-quality features and fully integrating these two modalities are central research focuses. We developed an illumination prior-based coefficient predictor (MICP) to determine optimal interaction weights. We then designed a saliency-guided encoder (SG Encoder) to extract multi-scale thermal features incorporating saliency information. The SG Encoder guides the extraction of thermal features by leveraging their correlation with RGB features, particularly those with strong semantic relationships to salient object detection tasks. Finally, we employed a Cross-attention-based Fusion and Refinement Module (CrossFRM) to refine the fused features. The robust thermal features help refine the spatial focus of the fused features, aligning them more closely with salient objects. Experimental results demonstrate that our proposed approach can more accurately locate salient objects, significantly improving performance compared to 11 state-of-the-art methods.
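The record includes only the abstract, so the sketch below is illustrative rather than a reproduction of CFRNet: assuming the CrossFRM performs multi-head cross-attention in which thermal tokens act as queries over RGB tokens, a minimal PyTorch block might look like the following. The class name, channel width, head count, and refinement MLP are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Hypothetical cross-attention fusion block (not the authors' CrossFRM)."""

    def __init__(self, channels: int = 256, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(channels)
        self.norm2 = nn.LayerNorm(channels)
        self.refine = nn.Sequential(
            nn.Linear(channels, channels),
            nn.GELU(),
            nn.Linear(channels, channels),
        )

    def forward(self, rgb_feat: torch.Tensor, thermal_feat: torch.Tensor) -> torch.Tensor:
        # rgb_feat, thermal_feat: (B, C, H, W) feature maps from the two encoders.
        b, c, h, w = rgb_feat.shape
        rgb = rgb_feat.flatten(2).transpose(1, 2)           # (B, H*W, C) tokens
        thermal = thermal_feat.flatten(2).transpose(1, 2)   # (B, H*W, C) tokens
        # Thermal queries attend to RGB keys/values to gather complementary context.
        attended, _ = self.cross_attn(query=thermal, key=rgb, value=rgb)
        fused = self.norm1(attended + thermal)               # residual connection
        fused = self.norm2(self.refine(fused) + fused)       # lightweight refinement
        return fused.transpose(1, 2).reshape(b, c, h, w)     # back to (B, C, H, W)

# Minimal usage example with dummy feature maps.
if __name__ == "__main__":
    rgb = torch.randn(2, 256, 20, 20)
    thermal = torch.randn(2, 256, 20, 20)
    out = CrossAttentionFusion()(rgb, thermal)
    print(out.shape)  # torch.Size([2, 256, 20, 20])
```

Per the abstract, the thermal features produced by the SG Encoder guide the spatial refinement of the fused features; the block above only illustrates the cross-attention step, not the full refinement module.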
Pages: 14
Related Papers
50 records in total
  • [31] Wang, Han; Song, Kechen; Huang, Liming; Wen, Hongwei; Yan, Yunhui. Thermal images-aware guided early fusion network for cross-illumination RGB-T salient object detection. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 118.
  • [32] Lv, Chengtao; Wan, Bin; Zhou, Xiaofei; Sun, Yaoqi; Zhang, Jiyong; Yan, Chenggang. Lightweight Cross-Modal Information Mutual Reinforcement Network for RGB-T Salient Object Detection. ENTROPY, 2024, 26 (02).
  • [33] Chen, Gang; Shao, Feng; Chai, Xiongli; Chen, Hangwei; Jiang, Qiuping; Meng, Xiangchao; Ho, Yo-Sung. CGMDRNet: Cross-Guided Modality Difference Reduction Network for RGB-T Salient Object Detection. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (09): 6308-6323.
  • [34] Zhu, Jinchao; Zhang, Xiaoyu; Dong, Feng; Yan, Siyu; Meng, Xianbang; Li, Yuehua; Tan, Panlong. Transformer-based Adaptive Interactive Promotion Network for RGB-T Salient Object Detection. 2022 34TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2022: 1989-1994.
  • [35] Zhang, Ping; Xu, Mengnan; Zhang, Ziyan; Gao, Pan; Zhang, Jing. Feature aggregation with transformer for RGB-T salient object detection. NEUROCOMPUTING, 2023, 546.
  • [36] Dong, Feng; Wang, Yuxuan; Zhu, Jinchao; Li, Yuehua. Adaptive interactive network for RGB-T salient object detection with double mapping transformer. MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (20): 59169-59193.
  • [37] Zhang, Qiang; Xi, Ruida; Xiao, Tonglin; Huang, Nianchang; Luo, Yongjiang. Enabling modality interactions for RGB-T salient object detection. COMPUTER VISION AND IMAGE UNDERSTANDING, 2022, 222.
  • [38] Liu, Zhengyi; Huang, Xiaoshen; Zhang, Guanghui; Fang, Xianyong; Wang, Linbo; Tang, Bin. Scribble-Supervised RGB-T Salient Object Detection. 2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023: 2369-2374.
  • [39] Zhang, Zihao; Wang, Jie; Han, Yahong. Saliency Prototype for RGB-D and RGB-T Salient Object Detection. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023: 3696-3705.
  • [40] Peng, Daogang; Zhou, Weiyi; Pan, Junzhen; Wang, Danhao. MSEDNet: Multi-scale fusion and edge-supervised network for RGB-T salient object detection. NEURAL NETWORKS, 2024, 171: 410-422.