CFRNet: Cross-Attention-Based Fusion and Refinement Network for Enhanced RGB-T Salient Object Detection

Cited by: 0
Authors
Deng, Biao [1 ,2 ]
Liu, Di [2 ]
Cao, Yang [2 ]
Liu, Hong [2 ]
Yan, Zhiguo [1 ]
Chen, Hu [2 ]
Affiliations
[1] Dongfang Elect Autocontrol Engn Co LTD, Deyang 618000, Peoples R China
[2] Sichuan Univ, Coll Comp Sci, Chengdu 610000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
RGB-T salient object detection; RGB-thermal fusion; cross-attention; fusion and refinement;
DOI
10.3390/s24227146
CLC Classification
O65 [Analytical Chemistry]
Subject Classification
070302; 081704
Abstract
Existing deep learning-based RGB-T salient object detection methods often struggle with effectively fusing RGB and thermal features. Therefore, obtaining high-quality features and fully integrating these two modalities are central research focuses. We developed an illumination prior-based coefficient predictor (MICP) to determine optimal interaction weights. We then designed a saliency-guided encoder (SG Encoder) to extract multi-scale thermal features incorporating saliency information. The SG Encoder guides the extraction of thermal features by leveraging their correlation with RGB features, particularly those with strong semantic relationships to salient object detection tasks. Finally, we employed a Cross-attention-based Fusion and Refinement Module (CrossFRM) to refine the fused features. The robust thermal features help refine the spatial focus of the fused features, aligning them more closely with salient objects. Experimental results demonstrate that our proposed approach can more accurately locate salient objects, significantly improving performance compared to 11 state-of-the-art methods.
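The fusion step the abstract describes, using one modality's features to query the other's, can be illustrated with a minimal single-head cross-attention sketch in NumPy. This is not the authors' CrossFRM implementation; the function name, the choice of RGB as query and thermal as key/value, and the projection weights are all assumptions made for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(rgb, thermal, w_q, w_k, w_v):
    """Fuse RGB and thermal feature tokens via cross-attention.

    rgb, thermal: (n_tokens, d) flattened spatial feature maps.
    Queries come from the RGB stream; keys/values come from the
    thermal stream, so thermal cues re-weight the RGB features.
    """
    q = rgb @ w_q
    k = thermal @ w_k
    v = thermal @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1])   # scaled dot-product
    attn = softmax(scores, axis=-1)           # rows sum to 1
    return rgb + attn @ v                     # residual keeps RGB content

# Toy usage with random features and small random projections.
rng = np.random.default_rng(0)
n, d = 16, 8
rgb = rng.standard_normal((n, d))
thermal = rng.standard_normal((n, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
fused = cross_attention_fuse(rgb, thermal, w_q, w_k, w_v)
print(fused.shape)  # (16, 8)
```

The residual connection mirrors a common design choice in attention-based fusion modules: the fused output stays anchored to the RGB representation while the attention term injects thermally guided refinement.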
Pages: 14