TEFNet: Target-Aware Enhanced Fusion Network for RGB-T Tracking

Cited: 0
Authors
Chen, Panfeng [1 ]
Gong, Shengrong [2 ]
Ying, Wenhao [2 ]
Du, Xin [3 ]
Zhong, Shan [2 ]
Affiliations
[1] Huzhou Univ, Sch Informat Engn, Huzhou 313000, Peoples R China
[2] Changshu Inst Technol, Sch Comp Sci & Engn, Suzhou 215500, Peoples R China
[3] Suzhou Univ Sci & Technol, Sch Elect & Informat Engn, Suzhou 215009, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
RGB-T tracking; Background elimination; Complementary information;
DOI
10.1007/978-981-99-8549-4_36
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
RGB-T tracking fuses the visible (RGB) and thermal (T) modalities to achieve more robust object tracking. Existing popular RGB-T trackers often fail to fully exploit background information and the complementary information carried by the two modalities. To address these issues, we propose the target-aware enhanced fusion network (TEFNet). TEFNet concatenates the template and search-region features of each modality and then applies self-attention to enhance the single-modality features of the target by discriminating it from the background. Additionally, a background elimination module is introduced to reduce background regions. To further fuse complementary information across modalities, a dual-layer fusion module based on channel attention, self-attention, and bidirectional cross-attention is constructed. This module suppresses features from the inferior modality and amplifies those from the dominant modality, effectively eliminating the adverse effects caused by modality differences. Experimental results on the LasHeR and VTUAV datasets demonstrate that our method outperforms other representative RGB-T tracking approaches, with significant improvements of 6.6% in MPR and 7.1% in MSR on the VTUAV dataset.
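The bidirectional cross-attention at the heart of the fusion module can be sketched in a few lines. The following is a hypothetical single-head NumPy illustration of the general mechanism (each modality attends to the other, with a residual connection preserving its own features), not the authors' implementation; it omits the channel-attention and self-attention layers the abstract also describes, and all names and shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context):
    """Single-head cross-attention: `query` tokens attend to `context` tokens."""
    d = query.shape[-1]
    scores = query @ context.T / np.sqrt(d)    # (N_q, N_c) similarity matrix
    return softmax(scores, axis=-1) @ context  # (N_q, d) attended features

def bidirectional_fuse(rgb, thermal):
    """Each modality queries the other; residual sums keep each modality's
    own features while mixing in complementary information."""
    rgb_enh = rgb + cross_attention(rgb, thermal)
    th_enh = thermal + cross_attention(thermal, rgb)
    return rgb_enh + th_enh  # fused cross-modal representation

# Toy inputs: 64 spatial tokens with 256-dim features per modality.
rgb = np.random.randn(64, 256)
thermal = np.random.randn(64, 256)
fused = bidirectional_fuse(rgb, thermal)  # shape (64, 256)
```

In this sketch the residual connections mean that when one modality is uninformative (e.g. RGB at night), the fused output is still dominated by the other modality's features, which is the intuition behind suppressing the inferior modality.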
Pages: 432-443 (12 pages)
    JOURNAL OF INFORMATION PROCESSING SYSTEMS, 2023, 19 (01): : 80 - 88