TEFNet: Target-Aware Enhanced Fusion Network for RGB-T Tracking

Cited by: 0
Authors
Chen, Panfeng [1 ]
Gong, Shengrong [2 ]
Ying, Wenhao [2 ]
Du, Xin [3 ]
Zhong, Shan [2 ]
Affiliations
[1] Huzhou Univ, Sch Informat Engn, Huzhou 313000, Peoples R China
[2] Changshu Inst Technol, Sch Comp Sci & Engn, Suzhou 215500, Peoples R China
[3] Suzhou Univ Sci & Technol, Sch Elect & Informat Engn, Suzhou 215009, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
RGB-T tracking; Background elimination; Complementary information;
DOI
10.1007/978-981-99-8549-4_36
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
RGB-T tracking fuses visible (RGB) and thermal (T) modalities to achieve more robust object tracking. Existing popular RGB-T trackers often fail to fully exploit background information and the complementary information available across modalities. To address these issues, we propose the target-aware enhanced fusion network (TEFNet). TEFNet concatenates the template and search-region features from each modality and then applies self-attention to enhance the single-modality target features by discriminating the target from the background. Additionally, a background elimination module is introduced to suppress background regions. To further fuse complementary information across modalities, a dual-layer fusion module based on channel attention, self-attention, and bidirectional cross-attention is constructed. This module diminishes the features of the weaker modality and amplifies those of the dominant modality, effectively mitigating the adverse effects of modality differences. Experimental results on the LasHeR and VTUAV datasets demonstrate that our method outperforms other representative RGB-T tracking approaches, with significant improvements of 6.6% and 7.1% in MPR and MSR, respectively, on the VTUAV dataset.
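The bidirectional cross-attention at the core of the fusion module described above can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' implementation: the token count, feature dimension, residual connections, and averaging step are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # scaled dot-product attention: queries from one modality,
    # keys and values from the other modality
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ keys_values

def bidirectional_fusion(rgb_feats, thermal_feats):
    # each modality attends to the other (with a residual connection),
    # then the two enhanced streams are merged by averaging
    rgb_enh = rgb_feats + cross_attention(rgb_feats, thermal_feats)
    th_enh = thermal_feats + cross_attention(thermal_feats, rgb_feats)
    return 0.5 * (rgb_enh + th_enh)

rng = np.random.default_rng(0)
rgb = rng.standard_normal((16, 32))      # 16 tokens, 32-dim features
thermal = rng.standard_normal((16, 32))
fused = bidirectional_fusion(rgb, thermal)
print(fused.shape)  # (16, 32)
```

In the paper's full module, this bidirectional exchange is combined with channel attention and self-attention so that the dominant modality's features are weighted up before fusion; the sketch shows only the symmetric cross-attention step.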
Pages: 432-443
Page count: 12