Efficient transformer tracking with adaptive attention

Cited: 0
Authors
Xiao, Dingkun [1 ]
Wei, Zhenzhong [1 ]
Zhang, Guangjun [1 ]
Affiliations
[1] Beihang Univ, Sch Instrumentat & Optoelect Engn, Key Lab Precis Optomechatron Technol, Minist Educ, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
computer vision; convolution; convolutional neural nets; object tracking; target tracking; tracking;
DOI
10.1049/cvi2.12315
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Recently, several trackers built on the Transformer architecture have shown significant performance improvements. However, the high computational cost of multi-head attention, a core component of the Transformer, limits real-time running speed, which is crucial for tracking tasks. Additionally, the global mechanism of multi-head attention makes it susceptible to distractors carrying semantic information similar to the target's. To address these issues, the authors propose a novel adaptive attention that enhances features through a spatially sparse attention mechanism at less than 1/4 of the computational complexity of multi-head attention. The adaptive attention sets a perception range around each element in the feature map based on the target scale in the previous tracking result and adaptively searches for the information of interest, allowing the module to focus on the target region rather than on background distractors. Building on adaptive attention, the authors construct an efficient Transformer tracking framework that performs deep interaction between search and template features to activate target information and aggregates multi-level interaction features to enhance representation ability. Evaluation on seven benchmarks shows that the tracker achieves outstanding performance at 43 fps, with significant advantages in challenging scenarios.
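The core idea of spatially sparse attention, restricting each query to a local window rather than attending globally, can be illustrated with a minimal sketch. All names here are illustrative assumptions, not the paper's implementation: the paper's module adapts the perception range per element from the previous target scale, while this sketch uses a fixed `radius` on a 1-D sequence to show how the cost drops from O(n^2·d) to O(n·(2r+1)·d).

```python
import numpy as np

def local_attention(q, k, v, radius):
    """Windowed (spatially sparse) attention: each query position i
    attends only to keys in [i - radius, i + radius], so the cost is
    O(n * (2*radius + 1) * d) instead of O(n^2 * d) for full attention."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        # scaled dot-product scores over the local window only
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)
        # numerically stable softmax over the window
        w = np.exp(scores - scores.max())
        w /= w.sum()
        out[i] = w @ v[lo:hi]
    return out

rng = np.random.default_rng(0)
n, d = 16, 8
q = rng.standard_normal((n, d))
k = rng.standard_normal((n, d))
v = rng.standard_normal((n, d))
y = local_attention(q, k, v, radius=2)
print(y.shape)  # (16, 8)
```

In the adaptive setting described in the abstract, `radius` would not be fixed but derived per frame from the target scale in the previous tracking result, so the window grows or shrinks with the target.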
Pages: 13
Related papers (50 in total)
  • [31] GCAT: graph calibration attention transformer for robust object tracking
    Chen S.
    Hu X.
    Wang D.-H.
    Yan Y.
    Zhu S.
    Neural Computing and Applications, 2024, 36 (23) : 14151 - 14172
  • [32] Efficient Feature Interactions Learning with Gated Attention Transformer
    Long, Chao
    Zhu, Yanmin
    Liu, Haobing
    Yu, Jiadi
    WEB INFORMATION SYSTEMS ENGINEERING - WISE 2021, PT II, 2021, 13081 : 3 - 17
  • [33] PARAMETER-EFFICIENT VISION TRANSFORMER WITH LINEAR ATTENTION
    Zhao, Youpeng
    Tang, Huadong
    Jiang, Yingying
    Yong, A.
    Wu, Qiang
    Wang, Jun
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 1275 - 1279
  • [34] ClusterFormer: Neural Clustering Attention for Efficient and Effective Transformer
    Wang, Ningning
    Gan, Guobing
    Zhang, Peng
    Zhang, Shuai
    Wei, Junqiu
    Liu, Qun
    Jiang, Xin
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 2390 - 2402
  • [35] Query Selector-Efficient transformer with sparse attention
    Klimek, Jacek
    Klimek, Jakub
    Kraskiewicz, Witold
    Topolewski, Mateusz
    SOFTWARE IMPACTS, 2022, 11
  • [36] ParaFormer: Parallel Attention Transformer for Efficient Feature Matching
    Lu, Xiaoyong
    Yan, Yaping
    Kang, Bin
    Du, Songlin
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 2, 2023, : 1853 - 1860
  • [37] Efficient Transformer Inference with Statically Structured Sparse Attention
    Dai, Steve
    Genc, Hasan
    Venkatesan, Rangharajan
    Khailany, Brucek
    2023 60TH ACM/IEEE DESIGN AUTOMATION CONFERENCE, DAC, 2023,
  • [38] ScatterFormer: Efficient Voxel Transformer with Scattered Linear Attention
    He, Chenhang
    Li, Ruihuang
    Zhang, Guowen
    Zhang, Lei
    COMPUTER VISION - ECCV 2024, PT XXIX, 2025, 15087 : 74 - 92
  • [39] EAPT: Efficient Attention Pyramid Transformer for Image Processing
    Lin, Xiao
    Sun, Shuzhou
    Huang, Wei
    Sheng, Bin
    Li, Ping
    Feng, David Dagan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 50 - 61
  • [40] Efficient Lightweight Image Denoising with Triple Attention Transformer
    Zhou, Yubo
    Lin, Jin
    Ye, Fangchen
    Qu, Yanyun
    Xie, Yuan
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 7, 2024, : 7704 - 7712