SiamPAT: Siamese point attention networks for robust visual tracking

Times Cited: 0
Authors
Chen, Hang [1 ]
Zhang, Weiguo [1 ]
Yan, Danghui [1 ]
Affiliations
[1] Northwestern Polytech Univ, Automat Coll, Xian, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
visual tracking; attention mechanism; Siamese point attention; object attention; OBJECT TRACKING;
DOI
10.1117/1.JEI.30.5.053001
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic technology, communication technology]
Discipline Classification Codes
0808; 0809
Abstract
The attention mechanism originates from the study of human visual behavior; in recent years it has been widely used across many fields of artificial intelligence and has become an important part of neural network architectures. Many attention-based trackers have achieved improved accuracy and robustness. However, these trackers cannot accurately suppress the influence of background information and distractors, nor do they enhance the target object information, which limits their performance. We propose new Siamese point attention (SPA) networks for robust visual tracking. SPA networks learn position attention and channel attention jointly from the information of the two branches. To construct point attention, each point on the template feature is used to calculate similarity over the search feature. The similarity calculation is based on the local information of the target object, which reduces the influence of background, deformation, and rotation factors. The region of interest is obtained by computing position attention from point attention. Position attention is then integrated into the calculation of channel attention to reduce the influence of irrelevant areas. In addition, we propose object attention and integrate it into the classification and regression module to further enhance the semantic information of the target object and improve tracking accuracy. Extensive experiments on five benchmark datasets show that our method achieves state-of-the-art performance. (C) 2021 SPIE and IS&T
Pages: 17
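The abstract describes the mechanism only at a high level, so the following is a minimal, illustrative sketch of the point-attention idea it outlines: every template-feature point is scored against every search-feature point, the scores are aggregated into a position attention map over the search region, and that map is folded into a channel attention computation. This is not the authors' SiamPAT implementation; the tensor shapes, the scaled dot-product similarity, the max/softmax aggregation, and the pooling used for channel attention are all assumptions made for the sketch.

```python
# Minimal sketch of the point/position/channel attention flow described in the
# abstract. An assumption-laden illustration, not the SiamPAT code.
import torch


def point_attention(template_feat, search_feat):
    """Score every template-feature point against every search-feature point.

    template_feat: (B, C, Ht, Wt) features of the template (target) patch
    search_feat:   (B, C, Hs, Ws) features of the search region
    returns:       (B, Ht*Wt, Hs*Ws) point-to-point similarity
    """
    B, C, Ht, Wt = template_feat.shape
    t = template_feat.flatten(2).transpose(1, 2)    # (B, Ht*Wt, C)
    s = search_feat.flatten(2)                      # (B, C, Hs*Ws)
    return torch.bmm(t, s) / C ** 0.5               # scaled dot-product similarity


def position_attention(sim, search_shape):
    """Aggregate point attention into a spatial map over the search region."""
    Hs, Ws = search_shape
    # Assumption: each search location keeps its strongest response to any template point.
    pos = sim.max(dim=1).values                     # (B, Hs*Ws)
    return torch.softmax(pos, dim=-1).view(-1, 1, Hs, Ws)


def channel_attention(search_feat, pos):
    """Channel weights computed from the position-weighted search feature."""
    weighted = search_feat * pos                    # suppress irrelevant areas first
    chan = weighted.mean(dim=(2, 3), keepdim=True)  # (B, C, 1, 1)
    return torch.sigmoid(chan)


if __name__ == "__main__":
    z = torch.randn(2, 256, 7, 7)     # template features (hypothetical shape)
    x = torch.randn(2, 256, 31, 31)   # search-region features (hypothetical shape)
    sim = point_attention(z, x)
    pos = position_attention(sim, (31, 31))
    out = x * pos * channel_attention(x, pos)       # attended search feature
    print(out.shape)                  # torch.Size([2, 256, 31, 31])
```

In the paper, the attended features would then feed the classification and regression module, where the proposed object attention further enhances the target's semantic information; that part is omitted from this sketch.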
Related Papers
50 records in total
• [41] A novel Siamese Attention Network for visual object tracking of autonomous vehicles. Chen, Jia; Ai, Yibo; Qian, Yuhan; Zhang, Weidong. PROCEEDINGS OF THE INSTITUTION OF MECHANICAL ENGINEERS PART D-JOURNAL OF AUTOMOBILE ENGINEERING, 2021, 235(10-11): 2764-2775.
• [42] Siamese network visual tracking algorithm based on cascaded attention mechanism. Pu, L.; Feng, X.; Hou, Z.; Yu, W.; Ma, S. Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics, 2020, 46(12): 2302-2310.
• [43] Global Context Attention for Robust Visual Tracking. Choi, Janghoon. SENSORS, 2023, 23(05).
• [44] Incremental focus of attention for robust visual tracking. Toyama, K.; Hager, G. D. 1996 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, PROCEEDINGS, 1996: 189-195.
• [45] Multiple Context Features in Siamese Networks for Visual Object Tracking. Morimitsu, Henrique. COMPUTER VISION - ECCV 2018 WORKSHOPS, PT I, 2019, 11129: 116-131.
• [46] Distractor-Aware Siamese Networks for Visual Object Tracking. Zhu, Zheng; Wang, Qiang; Li, Bo; Wu, Wei; Yan, Junjie; Hu, Weiming. COMPUTER VISION - ECCV 2018, PT IX, 2018, 11213: 103-119.
• [47] Learning Cascaded Siamese Networks for High Performance Visual Tracking. Gao, Peng; Ma, Yipeng; Yuan, Ruyue; Xiao, Liyi; Wang, Fei. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019: 3078-3082.
• [48] Visual Product Tracking System Using Siamese Neural Networks. Jalonen, Tuomas; Laakom, Firas; Gabbouj, Moncef; Puoskari, Tuomas. IEEE ACCESS, 2021, 9: 76796-76805.
• [49] Feature Alignment and Aggregation Siamese Networks for Fast Visual Tracking. Fan, Jiaqing; Song, Huihui; Zhang, Kaihua; Yang, Kang; Liu, Qingshan. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31(04): 1296-1307.
• [50] Deep Siamese Cross-Residual Learning for Robust Visual Tracking. Wu, Fan; Xu, Tingfa; Guo, Jie; Huang, Bo; Xu, Chang; Wang, Jihui; Li, Xiangmin. IEEE INTERNET OF THINGS JOURNAL, 2021, 8(20): 15216-15227.