Joint spatiotemporal regularization and scale-aware correlation filters for visual tracking

Cited by: 5
Authors
Xu, Libin [1]
Gao, Mingliang [1]
Li, Xuesong [1]
Zhai, Wenzhe [1]
Yu, Mengting [2]
Li, Zizhan [2]
Affiliations
[1] Shandong Univ Technol, Sch Elect & Elect Engn, Zibo, Peoples R China
[2] Shandong Univ Technol, Sch Phys & Optoelect Engn, Zibo, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
visual tracking; correlation filters; spatio-temporal regularization; scale filter; ONLINE OBJECT TRACKING;
DOI
10.1117/1.JEI.30.4.043011
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Correlation filter (CF)-based tracking methods have drawn extensive attention due to their impressive performance in visual object tracking (VOT) challenges. However, existing CF-based trackers suffer from the inherent problems of boundary effects and filter degradation. Meanwhile, scale variations of the target cause tracking drift, which degrades tracking accuracy significantly. To address these problems, a tracking model named spatiotemporal regularization and scale-aware correlation filters (STR-SACF) is proposed. The STR-SACF model consists of two CFs, namely a translation filter and a scale filter (SF). The translation filter improves the accuracy of target localization, whereas the SF achieves fast and optimal scale estimation. The proposed model improves tracking accuracy while reducing computational complexity. Moreover, a spatiotemporal regularization term is introduced into both the translation filter and the SF to suppress boundary effects and filter degradation. The proposed model is optimized efficiently with the alternating direction method of multipliers (ADMM) algorithm. Experimental results on four challenging tracking benchmarks, i.e., OTB2013, OTB2015, TC128, and VOT2016, demonstrate the superiority of the proposed tracker over more than 20 state-of-the-art trackers. (c) 2021 SPIE and IS&T [DOI: 10.1117/1.JEI.30.4.043011]
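As context for the abstract, a minimal sketch of the kind of per-frame objective it describes, assuming the widely used STRCF-style spatiotemporal regularization (the paper's exact formulation may differ): the translation filter f_t is learned with a spatial weight w that penalizes coefficients near the patch boundary (suppressing the boundary effect) and a temporal term that keeps f_t close to the previous filter f_{t-1} (suppressing filter degradation).

\mathcal{E}(\mathbf{f}_t) = \frac{1}{2}\Big\| \sum_{d=1}^{D} \mathbf{x}_t^{d} * \mathbf{f}_t^{d} - \mathbf{y} \Big\|_2^{2} + \frac{1}{2} \sum_{d=1}^{D} \big\| \mathbf{w} \odot \mathbf{f}_t^{d} \big\|_2^{2} + \frac{\mu}{2} \big\| \mathbf{f}_t - \mathbf{f}_{t-1} \big\|_2^{2}

Here \mathbf{x}_t^{d} is the d-th feature channel of the training patch, \mathbf{y} is the desired Gaussian response, * denotes circular correlation, \odot the Hadamard product, and \mu weights the temporal regularization. Splitting with an auxiliary variable \mathbf{g} = \mathbf{f}_t yields closed-form ADMM subproblems in the Fourier domain; per the abstract, an analogously regularized one-dimensional filter handles scale estimation.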
Pages: 16
Related Papers
50 records in total
  • [21] Scale-Aware Domain Adaptation for Robust UAV Tracking
    Fu, Changhong
    Li, Teng
    Ye, Junjie
    Zheng, Guangze
    Li, Sihang
    Lu, Peng
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (06) : 3764 - 3771
  • [22] A background-aware correlation filter with adaptive saliency-aware regularization for visual tracking
    Zhang, Jianming
    Yuan, Tingyu
    He, Yaoqi
    Wang, Jin
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (08): 6359 - 6376
  • [24] Spatial-aware correlation filters with adaptive weight maps for visual tracking
    Tang, Feng
    Ling, Qiang
    NEUROCOMPUTING, 2019, 358 : 369 - 384
  • [25] Scene-Aware Adaptive Updating for Visual Tracking via Correlation Filters
    Li, Fan
    Zhang, Sirou
    Qiao, Xiaoya
    SENSORS, 2017, 17 (11)
  • [26] Adaptive Spatial-Temporal Regularization for Correlation Filters Based Visual Object Tracking
    Chen, Fei
    Wang, Xiaodong
    SYMMETRY-BASEL, 2021, 13 (09):
  • [27] Learning Augmented Memory Joint Aberrance Repressed Correlation Filters for Visual Tracking
    Ji, Yuanfa
    He, Jianzhong
    Sun, Xiyan
    Bai, Yang
    Wei, Zhaochuan
    bin Ghazali, Kamarul Hawari
    SYMMETRY-BASEL, 2022, 14 (08):
  • [28] VISUAL TRACKING WITH SPARSE CORRELATION FILTERS
    Dong, Yanmei
    Yang, Min
    Pei, Mingtao
    2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 439 - 443
  • [29] Adaptive Spatial Regularization Correlation Filters for UAV Tracking
    Cao, Yulin
    Dong, Shihao
    Zhang, Jiawei
    Xu, Han
    Zhang, Yan
    Zheng, Yuhui
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17 : 7867 - 7877
  • [30] Robust Multi-Model Visual Tracking With Distractor-Aware Template-Coupled Correlation Filters Joint Learning
    Zhang, Haoyang
    Liu, Guixi
    Zhang, Yi
    Hao, Zhaohui
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 1813 - 1828