Adaptive Channel Selection for Robust Visual Object Tracking with Discriminative Correlation Filters

Cited by: 58
Authors
Xu, Tianyang [1]
Feng, Zhenhua [1,2]
Wu, Xiao-Jun [3]
Kittler, Josef [1]
Affiliations
[1] Univ Surrey, Ctr Vis Speech & Signal Proc, Guildford GU2 7XH, Surrey, England
[2] Univ Surrey, Dept Comp Sci, Guildford GU2 7XH, Surrey, England
[3] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi 214122, Jiangsu, Peoples R China
Funding
UK Engineering and Physical Sciences Research Council (EPSRC); National Natural Science Foundation of China (NSFC);
Keywords
Visual Object Tracking; Discriminative Correlation Filters; Adaptive Channel Selection; Adaptive Elastic Net;
DOI
10.1007/s11263-021-01435-1
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Discriminative Correlation Filters (DCF) have been shown to achieve impressive performance in visual object tracking. However, existing DCF-based trackers rely heavily on learning regularised appearance models from invariant image feature representations. To further improve the accuracy of DCF and obtain a parsimonious model from the attribute perspective, we propose to gauge the relevance of multi-channel features for the purpose of channel selection. This is achieved by assessing the information conveyed by the features of each channel as a group, using an adaptive group elastic net that induces independent sparsity and temporal smoothness on the DCF solution. The robustness and stability of the learned appearance model are significantly enhanced by the proposed method, as the process of channel selection performs implicit spatial regularisation. We use the augmented Lagrangian method to optimise the discriminative filters efficiently. The experimental results obtained on a number of well-known benchmarking datasets demonstrate the effectiveness and stability of the proposed method. A superior performance over the state-of-the-art trackers is achieved using less than 10% of the deep feature channels.
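To make the channel-selection idea in the abstract concrete, the following is a minimal, hypothetical Python/NumPy sketch, not the authors' implementation. It solves per-channel correlation filters in closed form in the Fourier domain (a standard ridge/KCF-style solution standing in for the paper's augmented Lagrangian solver), scores each channel by the group (L2) energy of its filter, and keeps only a small fraction of channels, mimicking the selection effect that a group-sparsity penalty would induce. All names here (select_channels, keep_ratio, lam) are illustrative assumptions.

import numpy as np

def select_channels(features, response, lam=1e-2, keep_ratio=0.1):
    """Illustrative channel selection for a DCF-style tracker.

    features   : (C, H, W) multi-channel feature map of the training patch
    response   : (H, W) desired Gaussian response centred on the target
    lam        : ridge regularisation weight (assumed hyper-parameter)
    keep_ratio : fraction of channels to retain (e.g. under 10% of deep channels)

    Returns the indices of the retained channels and their filters.
    """
    C, H, W = features.shape
    X = np.fft.fft2(features, axes=(-2, -1))   # per-channel spectra
    Y = np.fft.fft2(response)

    # Closed-form multi-channel ridge solution in the Fourier domain;
    # this stands in for the paper's ADMM/augmented-Lagrangian solver.
    denom = np.sum(X * np.conj(X), axis=0) + lam
    F = np.conj(X) * Y[None, :, :] / denom[None, :, :]   # (C, H, W) filters

    # Group score per channel: L2 energy of that channel's filter.
    # A group elastic net would drive weak groups to exactly zero;
    # ranking and truncating here only imitates that selection effect.
    energy = np.sqrt(np.sum(np.abs(F) ** 2, axis=(1, 2)))
    k = max(1, int(keep_ratio * C))
    keep = np.argsort(energy)[::-1][:k]
    return keep, F[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.standard_normal((64, 32, 32))        # e.g. 64 deep feature channels
    yy, xx = np.mgrid[-16:16, -16:16]
    resp = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))   # Gaussian label
    idx, filters = select_channels(feats, resp, keep_ratio=0.1)
    print("kept channels:", sorted(idx.tolist()))

In the paper itself, channel selection and filter learning are performed jointly through the sparsity-inducing penalty and a temporal smoothness term linking consecutive frames, optimised with the augmented Lagrangian method; the ranking-and-truncation step above is only a rough surrogate for that behaviour.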
Pages: 1359-1375
Number of pages: 17
Related papers
50 records in total
  • [41] Visual object tracking algorithm based on correlation filters
    Zhang, Lei
    Wang, Yan-Jie
    Liu, Yan-Ying
    Sun, Hong-Hai
    He, Shu-Wen
Guangdianzi Jiguang/Journal of Optoelectronics Laser, 2015, 26 (07) : 1349 - 1357
  • [42] Location-Aware and Regularization-Adaptive Correlation Filters for Robust Visual Tracking
    Liu, Risheng
    Chen, Qianru
    Yao, Yuansheng
    Fan, Xin
    Luo, Zhongxuan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (06) : 2430 - 2442
  • [43] Robust Visual Tracking Based on Kernelized Correlation Filters
    Jiang, Min
    Shen, Jianyu
    Kong, Jun
    Wang, Benxuan
    2017 IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION (IEEE ICIA 2017), 2017, : 110 - 115
  • [44] Robust visual tracking with correlation filters and metric learning
    Yuan, Di
    Kang, Wei
    He, Zhenyu
    KNOWLEDGE-BASED SYSTEMS, 2020, 195
  • [45] A Robust Visual Tracking via Nonlocal Correlation Filters
    Wei, Yanxia
    Jiang, Zhen
    Chen, Dongxun
    SEVENTH INTERNATIONAL CONFERENCE ON OPTICAL AND PHOTONIC ENGINEERING (ICOPEN 2019), 2019, 11205
  • [46] Regularisation learning of correlation filters for robust visual tracking
    Jiang, Min
    Shen, Jianyu
    Kong, Jun
    Huo, Hongtao
    IET IMAGE PROCESSING, 2018, 12 (09) : 1586 - 1594
  • [47] Deformable Parts Correlation Filters for Robust Visual Tracking
    Lukezic, Alan
    Zajc, Luka Cehovin
    Kristan, Matej
    IEEE TRANSACTIONS ON CYBERNETICS, 2018, 48 (06) : 1849 - 1861
  • [48] SCSTCF: Spatial-Channel Selection and Temporal Regularized Correlation Filters for visual tracking
    Zhang, Jianming
    Feng, Wenjun
    Yuan, Tingyu
    Wang, Jin
    Sangaiah, Arun Kumar
APPLIED SOFT COMPUTING, 2022, 118
  • [50] Discriminative visual tracking via spatially smooth and steep correlation filters
    Wang, Wuwei
    Zhang, Ke
    Lv, Meibo
    Wang, Jingyu
    INFORMATION SCIENCES, 2021, 578 : 147 - 165