Robust Visual Tracking via an Improved Background Aware Correlation Filter

Cited by: 15
|
Authors
Sheng, Xiaoxiao [1 ]
Liu, Yungang [1 ]
Liang, Huijun [1 ]
Li, Fengzhong [1 ]
Man, Yongchao [1 ]
Affiliations
[1] Shandong Univ, Sch Control Sci & Engn, Jinan 250061, Shandong, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Visual tracking; correlation filter; feature fusion; scale search; OBJECT TRACKING;
DOI
10.1109/ACCESS.2019.2900666
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
Many excellent algorithms have recently emerged in the field of visual object tracking. In particular, the background-aware correlation filter (BACF) has received much attention owing to its ability to cope with the boundary effect. However, the related works suffer from two shortcomings: 1) only histogram of oriented gradients (HOG) features are extracted, which cannot fully express the visual information of targets; and 2) the scale estimation strategy uses imperfect scale parameters, which makes it impossible to accurately track targets undergoing large scale changes. To overcome these shortcomings, an improved BACF method for robust visual object tracking is proposed to locate targets with higher accuracy in complex scenarios involving scale variation, occlusion, rotation, illumination variation, and so on. Crucially, a feature fusion strategy based on HOG and color names is integrated to extract a more discriminative representation of targets, and a modified scale estimation strategy is designed to enhance the ability to track targets with large scale changes. The effectiveness and robustness of the proposed method are demonstrated through evaluations on the OTB2013 and OTB2015 benchmarks. In particular, compared with other state-of-the-art correlation filter-based trackers and deep learning-based trackers, the proposed method is competitive in terms of accuracy and success rate.
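To make the two ideas in the abstract concrete, the following is a minimal, illustrative sketch of how feature fusion (channel-wise stacking of HOG and Color Names maps) and a scale search over a small pyramid of candidate sizes typically fit together in a correlation-filter tracker. It is not the authors' implementation: every function name and parameter here (hog_features, color_names_features, crop_and_resize, filter_response, search_scale, 31 HOG bins, 11 color names, scale step 1.02, 7 scales) is an assumption made for illustration, and the feature extractors and response function are placeholders.

```python
import numpy as np

# Hypothetical stand-ins for real HOG / Color Names extractors; they only
# illustrate the shapes involved (one feature vector per spatial cell).
def hog_features(patch, cell=4, n_bins=31):
    """Stand-in HOG map: n_bins channels per cell (shape H/cell x W/cell x n_bins)."""
    h, w = patch.shape[:2]
    return np.zeros((h // cell, w // cell, n_bins), dtype=np.float32)

def color_names_features(patch, cell=4, n_colors=11):
    """Stand-in Color Names map: 11 color-probability channels per cell."""
    h, w = patch.shape[:2]
    return np.zeros((h // cell, w // cell, n_colors), dtype=np.float32)

def fuse_features(patch):
    """Feature fusion by channel-wise concatenation (e.g. 31 + 11 = 42 channels)."""
    return np.concatenate([hog_features(patch), color_names_features(patch)], axis=2)

def filter_response(features):
    """Placeholder for the correlation-filter response map (e.g. a BACF-style filter)."""
    return np.random.rand(*features.shape[:2])  # dummy response, sketch only

def crop_and_resize(frame, center, size, out_size):
    """Crop a (w, h) window around `center` (clipped to the frame) and resize it
    to `out_size` with nearest-neighbour sampling."""
    cx, cy = center
    w, h = size
    x0, y0 = max(cx - w // 2, 0), max(cy - h // 2, 0)
    x1, y1 = min(x0 + w, frame.shape[1]), min(y0 + h, frame.shape[0])
    patch = frame[y0:y1, x0:x1]
    ys = np.linspace(0, patch.shape[0] - 1, out_size[1]).astype(int)
    xs = np.linspace(0, patch.shape[1] - 1, out_size[0]).astype(int)
    return patch[np.ix_(ys, xs)]

def search_scale(frame, center, base_size, scale_step=1.02, n_scales=7):
    """Evaluate a small pyramid of scale factors around the previous target size
    and keep the factor whose fused-feature response peaks highest."""
    scales = scale_step ** (np.arange(n_scales) - n_scales // 2)
    best_scale, best_score = 1.0, -np.inf
    for s in scales:
        w, h = int(base_size[0] * s), int(base_size[1] * s)
        patch = crop_and_resize(frame, center, (w, h), base_size)
        score = filter_response(fuse_features(patch)).max()
        if score > best_score:
            best_scale, best_score = s, score
    return best_scale

if __name__ == "__main__":
    # Toy usage on a random frame with an assumed 64x128 target.
    frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
    print(search_scale(frame, center=(320, 240), base_size=(64, 128)))
```

The sketch reflects the common design pattern in correlation-filter trackers: complementary cues are stacked as extra feature channels so a single filter can weight them jointly, and scale is handled by scoring a pyramid of candidate sizes resampled to the filter's template size.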
Pages: 24877 - 24888
Page count: 12
Related Papers
50 records in total
  • [1] Robust visual tracking via a hybrid correlation filter
    Wang, Yong
    Luo, Xinbin
    Ding, Lu
    Wu, Jingjing
    Fu, Shan
MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (22): 31633 - 31648
  • [2] Robust Scalable Part-Based Visual Tracking for UAV with Background-Aware Correlation Filter
    Fu, Changhong
    Zhang, Yinqiang
    Duan, Ran
    Xie, Zongwu
    2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO), 2018: 2245 - 2252
  • [3] Robust Visual Tracking via Adaptive Kernelized Correlation Filter
    Wang, Bo
    Wang, Desheng
    Liao, Qingmin
    FOURTH INTERNATIONAL CONFERENCE ON WIRELESS AND OPTICAL COMMUNICATIONS, 2016, 9902
  • [4] Robust visual tracking via constrained correlation filter coding
    Liu, Fanghui
    Zhou, Tao
    Fu, Keren
    Yang, Jie
    PATTERN RECOGNITION LETTERS, 2016, 84: 163 - 169
  • [5] A background-aware correlation filter with adaptive saliency-aware regularization for visual tracking
    Zhang, Jianming
    Yuan, Tingyu
    He, Yaoqi
    Wang, Jin
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (08): 6359 - 6376
  • [6] Visual Tracking via Adaptive Context-Aware Correlation Filter
    Liu, Peng
    Wang, Feng
    Liu, Ming
    Ming, Delie
    PROCEEDINGS OF 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2020), 2020: 1380 - 1384
  • [7] Object Tracking via Unified Deep Background-aware Correlation Filter
    Li, Junwei
    Zhou, Xiaolong
    Chen, Shengyong
    PROCEEDINGS OF 2018 IEEE INTERNATIONAL CONFERENCE ON REAL-TIME COMPUTING AND ROBOTICS (IEEE RCAR), 2018: 550 - 555
  • [8] Robust Visual Tracking via Local-Global Correlation Filter
    Fan, Heng
    Xiang, Jinhai
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017: 4025 - 4031