Visual Object Tracking Using Structured Sparse PCA-Based Appearance Representation and Online Learning

Cited by: 3
Authors
Yoon, Gang-Joon [1 ]
Hwang, Hyeong Jae [2 ]
Yoon, Sang Min [3 ]
Affiliations
[1] Natl Inst Math Sci, 70 Yuseong Daero, Daejeon 34047, South Korea
[2] Artificial Intelligence Res Inst, 22,Daewangpangyo Ro 712Beon Gil, Seongnam Si 463400, Gyeonggi Do, South Korea
[3] Kookmin Univ, Coll Comp Sci, 77 Jeongneung Ro, Seoul 02707, South Korea
Funding
National Research Foundation of Singapore;
Keywords
visual object tracking; structured sparse PCA; appearance model; online learning; structured visual dictionary; filter;
DOI
10.3390/s18103513
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Discipline codes
070302; 081704;
Abstract
Visual object tracking is a fundamental research area in computer vision and pattern recognition because it can be utilized by a variety of intelligent systems. However, tracking is complicated by illumination change, pose change, partial occlusion, and background clutter. Sparse representation-based appearance modeling and dictionary learning that optimize the tracking history have been proposed as one way to overcome these problems, but the standard sparse representation approach is limited in its ability to represent high-dimensional descriptors. This study therefore proposes a structured sparse principal component analysis that effectively represents the complex appearance descriptors of the target object as a linear combination of a small number of elementary atoms chosen from an over-complete dictionary. An online dictionary, learned and updated by selecting similar dictionaries with high probability, makes it possible to track the target object in a variety of environments. Qualitative and quantitative experimental results, including comparisons with state-of-the-art visual object tracking algorithms, validate that the proposed algorithm performs favorably under changes in the target object and environment on benchmark video sequences.
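The abstract describes representing an appearance descriptor as a linear combination of a few atoms chosen from an over-complete dictionary. As an illustration only (not the authors' structured sparse PCA, whose structure constraints and online update are specific to the paper), the following NumPy sketch shows the underlying sparse-coding idea via greedy orthogonal matching pursuit over a toy random dictionary; all names and sizes here are hypothetical:

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit (illustrative sketch):
    approximate descriptor y with at most k atoms (columns) of D."""
    residual = y.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

# Toy over-complete dictionary: 20-dimensional descriptors, 40 unit-norm atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)

# A descriptor that is, by construction, a combination of only two atoms
y = 0.8 * D[:, 3] - 0.5 * D[:, 7]
x = omp(D, y, k=2)

print(np.flatnonzero(np.abs(x) > 1e-8))  # indices of the atoms used in the sparse code
```

The returned code `x` is sparse (at most `k` nonzeros), which is the property the paper exploits for a compact appearance representation; the structured variant additionally constrains which groups of atoms may be active together.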
Pages: 19
Related Papers
50 records in total
  • [1] Learning Appearance Manifolds with Structured Sparse Representation for Robust Visual Tracking
    Bai, Tianxiang
    Li, Y. F.
    Shao, Zhanpeng
    2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2013, : 5788 - 5793
  • [2] Online Visual Object Tracking with Supervised Sparse Representation and Learning
    Bai, Tianxiang
    Li, Y. F.
    Shao, Zhanpeng
    2014 13TH INTERNATIONAL CONFERENCE ON CONTROL AUTOMATION ROBOTICS & VISION (ICARCV), 2014, : 827 - 832
  • [3] Robust visual tracking with structured sparse representation appearance model
    Bai, Tianxiang
    Li, Y. F.
    PATTERN RECOGNITION, 2012, 45 (06) : 2390 - 2404
  • [4] Structured Sparse Representation Appearance Model for Robust Visual Tracking
    Bai, Tianxiang
    Li, Y. F.
    Tang, Yazhe
    2011 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2011,
  • [5] Discriminative Sparse Representation for Online Visual Object Tracking
    Bai, Tianxiang
    Li, Y. F.
    Zhou, Xiaolong
    2012 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO 2012), 2012,
  • [6] Robust visual tracking based on online learning sparse representation
    Zhang, Shengping
    Yao, Hongxun
    Zhou, Huiyu
    Sun, Xin
    Liu, Shaohui
    NEUROCOMPUTING, 2013, 100 : 31 - 40
  • [7] Robust Online Object Tracking with a Structured Sparse Representation Model
    Bo, Chunjuan
    Wang, Dong
    KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2016, 10 (05): : 2346 - 2362
  • [8] Visual tracking based on the sparse representation of the PCA subspace
    Chen D.-B.
    Zhu M.
    Wang H.-L.
    Optoelectronics Letters, 2017, 13 (05) : 392 - 396
  • [9] Online Object Tracking using Sparse Prototypes by Learning Visual Prior
    Divya, S.
    Latha, K.
    2013 INTERNATIONAL CONFERENCE ON COMMUNICATIONS AND SIGNAL PROCESSING (ICCSP), 2013, : 597 - 601