Discriminative subspace learning with sparse representation view-based model for robust visual tracking

Cited by: 43
Authors
Xie, Yuan [1 ]
Zhang, Wensheng [1 ]
Qu, Yanyun [2 ]
Zhang, Yinghua [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Res Ctr Precis Sensing & Control, Beijing 100190, Peoples R China
[2] Xiamen Univ, Dept Comp Sci, Video & Image Lab, Xiamen 361005, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Discriminative subspace learning; Spectral regression; Sparse representation; Object tracking; SELECTION;
DOI
10.1016/j.patcog.2013.07.010
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we propose a robust tracking algorithm to handle the drifting problem. The algorithm consists of two parts: a G&D part that combines a Generative model and a Discriminative model for tracking, and a View-Based model of the target appearance that corrects the result of the G&D part when necessary. In the G&D part, we use Maximum Margin Projection (MMP) to construct a graph model that preserves both the local geometrical and the discriminant structure of the data manifold in a low-dimensional space. Combining this discriminative subspace with a traditional generative subspace therefore lets the tracker benefit from both models. In addition, we learn the maximum margin projection within the Spectral Regression (SR) framework, which yields significant savings in computation time. To further reduce drift, an online-learned, sparsely represented view-based model of the target complements the G&D part. When the result of the G&D part is unreliable, the view-based model rectifies it to avoid drifting. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach. (C) 2013 Elsevier Ltd. All rights reserved.
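For readers unfamiliar with the spectral regression trick mentioned in the abstract, the following is a minimal sketch (not the authors' code) of how a discriminative projection can be recovered from an affinity graph by an eigen-decomposition of the graph Laplacian followed by ridge regression, rather than by solving a dense generalized eigenproblem directly. The function name, the ridge parameter alpha, and the assumption of a precomputed MMP-style affinity matrix W are illustrative only.

```python
# Minimal sketch of spectral-regression-style subspace learning, assuming
# a precomputed MMP-style affinity matrix W; names and parameters are
# illustrative, not taken from the paper.
import numpy as np

def spectral_regression_projection(X, W, n_components=10, alpha=0.1):
    """X: (n_samples, n_features) flattened image patches.
    W: (n_samples, n_samples) symmetric non-negative affinity graph.
    Returns an (n_features, n_components) projection matrix."""
    d = W.sum(axis=1)                       # node degrees
    d_inv_sqrt = 1.0 / np.sqrt(d)
    # Symmetrically normalized Laplacian: L_sym = I - D^{-1/2} W D^{-1/2}
    L_sym = np.eye(len(d)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Step 1 (graph embedding): smallest nontrivial eigenvectors of L_sym,
    # mapped back to generalized eigenvectors of (L, D) via D^{-1/2}.
    _, U = np.linalg.eigh(L_sym)
    Y = d_inv_sqrt[:, None] * U[:, 1:n_components + 1]
    # Step 2 (regression): ridge-regress each embedding dimension on X,
    # which avoids a dense generalized eigenproblem on X^T W X.
    A = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    return A

# Usage sketch: project candidate patches into the learned subspace and
# score them there (e.g., by distance to the tracked target's projection).
# X_train, W = ...  # training patches and affinity graph (assumed given)
# A = spectral_regression_projection(X_train, W)
# z = candidate_patch @ A
```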
Pages: 1383-1394
Page count: 12
Related Papers
50 records in total
  • [31] Robust visual tracking based on incremental tensor subspace learning
    Li, Xi
    Hu, Weiming
    Zhang, Zhongfei
    Zhang, Xiaoqin
    Luo, Guan
    2007 IEEE 11TH INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOLS 1-6, 2007: 960+
  • [32] Robust visual tracking of infrared object via sparse representation model
    Ma, Junkai
    Luo, Haibo
    Chang, Zheng
    Hui, Bin
    INTERNATIONAL SYMPOSIUM ON OPTOELECTRONIC TECHNOLOGY AND APPLICATION 2014: IMAGE PROCESSING AND PATTERN RECOGNITION, 2014, 9301
  • [33] Robust Ship Tracking via Multi-view Learning and Sparse Representation
    Chen, Xinqiang
    Wang, Shengzheng
    Shi, Chaojian
    Wu, Huafeng
    Zhao, Jiansen
    Fu, Junjie
    JOURNAL OF NAVIGATION, 2019, 72 (01): : 176 - 192
  • [34] Robust visual tracking based on generative and discriminative model collaboration
    Jianfang Dou
    Qin Qin
    Zimei Tu
    Multimedia Tools and Applications, 2017, 76 : 15839 - 15866
  • [35] Robust visual tracking based on generative and discriminative model collaboration
    Dou, Jianfang
    Qin, Qin
    Tu, Zimei
    MULTIMEDIA TOOLS AND APPLICATIONS, 2017, 76 (14) : 15839 - 15866
  • [36] Robust Visual Tracking via Discriminative Structural Sparse Feature
    Wang, Fenglei
    Zhang, Jun
    Guo, Qiang
    Liu, Pan
    Tu, Dan
    ADVANCES IN IMAGE AND GRAPHICS TECHNOLOGIES (IGTA 2015), 2015, 525 : 438 - 446
  • [37] Discriminative object tracking with subspace representation
    Devi, Rajkumari Bidyalakshmi
    Chanu, Yambem Jina
    Singh, Khumanthem Manglem
    VISUAL COMPUTER, 2021, 37 (05): : 1207 - 1219
  • [38] Discriminative object tracking with subspace representation
    Rajkumari Bidyalakshmi Devi
    Yambem Jina Chanu
    Khumanthem Manglem Singh
    The Visual Computer, 2021, 37 : 1207 - 1219
  • [39] Robust visual tracking using discriminative sparse collaborative map
    Zhou, Zhenghua
    Zhang, Weidong
    Zhao, Jianwei
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2019, 10 (11) : 3201 - 3212
  • [40] Robust Visual Tracking via Discriminative Sparse Point Matching
    Wen, Hui
    Ge, Shiming
    Yang, Rui
    Chen, Shuixian
    Sun, Limin
    PROCEEDINGS OF THE 2014 9TH IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA), 2014, : 1243 - 1246