Multi-view region proposal network predictive learning for tracking

Cited by: 1
Authors
Guo, Wen [1 ]
Li, Dong [1 ]
Liang, Bowen [1 ]
Shan, Bin [1 ]
Affiliations
[1] Shandong Technol & Business Univ, Sch Informat & Elect Engn, Yantai 264005, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Region proposal prediction; Multi-view multi-expert learning; Visual tracking; Prediction learning; VISUAL TRACKING; GAUSSIAN-PROCESSES; HISTOGRAMS;
DOI
10.1007/s00530-022-01001-w
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Visual tracking is one of the most challenging problems in computer vision. Most state-of-the-art visual trackers suffer from three problems: non-diverse discriminative feature representations, coarse object localization, and a limited number of positive samples. In this paper, a multi-view multi-expert region proposal prediction algorithm for tracking is proposed to address these problems concurrently within a single framework. The proposed algorithm integrates multiple views and exploits multiple complementary sources of information, which effectively addresses the lack of diverse discriminative features. It builds multiple SVM classifier models on expanded bounding boxes and adds a region proposal network module that refines their outputs to predict the optimal object location, which naturally alleviates the coarse localization and limited positive-sample problems at the same time. A comprehensive evaluation of the proposed approach on various benchmark sequences demonstrates that it significantly improves tracking performance by combining the advantages of a lightweight region proposal network predictive learning model with multi-view expert groups, and that it outperforms other state-of-the-art visual trackers.
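The abstract describes a pipeline in which several feature views are scored by per-view SVM experts and the best-scoring candidate box is then refined by a region proposal network. The following is a minimal, hypothetical Python sketch of that idea only; extract_view_features, the two example views, and the MultiViewExpertTracker class are illustrative assumptions rather than the authors' implementation, and the region-proposal refinement step is omitted.

    import numpy as np
    from sklearn.svm import LinearSVC  # linear SVM experts, one per feature view


    def extract_view_features(patch, view):
        # Toy per-view features (assumed for illustration): an intensity
        # histogram view and a vertical-gradient view over a 2-D patch.
        if view == "intensity":
            hist, _ = np.histogram(patch, bins=16, range=(0.0, 1.0), density=True)
            return hist
        return np.abs(np.diff(patch, axis=0)).mean(axis=0)


    class MultiViewExpertTracker:
        def __init__(self, views=("intensity", "gradient")):
            self.views = views
            self.experts = {v: LinearSVC() for v in views}

        def train(self, patches, labels):
            # Fit one SVM expert per view on candidate patches cropped from
            # expanded bounding boxes (labels: 1 = object, 0 = background).
            for v in self.views:
                X = np.stack([extract_view_features(p, v) for p in patches])
                self.experts[v].fit(X, labels)

        def fused_score(self, patch):
            # Average the decision values of all view experts for one patch.
            return float(np.mean([
                self.experts[v].decision_function(
                    extract_view_features(patch, v).reshape(1, -1))[0]
                for v in self.views
            ]))

        def track(self, candidate_patches, candidate_boxes):
            # Return the candidate box with the highest fused expert score;
            # the paper's RPN-style module would further refine this box.
            scores = [self.fused_score(p) for p in candidate_patches]
            return candidate_boxes[int(np.argmax(scores))]

In this sketch each expert sees only its own feature view, so adding or removing a view does not affect the others; the fused score is a simple mean of decision values, chosen only to keep the example short.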
Pages: 333-346
Page count: 14
Related papers
50 items in total
  • [31] Common and Unique Features Learning in Multi-view Network Embedding
    Shang, Yifan
    Ye, Xiucai
    Sakurai, Tetsuya
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023
  • [32] Multi-view Network Embedding with Structure and Semantic Contrastive Learning
    Shang, Yifan
    Ye, Xiucai
    Sakurai, Tetsuya
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 870 - 875
  • [33] MULTI-VIEW JOINT LEARNING NETWORK FOR PEDESTRIAN GENDER CLASSIFICATION
    Cai, Lei
    Zeng, Huanqiang
    Zhu, Jianqing
    Cao, Jiuwen
    Hou, Junhui
    Cai, Canhui
    2017 INTERNATIONAL SYMPOSIUM ON INTELLIGENT SIGNAL PROCESSING AND COMMUNICATION SYSTEMS (ISPACS 2017), 2017, : 23 - 27
  • [34] Multi-view Graph Neural Network for Fair Representation Learning
    Zhang, Guixian
    Yuan, Guan
    Cheng, Debo
    He, Ludan
    Bing, Rui
    Li, Jiuyong
    Zhang, Shichao
    WEB AND BIG DATA, APWEB-WAIM 2024, PT III, 2024, 14963 : 208 - 223
  • [35] Adversarial learning for multi-view network embedding on incomplete graphs
    Li, Chaozhuo
    Wang, Senzhang
    Yang, Dejian
    Yu, Philip S.
    Liang, Yanbo
    Li, Zhoujun
    KNOWLEDGE-BASED SYSTEMS, 2019, 180 : 91 - 103
  • [36] A NOVEL MULTI-VIEW LABELLING NETWORK BASED ON PAIRWISE LEARNING
    Zhang, Yue
    Caliskan, Akin
    Hilton, Adrian
    Guillemaut, Jean-Yves
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3682 - 3686
  • [37] Unsupervised Multi-view Learning
    Huang, Ling
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 6442 - 6443
  • [38] A review on multi-view learning
    Yu, Zhiwen
    Dong, Ziyang
    Yu, Chenchen
    Yang, Kaixiang
    Fan, Ziwei
    Chen, C. L. Philip
    FRONTIERS OF COMPUTER SCIENCE, 2025, 19 (07)
  • [39] Multi-scale Region Proposal Network Trained by Multi-domain Learning for Visual Object Tracking
    Fang, Yang
    Ko, Seunghyun
    Jo, Geun-Sik
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT III, 2017, 10636 : 330 - 339
  • [40] Efficient Multi-View Multi-Target Tracking Using a Distributed Camera Network
    He, Li
    Liu, Guoliang
    Tian, Guohui
    Zhang, Jianhua
    Ji, Ze
    IEEE SENSORS JOURNAL, 2020, 20 (04) : 2056 - 2063