Multi-target occlusion tracking algorithm employing RGB-D spatio-temporal context model

Cited by: 0
Authors
Wan Q. [1,2]
Zhu X.-L. [4]
Xiao Y.-P. [1]
Sun J. [1]
Wang Y.-N. [2,3]
Yan J.-E. [1]
Yang J.-Y. [1]
Affiliations
[1] College of Electrical & Information Engineering, Hunan Institute of Engineering, Xiangtan
[2] National Engineering Research Laboratory for Robot Vision Perception and Control, Hunan University, Changsha
[3] College of Electrical and Information Engineering, Hunan University, Changsha
[4] College of Mathematics and Computing Science, Xiangtan University, Xiangtan
Funding
Natural Science Foundation of Hunan Province; National Natural Science Foundation of China
Keywords
Maximum a posteriori (MAP); Occlusion tracking; RGB-D; Spatio-temporal context; Temporal consistency;
DOI
10.7641/CTA.2021.00734
Abstract
To improve the accuracy of real-time RGB-D target occlusion tracking and to address model drift and tracking loss under multi-target occlusion, this paper proposes a multi-target occlusion tracking algorithm based on an RGB-D spatio-temporal context model. First, the multi-target detection and localization regions are obtained, and the target RGB-D spatio-temporal context model is established from the target temporal context model and the target spatial context model through spatio-temporal context feature extraction. Next, when the tracker evaluates the tracking state, the color and depth features are adaptively fused according to a temporal-consistency measure to determine the target position in the current frame. Finally, when the tracker detects multi-target occlusion, a depth probability is introduced and its information is used as a constraint, and a maximum a posteriori (MAP) correlation model resolves the occluded targets. Qualitative comparisons and quantitative results on the public Clothing Store dataset and the Princeton Tracking Benchmark show that the proposed algorithm delivers good occlusion tracking performance, better handles multi-target occlusion, and improves the accuracy and robustness of target occlusion tracking. © 2021, Editorial Department of Control Theory & Applications, South China University of Technology. All rights reserved.
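As a rough illustration of the tracking step summarized above, the sketch below follows the standard spatio-temporal context (STC) formulation that RGB-D STC trackers build on: a confidence map is computed per cue by correlating the learned context model with a Gaussian-weighted context prior in the Fourier domain, the color and depth maps are fused with a consistency weight, and the model is updated with a learning rate. The fusion weight `w_color`, the occlusion threshold, and all function names are illustrative assumptions; the paper derives its weight from temporal consistency and resolves occlusion with a depth-probability-constrained MAP correlation model, neither of which is reproduced here.

```python
import numpy as np

def stc_response(patch, h_stc, sigma):
    """Confidence map for one cue (color intensity or depth).

    Standard STC step: the context prior is the patch weighted by a
    Gaussian centered on the previous target position; multiplying its
    spectrum with that of the learned context model h_stc yields a
    response map whose peak marks the new target position.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    weight = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    prior = patch * weight
    return np.real(np.fft.ifft2(np.fft.fft2(h_stc) * np.fft.fft2(prior)))

def track_step(color_patch, depth_patch, h_color, h_depth,
               w_color=0.5, sigma=10.0, occ_thresh=0.4):
    """One tracking update with adaptive color/depth fusion.

    w_color stands in for the temporal-consistency weight computed
    online in the paper; the fixed-threshold peak test is only a crude
    proxy for its depth-probability occlusion reasoning.
    """
    r_color = stc_response(color_patch, h_color, sigma)
    r_depth = stc_response(depth_patch, h_depth, sigma)
    fused = w_color * r_color + (1.0 - w_color) * r_depth
    peak = float(fused.max())
    pos = np.unravel_index(np.argmax(fused), fused.shape)
    occluded = peak < occ_thresh  # hypothetical fixed threshold
    return pos, peak, occluded

def update_model(h_stc, h_new, rho=0.075):
    """Temporal update H_{t+1} = (1 - rho) * H_t + rho * h_t,
    the usual STC learning-rate rule."""
    return (1.0 - rho) * h_stc + rho * h_new
```

The sketch fixes only the data flow (per-cue response, weighted fusion, peak search, model update); replacing the fixed `w_color` and `occ_thresh` with quantities estimated from the tracking history is where the paper's contribution lies.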
Pages: 2019-2030
Number of pages: 11