Object detection and segmentation algorithm in complex dynamic scene

Cited by: 0
Authors
Xu B. [1 ]
Niu Y. [1 ]
Lyu J. [1 ]
Affiliations
[1] School of Instrumentation Science and Opto-Electronics Engineering, Beijing University of Aeronautics and Astronautics, Beijing
Source
Beijing University of Aeronautics and Astronautics (BUAA), 2016, Vol. 42 | Corresponding author: Niu, Yanxiong (niuyx@buaa.edu.cn)
Keywords
Dynamic scene; Image segmentation; Motion object; Saliency detection; Scale invariant feature transform (SIFT) Flow
DOI
10.13700/j.bh.1001-5965.2015.0113
Abstract
Under complex dynamic-scene conditions, it is difficult to detect and segment objects accurately in an image sequence. Based on the image characteristics of objects under such conditions, we propose an object detection and segmentation model for dynamic scenes that fuses scale invariant feature transform (SIFT) Flow features. By exploiting the motion information provided by SIFT Flow and combining it with the color and brightness information of the Commission Internationale de l'Eclairage (CIE) Lab color space, we establish a four-dimensional vector space. We apply an improved multi-scale center-surround contrast method to generate a saliency map for each channel, fuse the maps by linear superposition, and thereby establish a saliency object model for dynamic scenes in image sequences. Finally, the mean-shift clustering algorithm and morphological operations are used to achieve accurate object segmentation. Experimental results indicate that, in complex dynamic scenes and aerial video, the proposed method segments more complete object regions than traditional methods, and that it has good robustness and high segmentation accuracy. © 2016, Beijing University of Aeronautics and Astronautics (BUAA). All rights reserved.
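The core of the pipeline described above — per-channel multi-scale center-surround saliency over a four-channel feature stack (Lab color/brightness plus a motion channel from dense flow), followed by linear fusion — can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' implementation: the box-mean surround, the scale set, and the uniform channel weights are all assumptions, and the final mean-shift/morphology segmentation step is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def center_surround_saliency(channel, scales=(3, 7, 15)):
    """Multi-scale center-surround response for one feature channel.

    At each scale, the response is the absolute difference between the
    pixel value (center) and a local box mean (surround); responses are
    averaged across scales. A generic sketch of the center-surround
    idea, not the paper's exact formulation.
    """
    channel = channel.astype(np.float64)
    responses = [np.abs(channel - uniform_filter(channel, size=s))
                 for s in scales]
    return np.mean(responses, axis=0)

def fused_saliency(features, weights=None):
    """Linearly fuse per-channel saliency maps.

    features: (H, W, C) stack, e.g. L, a, b plus a motion-magnitude
    channel (in the paper, derived from SIFT Flow; any dense flow
    field would serve here).
    """
    H, W, C = features.shape
    weights = np.full(C, 1.0 / C) if weights is None else np.asarray(weights)
    maps = [center_surround_saliency(features[..., c]) for c in range(C)]
    # Normalize each map to [0, 1] before weighting so that no single
    # channel dominates the linear superposition.
    maps = [(m - m.min()) / (m.max() - m.min() + 1e-12) for m in maps]
    return sum(w * m for w, m in zip(weights, maps))

# Demo: a bright, moving square on a dark, static, noisy background.
rng = np.random.default_rng(0)
feats = rng.normal(0.0, 0.02, size=(64, 64, 4))
feats[24:40, 24:40, 0] += 1.0   # brightness (L) contrast of the object
feats[24:40, 24:40, 3] += 1.0   # motion-magnitude contrast of the object
sal = fused_saliency(feats)
obj = sal[24:40, 24:40].mean()  # mean saliency inside the object
bg = sal[:16, :16].mean()       # mean saliency in a background corner
print(obj > bg)                 # the object region should be more salient
```

In the full method, the fused map would then be thresholded and refined with mean-shift clustering and morphological operations to extract the object region.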
Pages: 310-317 (7 pages)