Fully automatic person segmentation in unconstrained video using spatio-temporal conditional random fields

Cited by: 6
Authors
Bhole, Chetan [1 ]
Pal, Christopher [2 ]
Affiliations
[1] Univ Rochester, Rochester, NY 14620 USA
[2] Univ Montreal, Montreal, PQ, Canada
Keywords
Person segmentation; Video segmentation; Conditional random field; Optical flow; Fully automatic; Pose estimation
DOI
10.1016/j.imavis.2016.04.007
Chinese Library Classification (CLC) Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The segmentation of objects, and of people in particular, is an important problem in computer vision. In this paper, we focus on automatically segmenting a person from challenging video sequences in which we place no constraint on camera viewpoint, camera motion, or the movements of the person in the scene. Our approach uses the most confident predictions from a pose detector as a form of anchor or keyframe stick-figure prediction that helps guide the segmentation of other, more challenging frames in the video. Since even state-of-the-art pose detectors are unreliable on many frames, especially as we impose no camera or motion constraints, only the pose or stick-figure predictions for the frames with the highest confidence in a localized temporal region anchor further processing. The stick-figure predictions within confident keyframes are used to extract color, position, and optical-flow features. Multiple conditional random fields (CRFs) are used to process blocks of video in batches, with a two-dimensional CRF providing detailed keyframe segmentation and 3D CRFs propagating segmentations to the entire sequence of frames belonging to each batch. Location information derived from the pose is also used to refine the results. Importantly, no hand-labeled training data is required by our method. We discuss the use of a continuity method that reuses learnt parameters between batches of frames and show how pose predictions can also be improved by our model. We provide an extensive evaluation of our approach, comparing it with a variety of alternative GrabCut-based methods and a prior state-of-the-art method. We also release our evaluation data to the community to facilitate further experiments. We find that our approach yields state-of-the-art qualitative and quantitative performance compared to prior work and more heuristic alternative approaches. (C) 2016 Elsevier B.V. All rights reserved.
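As a rough illustration of the batch pipeline described in the abstract, the sketch below shows how per-frame pose-detector confidences could be used to select one anchor keyframe per batch of frames; the feature extraction and 2D/3D CRF steps are indicated only in comments, since the paper's own code and API are not given here. All names in this fragment (select_keyframes, segment_video, batch_size) are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of keyframe-anchored batch processing:
# each batch is anchored on the frame whose stick-figure prediction the pose
# detector is most confident about; that keyframe would then seed a detailed
# 2D CRF segmentation, which a 3D CRF would propagate across the batch.
import numpy as np


def select_keyframes(pose_confidences: np.ndarray, batch_size: int) -> list:
    """Return the index of the most confident pose prediction in each batch."""
    n_frames = len(pose_confidences)
    keyframes = []
    for start in range(0, n_frames, batch_size):
        end = min(start + batch_size, n_frames)
        keyframes.append(start + int(np.argmax(pose_confidences[start:end])))
    return keyframes


def segment_video(pose_confidences: np.ndarray, batch_size: int = 30) -> None:
    starts = range(0, len(pose_confidences), batch_size)
    for start, keyframe in zip(starts, select_keyframes(pose_confidences, batch_size)):
        end = min(start + batch_size, len(pose_confidences))
        # Hypothetical steps, listed for orientation only:
        #  1. extract color, position, and optical-flow features from the
        #     stick-figure prediction in the keyframe,
        #  2. run a 2D CRF for a detailed keyframe segmentation,
        #  3. run a 3D CRF to propagate that segmentation to frames
        #     start..end-1, reusing learnt parameters from the previous batch.
        print(f"batch {start}-{end - 1}: anchored on keyframe {keyframe}")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    segment_video(rng.random(100), batch_size=30)
```

The per-batch argmax mirrors the abstract's idea that only the highest-confidence pose in a localized temporal region anchors further processing; how confidences are computed and how batches are sized are design choices the paper itself evaluates.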
Pages: 58-68
Number of pages: 11
Related Papers
50 records in total
  • [41] Efficient probabilistic spatio-temporal video object segmentation
    Ahmed, Rakib
    Karmakar, Gour C.
    Dooley, Laurence S.
    6TH IEEE/ACIS INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION SCIENCE, PROCEEDINGS, 2007, : 807 - +
  • [42] Spatio-temporal Attention Network for Video Instance Segmentation
    Liu, Xiaoyu
    Ren, Haibing
    Ye, Tingmeng
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 725 - 727
  • [43] Semantic spatio-temporal segmentation for extracting video objects
    Mao, JH
    Ma, KK
    IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA COMPUTING AND SYSTEMS, PROCEEDINGS VOL 1, 1999, : 738 - 743
  • [44] Morphological spatio-temporal simplification for video image segmentation
    Wang, DM
    Labit, C
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 1997, 11 (02) : 161 - 170
  • [45] A spatio-temporal video analysis system for object segmentation
    Xia, JH
    Wang, YL
    ISPA 2003: PROCEEDINGS OF THE 3RD INTERNATIONAL SYMPOSIUM ON IMAGE AND SIGNAL PROCESSING AND ANALYSIS, PTS 1 AND 2, 2003, : 812 - 815
  • [46] SPATIO-TEMPORAL HUMAN MOTION ESTIMATION USING DYNAMIC CONDITIONAL RANDOM FIELD
    Ardiyanto, Igi
    INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL, 2018, 14 (02): : 747 - 755
  • [47] Spatio-temporal segmentation with edge relaxation and optimization using fully parallel methods
    Szirányi, T
    Czúni, L
    15TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOL 4, PROCEEDINGS: APPLICATIONS, ROBOTICS SYSTEMS AND ARCHITECTURES, 2000, : 820 - 823
  • [48] Capturing the spatio-temporal continuity for video semantic segmentation
    Chen, Xin
    Wu, Aming
    Han, Yahong
    IET IMAGE PROCESSING, 2019, 13 (14) : 2813 - 2820
  • [50] Guest Editorial: Spatio-temporal Feature Learning for Unconstrained Video Analysis
    Han, Yahong
    Nie, Liqiang
    Wu, Fei
    MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (22) : 29209 - 29211