Motion segment decomposition of RGB-D sequences for human behavior understanding

Cited by: 40
Authors
Devanne, Maxime [1,2]
Berretti, Stefano [2 ]
Pala, Pietro [2 ]
Wannous, Hazem [1 ]
Daoudi, Mohamed [1 ]
Del Bimbo, Alberto [2 ]
Affiliations
[1] Univ Lille, Telecom Lille, CNRS, UMR 9189, CRIStAL, F-59000 Lille, France
[2] Univ Florence, MICC, Florence, Italy
Keywords
3D human behavior understanding; Temporal modeling; Shape space analysis; Online activity detection; Action recognition; Dictionary
DOI
10.1016/j.patcog.2016.07.041
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we propose a framework for analyzing and understanding human behavior from depth videos. The proposed solution first employs shape analysis of the human pose across time to decompose the full motion into short temporal segments representing elementary motions. Then, each segment is characterized by human motion and depth appearance around hand joints to describe the change in pose of the body and the interaction with objects. Finally, the sequence of temporal segments is modeled through a Dynamic Naive Bayes classifier, which captures the dynamics of elementary motions characterizing human behavior. Experiments on four challenging datasets evaluate the potential of the proposed approach in different contexts, including gesture or activity recognition and online activity detection. Competitive results in comparison with state-of-the-art methods are reported. (C) 2016 Elsevier Ltd. All rights reserved.
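The abstract outlines a three-stage pipeline: decompose the skeleton stream into elementary motion segments, describe each segment, and model the segment sequence with a Dynamic Naive Bayes classifier. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: it segments at local minima of inter-frame pose change, uses a toy per-segment descriptor, and scores the segment sequence with a Gaussian Dynamic Naive Bayes model via the forward algorithm. All function names, thresholds, and parameters are assumptions introduced here for illustration.

import numpy as np

def segment_motion(poses, min_len=5):
    """Split a (T, J, 3) joint-position sequence at local minima of motion energy (illustrative heuristic)."""
    energy = np.linalg.norm(np.diff(poses, axis=0), axis=(1, 2))  # per-frame pose change
    cuts = [0]
    for t in range(1, len(energy) - 1):
        if energy[t] < energy[t - 1] and energy[t] <= energy[t + 1] and t - cuts[-1] >= min_len:
            cuts.append(t)
    cuts.append(len(poses))
    return [poses[a:b] for a, b in zip(cuts[:-1], cuts[1:])]

def segment_descriptor(segment):
    """Toy descriptor: mean pose plus start-to-end displacement, flattened into one vector."""
    mean_pose = segment.mean(axis=0)
    displacement = segment[-1] - segment[0]
    return np.concatenate([mean_pose.ravel(), displacement.ravel()])

class DynamicNaiveBayes:
    """Minimal Dynamic Naive Bayes: per-state independent Gaussian emissions plus Markov transitions."""
    def __init__(self, means, variances, trans, prior):
        self.means, self.vars, self.trans, self.prior = means, variances, trans, prior

    def log_likelihood(self, descriptors):
        """Forward algorithm in log space over a sequence of segment descriptors."""
        def emit(x):  # naive (dimension-wise independent) Gaussian log-density per hidden state
            d = (x - self.means) ** 2 / (2 * self.vars)
            return -(d + 0.5 * np.log(2 * np.pi * self.vars)).sum(axis=1)
        alpha = np.log(self.prior) + emit(descriptors[0])
        for x in descriptors[1:]:
            alpha = emit(x) + np.logaddexp.reduce(alpha[:, None] + np.log(self.trans), axis=0)
        return np.logaddexp.reduce(alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    poses = rng.normal(size=(120, 20, 3)).cumsum(axis=0) * 0.01   # synthetic skeleton stream
    descs = [segment_descriptor(s) for s in segment_motion(poses)]
    D, K = len(descs[0]), 3                                       # descriptor dimension, hidden states
    model = DynamicNaiveBayes(means=rng.normal(size=(K, D)),
                              variances=np.ones((K, D)),
                              trans=np.full((K, K), 1.0 / K),
                              prior=np.full(K, 1.0 / K))
    print("log-likelihood:", model.log_likelihood(descs))

In a classification setting of this kind, one such model would typically be trained per behavior class and a new sequence assigned to the class whose model yields the highest likelihood.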
Pages: 222-233
Number of pages: 12
Related papers
50 records in total
  • [41] FALL DETECTION IN RGB-D VIDEOS BY COMBINING SHAPE AND MOTION FEATURES
    Kumar, Durga Priya
    Yun, Yixiao
    Gu, Irene Yu-Hua
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 1337 - 1341
  • [42] Improving RGB-D SLAM in dynamic environments: A motion removal approach
    Sun, Yuxiang
    Liu, Ming
    Meng, Max Q. -H.
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2017, 89 : 110 - 122
  • [43] Motion Detection Based on RGB-D Data and Scene Flow Clustering
    Xiang, Xuezhi
    Xu, Wangwang
    Bai, Erwei
    Yan, Zike
    Zhang, Lei
    PROCEEDINGS OF THE 2016 12TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2016, : 814 - 817
  • [44] Motion Recovery Using the Image Interpolation Algorithm and an RGB-D Camera
    Wang, Jiefei
    Garratt, Matthew
    Li, Ping
    Anavatti, Sreenatha
    2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS IEEE-ROBIO 2014, 2014, : 683 - 688
  • [45] LMA based Emotional Motion Representation using RGB-D Camera
    Kim, Woo Hyun
    Park, Jeong Woo
    Lee, Won Hyong
    Chung, Myung Jin
    Lee, Hui Sung
    PROCEEDINGS OF THE 8TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2013), 2013, : 163 - +
  • [46] Deep understanding of shopper behaviours and interactions using RGB-D vision
    Paolanti, Marina
    Pietrini, Rocco
    Mancini, Adriano
    Frontoni, Emanuele
    Zingaretti, Primo
    MACHINE VISION AND APPLICATIONS, 2020, 31 (7-8) : 7 - 8
  • [47] A Multi-purpose RGB-D Dataset for Understanding Everyday Objects
    Akizuki, Shuichi
    Hashimoto, Manabu
    PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS, VOL 5: VISAPP, 2020, : 470 - 475
  • [48] Deep understanding of shopper behaviours and interactions using RGB-D vision
    Paolanti, Marina
    Pietrini, Rocco
    Mancini, Adriano
    Frontoni, Emanuele
    Zingaretti, Primo
    MACHINE VISION AND APPLICATIONS, 2020, 31
  • [49] Introduction to the special issue on visual understanding and applications with RGB-D cameras
    Liu, Zicheng
    Beetz, Michael
    Cremers, Daniel
    Gall, Juergen
    Li, Wanqing
    Pangercic, Dejan
    Sturm, Juergen
    Tai, Yu-Wing
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2014, 25 (01) : 1 - 1
  • [50] Joint Task-Recursive Learning for RGB-D Scene Understanding
    Zhang, Zhenyu
    Cui, Zhen
    Xu, Chunyan
    Jie, Zequn
    Li, Xiang
    Yang, Jian
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2020, 42 (10) : 2608 - 2623