Automatic Detection of Emotion Valence on Faces Using Consumer Depth Cameras

Cited by: 10
Authors
Savran, Arman [1 ]
Gur, Ruben [2 ]
Verma, Ragini [1 ]
Affiliations
[1] Univ Penn, Dept Radiol, Philadelphia, PA 19104 USA
[2] Univ Penn, Dept Psychiat, Philadelphia, PA 19104 USA
Source
2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2013
Keywords
FACIAL ACTION;
DOI
10.1109/ICCVW.2013.17
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Detection of positive and negative emotions can provide insight into a person's level of satisfaction and social responsiveness, as well as clues such as the need for help. Automatic perception of affect valence is therefore key to novel human-computer interaction applications. However, robust recognition with conventional 2D cameras is still not possible in realistic conditions, under large illumination and pose variations. While recent progress in 3D expression recognition has alleviated some of these challenges, the high complexity and cost of these 3D systems renders them impractical. In this paper, we present the first practical 3D expression recognition system using inexpensive consumer depth cameras. Despite the low fidelity of the facial depth data, we show that recognition is possible with appropriate preprocessing and feature extraction. Our emotion detection method uses novel descriptors based on surface approximation and curvature estimation on point cloud data, and is robust to noise and computationally efficient. Experiments show that using only the low-fidelity 3D data of consumer cameras, we achieve 77.4% accuracy in emotion valence detection. Fusing mean curvature features with luminance data boosts the accuracy to 89.4%.
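The curvature-based descriptors described above can be illustrated with a minimal sketch: estimating mean curvature at a point of a depth-camera point cloud by least-squares fitting of a local quadric surface. This is a generic surrogate for the paper's descriptor pipeline, not its exact method; the neighbourhood size `k` and the quadric model are assumptions for illustration.

```python
import numpy as np

def mean_curvature(points, center, k=20):
    """Estimate mean curvature at `center` from an (N, 3) point cloud by
    fitting the quadric z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to the
    k nearest neighbours (a common point-cloud curvature surrogate)."""
    d2 = np.sum((points - center) ** 2, axis=1)
    nbrs = points[np.argsort(d2)[:k]] - center  # shift into a local frame
    x, y, z = nbrs[:, 0], nbrs[:, 1], nbrs[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    # Mean curvature of the Monge patch z(x, y) evaluated at the origin
    return (a * (1 + e**2) - b * d * e + c * (1 + d**2)) / (1 + d**2 + e**2) ** 1.5

# Sanity check on synthetic data: points sampled near the top of a
# sphere of radius 2, where |H| should be close to 1/R = 0.5.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.3, 0.3, size=(200, 2))
pts = np.column_stack([xy, np.sqrt(4.0 - np.sum(xy**2, axis=1))])
H = mean_curvature(pts, np.array([0.0, 0.0, 2.0]), k=50)
```

On noisy consumer-depth data, the least-squares fit itself provides smoothing, which is one plausible reason surface-approximation descriptors are robust to sensor noise.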
Pages: 75 - 82
Page count: 8
Related Papers
50 records in total
  • [31] Simultaneous Localization and Calibration: Self-Calibration of Consumer Depth Cameras
    Zhou, Qian-Yi
    Koltun, Vladlen
    2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2014, : 454 - 460
  • [32] Towards Automatic Detection of Monkey Faces
    Zhang, Manning
    Guo, Susu
    Xie, Xiaohua
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 2564 - 2569
  • [33] Automatic Detection of Temples in consumer Images using histogram of Gradient
    Solanki, Madhav Singh
    Goswami, Laxmi
    Sharma, Kanta Prasad
    Sikka, Rishi
    PROCEEDINGS OF 2019 INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND KNOWLEDGE ECONOMY (ICCIKE' 2019), 2019, : 104 - 108
  • [34] 3D scanning of cultural heritage with consumer depth cameras
    Enrico Cappelletto
    Pietro Zanuttigh
    Guido M. Cortelazzo
    Multimedia Tools and Applications, 2016, 75 : 3631 - 3654
  • [35] Rapid Creation of Photorealistic Virtual Reality Content with Consumer Depth Cameras
    Chen, Chih-Fan
    Bolas, Mark
    Rosenberg, Evan Suma
    2017 IEEE VIRTUAL REALITY (VR), 2017, : 473 - 474
  • [36] Facial feature locating using active appearance models with contour constraints from consumer depth cameras
    Wang, Qingxiang
    Ren, Xiaoqiang
    Journal of Theoretical and Applied Information Technology, 2012, 45 (02) : 593 - 597
  • [37] Predicting consumer behavior with two emotion appraisal dimensions: Emotion valence and agency in gift giving
    de Hooge, Ilona E.
    INTERNATIONAL JOURNAL OF RESEARCH IN MARKETING, 2014, 31 (04) : 380 - 394
  • [38] On the automatic detection and monitoring of Leaves and Grapes using in-field optical cameras
    Blanco, Giacomo
    Oldani, Federico
    Salza, Dario
    Rossi, Claudio
    PROCEEDINGS OF 2023 IEEE INTERNATIONAL WORKSHOP ON METROLOGY FOR AGRICULTURE AND FORESTRY, METROAGRIFOR, 2023, : 704 - 709
  • [39] Automatic Detection and Classification of Road Lane Markings Using Onboard Vehicular Cameras
    de Paula, Mauricio Braga
    Jung, Claudio Rosito
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2015, 16 (06) : 3160 - 3169
  • [40] Change detection methods for automatic scene analysis by using mobile surveillance cameras
    Marcenaro, L
    Oberti, F
    Regazzoni, CS
    2000 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOL I, PROCEEDINGS, 2000, : 244 - 247