Automatic Detection of Emotion Valence on Faces Using Consumer Depth Cameras

Cited by: 10
Authors
Savran, Arman [1 ]
Gur, Ruben [2 ]
Verma, Ragini [1 ]
Affiliations
[1] Univ Penn, Dept Radiol, Philadelphia, PA 19104 USA
[2] Univ Penn, Dept Psychiat, Philadelphia, PA 19104 USA
Source
2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW) | 2013
Keywords
FACIAL ACTION;
DOI
10.1109/ICCVW.2013.17
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Detection of positive and negative emotions can provide insight into a person's level of satisfaction, social responsiveness, and cues such as the need for help. Automatic perception of affect valence is therefore key to novel human-computer interaction applications. However, robust recognition with conventional 2D cameras is still not possible in realistic conditions, under large illumination and pose variations. While recent progress in 3D expression recognition has alleviated some of these challenges, the high complexity and cost of 3D acquisition systems render them impractical. In this paper, we present the first practical 3D expression recognition system based on cheap consumer depth cameras. Despite the low fidelity of the facial depth data, we show that recognition is possible with appropriate preprocessing and feature extraction. Our emotion detection method uses novel descriptors based on surface approximation and curvature estimation on point cloud data, and it is robust to noise and computationally efficient. Experiments show that using only the low-fidelity 3D data of consumer cameras, we achieve 77.4% accuracy in emotion valence detection. Fusing mean curvature features with luminance data boosts the accuracy to 89.4%.
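The abstract refers to descriptors built from surface approximation and curvature estimation on low-fidelity point clouds. As an illustration only, and not the authors' implementation, the Python sketch below estimates per-point mean curvature by least-squares fitting a local quadric patch to each point's neighbourhood; the function name mean_curvature, the neighbourhood size k, and the assumption that the depth axis is roughly the camera z-axis (plausible for frontal face captures) are choices made for this example.

```python
# Minimal sketch: per-point mean curvature on a depth-camera point cloud
# via local quadric fitting. Not the paper's actual descriptor pipeline.
import numpy as np
from scipy.spatial import cKDTree


def mean_curvature(points, k=30):
    """Estimate mean curvature at each point by least-squares fitting the
    quadric z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to its k nearest
    neighbours, with coordinates centred at the query point."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    H = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        local = points[nbrs] - points[i]            # centre at the query point
        x, y, z = local[:, 0], local[:, 1], local[:, 2]
        A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        coef, *_ = np.linalg.lstsq(A, z, rcond=None)
        a, b, c, d, e, _ = coef
        # Mean curvature of the Monge patch z(x, y) evaluated at the origin:
        # H = ((1+z_y^2) z_xx - 2 z_x z_y z_xy + (1+z_x^2) z_yy)
        #     / (2 (1 + z_x^2 + z_y^2)^(3/2))
        H[i] = ((1 + e * e) * 2 * a - 2 * d * e * b + (1 + d * d) * 2 * c) \
               / (2 * (1 + d * d + e * e) ** 1.5)
    return H


if __name__ == "__main__":
    # Toy check: a frontal cap of a unit sphere has |mean curvature| ~ 1.
    rng = np.random.default_rng(0)
    xy = rng.uniform(-0.5, 0.5, size=(3000, 2))
    xy = xy[np.linalg.norm(xy, axis=1) < 0.5]
    z = np.sqrt(1.0 - np.sum(xy ** 2, axis=1))
    pts = np.column_stack([xy, z])
    pts += 1e-4 * rng.normal(size=pts.shape)        # a little sensor-like noise
    print(np.median(np.abs(mean_curvature(pts))))   # close to 1.0
```

In practice one would fit in a local tangent frame (e.g. from PCA of the neighbourhood) rather than over the camera z-axis, which matters once pose varies; the simpler frontal assumption keeps the sketch short.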
Pages: 75-82
Number of pages: 8
Related Papers
50 records
  • [21] Automatic person detection and tracking using fuzzy controlled active cameras
    Bernardin, Keni
    de Camp, Florian Van
    Stiefelhagen, Rainer
    2007 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-8, 2007, : 3758 - +
  • [22] A priori patient-specific collision avoidance in radiotherapy using consumer grade depth cameras
    Cardan, Rex A.
    Popple, Richard A.
    Fiveash, John
    MEDICAL PHYSICS, 2017, 44 (07) : 3430 - 3436
  • [23] CALIBRATION OF DEPTH CAMERAS USING DENOISED DEPTH IMAGES
    Pahwa, Ramanpreet Singh
    Do, Minh N.
    Ng, Tian Tsong
    Hua, Binh-Son
    2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2014, : 3459 - 3463
  • [24] Detection of collaborative activity with Kinect depth cameras
    Sevrin, Loic
    Noury, Norbert
    Abouchi, Nacer
    Jumel, Fabrice
    Massot, Bertrand
    Saraydaryan, Jacques
    2016 38TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2016, : 5973 - 5976
  • [25] Automatic Piano Tutoring System Using Consumer-Level Depth Camera
    Rho, Seungmin
    Hwang, Jae-In
    Kim, Junho
    2014 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS (ICCE), 2014, : 5 - 6
  • [26] Automatic Calibration of Multiple Cameras and Depth Sensors with a Spherical Target
    Kuemmerle, Julius
    Kuehner, Tilman
    Lauer, Martin
    2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 5584 - 5591
  • [27] Automatic, fast, online calibration between depth and color cameras
    Mikhelson, Ilya V.
    Lee, Philip G.
    Sahakian, Alan V.
    Wu, Ying
    Katsaggelos, Aggelos K.
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2014, 25 (01) : 218 - 226
  • [28] Hand Recognition Using Depth Cameras
    Cardona Lopez, Alexander
    TECCIENCIA, 2015, 10 (19) : 73 - 80
  • [29] 3D scanning of cultural heritage with consumer depth cameras
    Cappelletto, Enrico
    Zanuttigh, Pietro
    Cortelazzo, Guido M.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2016, 75 (07) : 3631 - 3654
  • [30] Performance Comparison of LIDAR and Consumer Depth Cameras in Agricultural exploitations.
    Correa, C.
    Garrido, M.
    Moya, A.
    Valero, C.
    Barreiro, P.
    VII CONGRESO IBERICO DE AGROINGENIERIA Y CIENCIAS HORTICOLAS: INNOVAR Y PRODUCIR PARA EL FUTURO. INNOVATING AND PRODUCING FOR THE FUTURE, 2014, : 1617 - 1622