Emotion recognition through facial expression analysis based on a neurofuzzy network

Cited by: 156
Authors
Ioannou, SV [1 ]
Raouzaiou, AT [1 ]
Tzouvaras, VA [1 ]
Mailis, TP [1 ]
Karpouzis, KC [1 ]
Kollias, SD [1 ]
Affiliation
[1] Natl Tech Univ Athens, Sch Elect & Comp Engn, Image Video & Multimedia Syst Lab, GR-15773 Zografos, Greece
Keywords
facial expression analysis; MPEG-4 facial animation parameters; activation-evaluation emotion representation; neurofuzzy network; rule extraction; adaptation
DOI
10.1016/j.neunet.2005.03.004
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Extracting and validating emotional cues through analysis of users' facial expressions is of high importance for improving the level of interaction in man-machine communication systems. Extraction of appropriate facial features and consequent recognition of the user's emotional state, robust to facial expression variations among different users, is the topic of this paper. Facial animation parameters (FAPs) defined according to the ISO MPEG-4 standard are extracted by a robust facial analysis system, accompanied by appropriate confidence measures of the estimation accuracy. A novel neurofuzzy system is then created, based on rules that have been defined through analysis of FAP variations both in the discrete emotional space and in the 2D continuous activation-evaluation one. The neurofuzzy system allows for further learning and adaptation to specific users' facial expression characteristics, measured through FAP estimation in real-life application of the system, using analysis by clustering of the obtained FAP values. Experimental studies with emotionally expressive datasets generated in the EC IST ERMIS project indicate the good performance and potential of the developed technologies. (c) 2005 Elsevier Ltd. All rights reserved.
Pages: 423-435
Number of pages: 13
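
The abstract above describes a fuzzy rule layer that maps MPEG-4 FAP measurements, each paired with a confidence measure of its estimation accuracy, to emotion activations. The Python snippet below is a minimal illustrative sketch of that general idea, not the authors' implementation; the membership functions, rule definitions, FAP indices, and numeric ranges are all hypothetical.

    # Illustrative sketch only: fuzzy rules over confidence-weighted FAP values.
    # All FAP indices, ranges, and rules below are hypothetical examples.
    import numpy as np

    def triangular(x, a, b, c):
        """Triangular fuzzy membership on [a, c], peaking at b."""
        return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                     (c - x) / (c - b + 1e-9)), 0.0)

    # Each rule lists (fap_index, (a, b, c)) antecedents, combined with fuzzy AND (min).
    RULES = {
        "joy":     [(12, (20, 60, 100)), (59, (10, 40, 80))],    # e.g. lip-corner raise
        "sadness": [(12, (-100, -50, 0)), (31, (-60, -30, 0))],  # e.g. lip-corner drop
    }

    def evaluate(faps, confidences):
        """Return a fuzzy activation per emotion, weighting each antecedent
        membership degree by the confidence of the corresponding FAP estimate."""
        activations = {}
        for emotion, antecedents in RULES.items():
            degrees = [confidences[i] * triangular(faps[i], *params)
                       for i, params in antecedents]
            activations[emotion] = float(min(degrees))  # fuzzy AND over antecedents
        return activations

    faps = {12: 55.0, 59: 35.0, 31: -5.0}   # hypothetical FAP measurements
    conf = {12: 0.9, 59: 0.7, 31: 0.8}      # hypothetical confidence measures
    print(evaluate(faps, conf))

In a neurofuzzy setting such as the one described, the membership-function parameters and rule weights would additionally be tuned from data (and adapted per user via clustering of observed FAP values), rather than fixed by hand as in this sketch.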