Emotional facial expression classification for multimodal user interfaces

Cited by: 0
Authors
Cerezo, Eva [1 ]
Hupont, Isabelle [1 ]
Affiliations
[1] Univ Zaragoza, Dept Informat & Ingn Sistemas, Zaragoza 50018, Spain
Keywords
facial expression; multimodal interface;
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present a simple and computationally feasible method to perform automatic emotional classification of facial expressions. We propose the use of 10 characteristic points (that are part of the MPEG-4 feature points) to extract relevant emotional information (basically five distances, the presence of wrinkles, and mouth shape). The method defines and detects the six basic emotions (plus the neutral one) in terms of this information and has been fine-tuned with a database of 399 images. At present, the method is applied to static images; application to sequences is now being developed. The extraction of such information about the user is of great interest for the development of new multimodal user interfaces.
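The abstract describes a rule-based classification over five distances derived from MPEG-4 facial feature points, together with wrinkle presence and mouth shape. The following minimal Python sketch illustrates that kind of distance-plus-rule scoring; the feature names, thresholds, and voting rules are illustrative assumptions for exposition only, not the values the authors fine-tuned on their 399-image database.
```python
# Illustrative sketch only: rule-based emotion scoring in the spirit of the
# abstract (distances between MPEG-4 feature points, wrinkle presence, mouth
# shape). All thresholds and rules below are hypothetical placeholders.
from dataclasses import dataclass

EMOTIONS = ["joy", "sadness", "anger", "fear", "disgust", "surprise", "neutral"]

@dataclass
class FacialFeatures:
    # Five distances derived from 10 MPEG-4 feature points, assumed to be
    # normalised (e.g. by face width) so they are scale-invariant.
    eyebrow_to_eye: float        # D1: inner eyebrow corner to eye centre
    eye_opening: float           # D2: upper to lower eyelid
    mouth_opening: float         # D3: upper to lower lip
    mouth_width: float           # D4: left to right mouth corner
    mouth_corner_height: float   # D5: mouth corner vs. lip centre (signed)
    has_nasal_wrinkles: bool     # wrinkle-presence cue
    mouth_curved_up: bool        # coarse mouth-shape cue

def classify(f: FacialFeatures) -> str:
    """Score each basic emotion with hand-written rules; weak evidence
    falls back to 'neutral'. Rules and thresholds are illustrative."""
    scores = {e: 0 for e in EMOTIONS}

    if f.mouth_corner_height > 0.1 and f.mouth_curved_up:
        scores["joy"] += 2
    if f.mouth_width > 1.2:
        scores["joy"] += 1
    if f.eyebrow_to_eye > 1.3 and f.eye_opening > 1.3 and f.mouth_opening > 1.5:
        scores["surprise"] += 3
    if f.eyebrow_to_eye < 0.8 and f.has_nasal_wrinkles:
        scores["anger"] += 2
    if f.mouth_corner_height < -0.1 and f.eye_opening < 0.9:
        scores["sadness"] += 2
    if f.has_nasal_wrinkles and f.mouth_opening < 0.8:
        scores["disgust"] += 2
    if f.eye_opening > 1.2 and f.eyebrow_to_eye > 1.1 and f.mouth_opening > 1.0:
        scores["fear"] += 2

    best, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best if best_score >= 2 else "neutral"

if __name__ == "__main__":
    example = FacialFeatures(
        eyebrow_to_eye=1.4, eye_opening=1.4, mouth_opening=1.7,
        mouth_width=1.0, mouth_corner_height=0.0,
        has_nasal_wrinkles=False, mouth_curved_up=False,
    )
    print(classify(example))  # -> "surprise" with these illustrative thresholds
```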
Pages: 405-413
Number of pages: 9
Related Papers
50 records in total
  • [31] A LANGUAGE PERSPECTIVE ON THE DEVELOPMENT OF PLASTIC MULTIMODAL USER INTERFACES
    Sottet, Jean-Sebastien
    Calvary, Gaelle
    Coutaz, Joelle
    Favre, Jean-Marie
    Vanderdonckt, Jean
    Stanciulescu, Adrian
    Lepreux, Sophie
    JOURNAL ON MULTIMODAL USER INTERFACES, 2007, 1 (02) : 1 - 12
  • [32] A model for the implicit satisfaction of IHM multimodal user interfaces
    Kamel, Nadjet
    Selouani, Sid-Ahmed
    Hamam, Habib
    2008 CANADIAN CONFERENCE ON ELECTRICAL AND COMPUTER ENGINEERING, VOLS 1-4, 2008, : 267 - 270
  • [33] Integrative rapid-prototyping for multimodal user interfaces
    Schuller, B.W.
    Lang, M.K.
    VDI Berichte, 2002, (1678): 279 - 284
  • [34] A novel dialog model for the design of multimodal user interfaces
    Schaefer, R
    Bleul, S
    Mueller, W
    ENGINEERING HUMAN COMPUTER INTERACTION AND INTERACTIVE SYSTEMS, 2005, 3425 : 221 - 223
  • [35] Cross-disciplinary approaches to multimodal user interfaces
    Volpe, Gualtiero
    Camurri, Antonio
    Dutoit, Thierry
    Mancini, Maurizio
    JOURNAL ON MULTIMODAL USER INTERFACES, 2010, 4 (01) : 1 - 2
  • [36] Development of voice-based multimodal user interfaces
    Sena, Claudia Pinto P.
    Santos, Celso A. S.
    SIGMAP 2006: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND MULTIMEDIA APPLICATIONS, 2006, : 310 - +
  • [37] Integrative rapid-prototyping for multimodal user interfaces
    Schuller, BW
    Lang, MK
    USEWARE 2002, 2002, 1678 : 279 - 284
  • [38] Multimodal learning for facial expression recognition
    Zhang, Wei
    Zhang, Youmei
    Ma, Lin
    Guan, Jingwei
    Gong, Shijie
    PATTERN RECOGNITION, 2015, 48 (10) : 3191 - 3202
  • [39] Multimodal Emotional Classification Based on Meaningful Learning
    Filali, Hajar
    Riffi, Jamal
    Boulealam, Chafik
    Mahraz, Mohamed Adnane
    Tairi, Hamid
    BIG DATA AND COGNITIVE COMPUTING, 2022, 6 (03)
  • [40] Virtual characters as emotional interaction element in the user interfaces
    Ortiz, Amalia
    Oyarzun, David
    Carretero, Maria del Puy
    Garay-Vitoria, Nestor
    ARTICULATED MOTION AND DEFORMABLE OBJECTS, PROCEEDINGS, 2006, 4069 : 234 - 243