Contextual factors and adaptative multimodal human-computer interaction: Multi-level specification of emotion and expressivity in Embodied Conversational Agents

Cited: 0
Authors:
Lamolle, M
Mancini, M
Pelachaud, C
Abrilian, S
Martin, JC
Devillers, L
Affiliations:
[1] Univ Paris 08, IUT Montreuil, LINC, F-93100 Montreuil, France
[2] CNRS, LIMSI, F-91403 Orsay, France
Source:
Keywords:
DOI: none available
Chinese Library Classification:
TP18 [Artificial Intelligence Theory]
Discipline codes:
081104; 0812; 0835; 1405
Abstract:
In this paper we present an Embodied Conversational Agent (ECA) model able to display rich verbal and non-verbal behaviors. The selection of these behaviors should depend not only on factors related to the agent's individuality, such as her culture, her social and professional role, and her personality, but also on a set of contextual variables (such as her interlocutor and the social setting of the conversation) and on other dynamic variables (beliefs, goals, emotions). We describe the representation scheme and the computational model of behavior expressivity of the Expressive Agent System that we have developed. We explain how the multi-level annotation of a corpus of emotionally rich TV video interviews can provide context-dependent knowledge as input for the specification of the ECA (e.g. which contextual cues and levels of representation are required for the proper recognition of emotions).
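To make the idea of context-dependent behavior selection concrete, here is a minimal Python sketch, not the authors' actual system. The six expressivity dimensions mirror those commonly described in the expressive-agent literature around this line of work (overall activation, spatial extent, temporal extent, fluidity, power, repetition); the AgentContext fields and the rules in select_expressivity are hypothetical illustrations of how individual, contextual, and dynamic variables could modulate expressivity.

from dataclasses import dataclass

@dataclass
class Expressivity:
    # Dimensions as described in the expressive-agent literature.
    overall_activation: float = 0.5  # overall quantity of nonverbal behavior
    spatial_extent: float = 0.5      # amplitude of movements
    temporal_extent: float = 0.5     # speed of movements
    fluidity: float = 0.5            # smoothness and continuity of movement
    power: float = 0.5               # dynamic strength/tension of movement
    repetition: float = 0.2          # rhythmic repetition of a gesture

@dataclass
class AgentContext:
    # Hypothetical grouping of the variable kinds named in the abstract.
    culture: str          # individuality
    role: str             # individuality
    personality: str      # individuality
    interlocutor: str     # contextual
    setting: str          # contextual, e.g. "formal" or "informal"
    emotion: str          # dynamic

def select_expressivity(ctx: AgentContext) -> Expressivity:
    # Hypothetical mapping: a formal setting damps expressivity,
    # while a high-arousal emotion raises power and speed.
    e = Expressivity()
    if ctx.setting == "formal":
        e.overall_activation -= 0.3
        e.spatial_extent -= 0.3
    if ctx.emotion in ("anger", "joy"):
        e.power += 0.3
        e.temporal_extent += 0.2
    return e

print(select_expressivity(AgentContext(
    culture="FR", role="interviewer", personality="extravert",
    interlocutor="guest", setting="formal", emotion="joy")))

In a real system such values would drive gesture and facial-animation synthesis; here they serve only to show how multi-level inputs could be combined into one expressivity profile.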
Pages: 225-239
Page count: 15
Related papers:
12 records in total
  • [1] Enhancing human-computer interaction with embodied conversational agents
    Foster, Mary Ellen
    Universal Access in Human-Computer Interaction: Ambient Interaction, Pt 2, Proceedings, 2007, 4555 : 828 - 837
  • [2] On the Design of and Interaction with Conversational Agents: An Organizing and Assessing Review of Human-Computer Interaction Research
    Diederich, Stephan
    Brendel, Alfred Benedikt
    Morana, Stefan
    Kolbe, Lutz
    JOURNAL OF THE ASSOCIATION FOR INFORMATION SYSTEMS, 2022, 23 (01): 96 - 138
  • [3] Multi-level feature optimization and multimodal contextual fusion for sentiment analysis and emotion classification
    Huddar, Mahesh G.
    Sannakki, Sanjeev S.
    Rajpurohit, Vijay S.
    COMPUTATIONAL INTELLIGENCE, 2020, 36 (02) : 861 - 881
  • [4] MULTI-PLATFORM INTELLIGENT SYSTEM FOR MULTIMODAL HUMAN-COMPUTER INTERACTION
    Jarosz, Mateusz
    Nawrocki, Piotr
    Sniezynski, Bartlomiej
    Indurkhya, Bipin
    COMPUTING AND INFORMATICS, 2021, 40 (01) : 83 - 103
  • [5] Joint Multi-cue Learning for Emotion Recognition in Human-Computer Interaction
    Zhang, Feixiang
    Sun, Xiao
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT XI, 2025, 15041 : 399 - 411
  • [6] MULTI-LEVEL REPRESENTATION OF GESTURE AS COMMAND FOR HUMAN COMPUTER INTERACTION
    Vatavu, Radu-Daniel
    Pentiuc, Stefan-Gheorghe
    COMPUTING AND INFORMATICS, 2008, 27 (06) : 837 - 851
  • [7] Multi-physiological signal fusion for objective emotion recognition in educational human-computer interaction
    Wu, Wanmeng
    Zuo, Enling
    Zhang, Weiya
    Meng, Xiangjie
    FRONTIERS IN PUBLIC HEALTH, 2024, 12
  • [8] Experimental Study on Appropriate Reality of Agents as a Multi-modal Interface for Human-Computer Interaction
    Tanaka, Kaori
    Matsui, Tatsunori
    Kojima, Kazuaki
    HUMAN-COMPUTER INTERACTION: INTERACTION TECHNIQUES AND ENVIRONMENTS, PT II, 2011, 6762 : 613 - 622
  • [9] Emotion recognition for human-computer interaction using high-level descriptors (vol 14, 12122, 2024)
    Singla, Chaitanya
    Singh, Sukhdev
    Sharma, Preeti
    Mittal, Nitin
    Gared, Fikreselam
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [10] Multi-level context extraction and attention-based contextual inter-modal fusion for multimodal sentiment analysis and emotion classification
    Huddar, Mahesh G.
    Sannakki, Sanjeev S.
    Rajpurohit, Vijay S.
    International Journal of Multimedia Information Retrieval, 2020, 9 : 103 - 112