Modeling accessibility of embodied agents for multi-modal dialogue in complex virtual worlds

Cited by: 0
Authors
Sampath, D [1]
Rickel, J [1]
Affiliation
[1] Univ So Calif, Inst Informat Sci, Marina Del Rey, CA 90292 USA
Source
INTELLIGENT VIRTUAL AGENTS, 2003, Vol. 2792
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Virtual humans are an important part of immersive virtual worlds, where they interact with human users in the roles of mentors, guides, teammates, companions, or adversaries. A good dialogue model is essential for achieving realistic interaction between humans and agents. Any such model requires modeling the accessibility of individuals, so that agents know which individuals are accessible for communication, by which modality (e.g., speech, gestures), and to what degree they can see or hear each other. This work presents a computational model of accessibility that is domain independent and capable of handling multiple individuals inhabiting a complex virtual world.
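To make the idea of per-modality accessibility concrete, the sketch below shows one hypothetical way such a model might assign a degree of accessibility (0 to 1) for speech and gesture between two individuals, based on distance, facing direction, and occlusion. It is an illustrative assumption, not the authors' implementation; all class names, ranges, and falloff functions are invented for the example.

# Hypothetical sketch of a per-modality accessibility model between two
# individuals in a virtual world. Names, thresholds, and falloff functions
# are illustrative assumptions, not the paper's actual model.
from dataclasses import dataclass
import math

@dataclass
class Individual:
    name: str
    position: tuple      # (x, y) world coordinates
    heading: float       # facing direction in radians

def distance(a: Individual, b: Individual) -> float:
    return math.dist(a.position, b.position)

def facing_factor(observer: Individual, target: Individual) -> float:
    # 1.0 when the observer faces the target directly, 0.0 when facing away.
    dx = target.position[0] - observer.position[0]
    dy = target.position[1] - observer.position[1]
    angle_to_target = math.atan2(dy, dx)
    offset = abs((angle_to_target - observer.heading + math.pi) % (2 * math.pi) - math.pi)
    return max(0.0, 1.0 - offset / math.pi)

def accessibility(a: Individual, b: Individual, occluded: bool = False) -> dict:
    # Degree (0..1) to which b is accessible to a, per modality.
    d = distance(a, b)
    # Assumed ranges: speech carries ~20 units, gestures are visible ~40 units.
    speech = max(0.0, 1.0 - d / 20.0)
    gesture = 0.0 if occluded else max(0.0, 1.0 - d / 40.0) * facing_factor(a, b)
    return {"speech": speech, "gesture": gesture}

# Example: the guide can hear the user faintly but cannot see their gestures.
guide = Individual("guide", (0.0, 0.0), heading=0.0)
user = Individual("user", (15.0, 0.0), heading=math.pi)
print(accessibility(guide, user, occluded=True))

An agent would consult such degrees before choosing how to address another individual, e.g. preferring speech when the hearer is audible but not visible.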
Pages: 119-126
Number of pages: 8
Related Papers
50 records in total
  • [31] A multi-modal haptic interface for virtual reality and robotics
    Folgheraiter, Michele
    Gini, Giuseppina
    Vercesi, Dario
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2008, 52 (3-4) : 465 - 488
  • [32] Multi-modal Preference Modeling for Product Search
    Guo, Yangyang
    Cheng, Zhiyong
    Nie, Liqiang
    Xu, Xin-Shun
    Kankanhalli, Mohan
    PROCEEDINGS OF THE 2018 ACM MULTIMEDIA CONFERENCE (MM'18), 2018, : 1865 - 1873
  • [33] A Multi-Modal Haptic Interface for Virtual Reality and Robotics
    Michele Folgheraiter
    Giuseppina Gini
    Dario Vercesi
    Journal of Intelligent and Robotic Systems, 2008, 52 : 465 - 488
  • [34] MULTI-MODAL EAR AND FACE MODELING AND RECOGNITION
    Mahoor, Mohammad H.
    Cadavid, Steven
    Abdel-Mottaleb, Mohamed
    2009 16TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-6, 2009, : 4137 - +
  • [35] Multi-modal Correlation Modeling and Ranking for Retrieval
    Zhang, Hong
    Meng, Fanlian
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2009, 2009, 5879 : 637 - 646
  • [36] Functionalized corroles as diatherapeutic, multi-modal imaging agents
    Pribisko, Melanie A.
    Lim, Punnajit
    Termini, John
    Grubbs, Robert H.
    Palmer, Joshua H.
    Gray, Harry B.
    ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY, 2013, 245
  • [37] Evaluating the equity implications of ridehailing through a multi-modal accessibility framework
    Abdelwahab, Bilal
    Palm, Matthew
    Shalaby, Amer
    Farber, Steven
    JOURNAL OF TRANSPORT GEOGRAPHY, 2021, 95
  • [38] Embodied reporting agents as an approach to creating narratives from live virtual worlds
    Tallyn, E
    Koleva, B
    Logan, B
    Fielding, D
    Benford, S
    Gelmini, G
    Madden, N
    VIRTUAL STORYTELLING: USING VIRTUAL REALITY TECHNOLOGIES FOR STORYTELLING, PROCEEDINGS, 2005, 3805 : 179 - 188
  • [39] Human-robot dialogue annotation for multi-modal common ground
    Bonial, Claire
    Lukin, Stephanie M.
    Abrams, Mitchell
    Baker, Anthony
    Donatelli, Lucia
    Foots, Ashley
    Hayes, Cory J.
    Henry, Cassidy
    Hudson, Taylor
    Marge, Matthew
    Pollard, Kimberly A.
    Artstein, Ron
    Traum, David
    Voss, Clare R.
LANGUAGE RESOURCES AND EVALUATION, 2024
  • [40] Scene-Aware Prompt for Multi-modal Dialogue Understanding and Generation
    Li, Bin
    Weng, Yixuan
    Ma, Ziyu
    Sun, Bin
    Li, Shutao
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, NLPCC 2022, PT II, 2022, 13552 : 179 - 191