Multi-modal embodied agents scripting

Times Cited: 0
Authors
Arafa, Y [1 ]
Mamdani, A [1 ]
Affiliations
[1] Univ London Imperial Coll Sci Technol & Med, Dept EEE, IIS, London SW7 2BT, England
Keywords
embodied agent; lifelike characters; MPEG-4; mark-up languages; automated animation scripting; CML; animated expression
DOI
10.1109/ICMI.2002.1167038
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Embodied agents present an ongoing and challenging agenda for research in multi-modal user interfaces and human-computer interaction. Such agent metaphors will only become widely applicable to online applications when there is a standardised way to map underlying engines to the visual presentation of the agents. This paper delineates the functions and specifications of a mark-up language for scripting the animation of virtual characters. The language, called Character Mark-up Language (CML), is an XML-based character attribute definition and animation scripting language designed to aid the rapid incorporation of lifelike characters/agents into online applications or virtual reality worlds. This multi-modal scripting language is designed to be easily understood by human animators and easily generated by a software process, such as software agents. CML is constructed jointly around the motion and multi-modal capabilities of virtual lifelike figures. The paper further illustrates the constructs of the language and describes a real-time execution architecture that demonstrates the use of such a language as a fourth-generation (4G) language to easily utilise and integrate MPEG-4 media objects in online interfaces and virtual environments.
Pages: 454-459
Number of Pages: 6
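
The abstract notes that CML scripts are intended to be easily generated by a software process such as a software agent. As a rough illustration of that idea only, the sketch below programmatically assembles a small XML animation script in Python; the tag and attribute names used here (script, character, expression, speak, gesture) are hypothetical placeholders, not the actual CML schema, which is defined in the paper itself.

```python
# A minimal sketch of a software process emitting an XML animation script,
# in the spirit of CML's goal of machine-generated character scripting.
# All element and attribute names below are illustrative assumptions,
# NOT the real CML vocabulary.
import xml.etree.ElementTree as ET

def build_script(character_name: str, utterance: str) -> str:
    """Assemble a hypothetical CML-like script for one character turn."""
    script = ET.Element("script")
    character = ET.SubElement(script, "character", name=character_name)
    # Couple the spoken utterance with a facial expression and a gesture,
    # mirroring the multi-modal annotation the abstract describes.
    ET.SubElement(character, "expression", type="smile", intensity="0.7")
    speak = ET.SubElement(character, "speak")
    speak.text = utterance
    ET.SubElement(character, "gesture", type="wave", align="start")
    return ET.tostring(script, encoding="unicode")

if __name__ == "__main__":
    print(build_script("guide", "Welcome to the virtual gallery."))
```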