Multi-modal embodied agents scripting

Cited by: 0
|
Authors
Arafa, Y [1 ]
Mamdani, A [1 ]
Affiliation
[1] Univ London Imperial Coll Sci Technol & Med, Dept EEE, IIS, London SW7 2BT, England
Source
FOURTH IEEE INTERNATIONAL CONFERENCE ON MULTIMODAL INTERFACES, PROCEEDINGS | 2002
Keywords
embodied agent; lifelike characters; MPEG-4; mark-up languages; automated animation scripting; CML; animated expression;
DOI
10.1109/ICMI.2002.1167038
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Embodied agents present an ongoing and challenging agenda for research in multi-modal user interfaces and human-computer interaction. Such agent metaphors will only be widely applicable to online applications when there is a standardised way to map underlying engines to the visual presentation of the agents. This paper delineates the functions and specifications of a mark-up language for scripting the animation of virtual characters. The language, Character Mark-up Language (CML), is an XML-based character attribute definition and animation scripting language designed to aid the rapid incorporation of lifelike characters/agents into online applications or virtual-reality worlds. This multi-modal scripting language is designed to be easily understood by human animators and easily generated by a software process such as software agents. CML is constructed jointly on the motion and multi-modal capabilities of virtual lifelike figures. The paper further illustrates the constructs of the language and describes a real-time execution architecture that demonstrates the use of such a language as a 4G language to easily utilise and integrate MPEG-4 media objects into online interfaces and virtual environments.
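To make the abstract's idea concrete, the sketch below composes a small XML fragment in the spirit of a character-scripting language such as CML: a character performs an utterance with a synchronised gesture and facial expression, generated programmatically rather than hand-authored. The tag and attribute names (`cml`, `character`, `act`, `speak`, `gesture`, `expression`) are hypothetical illustrations, not the actual CML specification from the paper.

```python
# Illustrative only: element and attribute names below are hypothetical,
# not the CML specification defined by Arafa & Mamdani.
import xml.etree.ElementTree as ET

def build_script(character, utterance, gesture, expression):
    """Compose a minimal CML-style scene: one character speaking,
    with a gesture and a facial expression attached to the act."""
    cml = ET.Element("cml")
    char = ET.SubElement(cml, "character", name=character)
    act = ET.SubElement(char, "act")
    speak = ET.SubElement(act, "speak")
    speak.text = utterance
    ET.SubElement(act, "gesture", type=gesture)
    ET.SubElement(act, "expression", type=expression)
    # Serialise to a string that an animation engine could consume.
    return ET.tostring(cml, encoding="unicode")

script = build_script("guide", "Welcome back!", "wave", "smile")
print(script)
```

The point of the abstract's design goal shows up here: the same document could be written by a human animator in a text editor or emitted by a software agent at runtime, since both sides only need to agree on the markup schema.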
Pages: 454-459
Page count: 6
Related Papers
50 items in total
  • [31] Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-Modal Fake News Detection
    Chen, Jinyin
    Jia, Chengyu
    Zheng, Haibin
    Chen, Ruoxi
    Fu, Chenbo
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2023, 10 (06): : 3144 - 3158
  • [32] Multi-Modal Pedestrian Detection with Large Misalignment Based on Modal-Wise Regression and Multi-Modal IoU
    Wanchaitanawong, Napat
    Tanaka, Masayuki
    Shibata, Takashi
    Okutomi, Masatoshi
    PROCEEDINGS OF 17TH INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS (MVA 2021), 2021,
  • [33] Multi-modal traffic in TRANSIMS
    Nagel, K
    PEDESTRIAN AND EVACUATION DYNAMICS, 2002, : 161 - 172
  • [34] Multi-modal Video Summarization
    Huang, Jia-Hong
ICMR 2024 - PROCEEDINGS OF THE 2024 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, 2024, : 1214 - 1218
  • [35] Intelligent multi-modal systems
    Tsui, KC
    Azvine, B
    Djian, D
    Voudouris, C
    Xu, LQ
    BT TECHNOLOGY JOURNAL, 1998, 16 (03): : 134 - 144
  • [36] Interactive multi-modal suturing
    Payandeh, Shahram
    Shi, Fuhan
    VIRTUAL REALITY, 2010, 14 (04) : 241 - 253
  • [37] Multi-modal of object trajectories
    Partsinevelos, P.
    JOURNAL OF SPATIAL SCIENCE, 2008, 53 (01) : 17 - 30
  • [38] A MULTI-MODAL VIEW OF MEMORY
    HERRMANN, DJ
    SEARLEMAN, A
    BULLETIN OF THE PSYCHONOMIC SOCIETY, 1988, 26 (06) : 503 - 503
  • [39] Multi-modal Extreme Classification
    Mittal, Anshul
    Dahiya, Kunal
    Malani, Shreya
    Ramaswamy, Janani
    Kuruvilla, Seba
    Ajmera, Jitendra
    Chang, Keng-Hao
    Agarwal, Sumeet
    Kar, Purushottam
    Varma, Manik
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 12383 - 12392
  • [40] Intelligent multi-modal systems
    Hong Kong Polytechnic University, Hong Kong
    BT TECHNOLOGY JOURNAL, 1998, 16 (03): : 134 - 144