Multi-modal embodied agents scripting

Cited by: 0
Authors
Arafa, Y [1 ]
Mamdani, A [1 ]
Affiliations
[1] Univ London Imperial Coll Sci Technol & Med, Dept EEE, IIS, London SW7 2BT, England
Source
FOURTH IEEE INTERNATIONAL CONFERENCE ON MULTIMODAL INTERFACES, PROCEEDINGS | 2002
Keywords
embodied agent; lifelike characters; MPEG-4; mark-up languages; automated animation scripting; CML; animated expression;
DOI
10.1109/ICMI.2002.1167038
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Embodied agents present an ongoing, challenging agenda for research in multi-modal user interfaces and human-computer interaction. Such agent metaphors will only be widely applicable to online applications when there is a standardised way to map underlying engines to the visual presentation of the agents. This paper delineates the functions and specifications of a mark-up language for scripting the animation of virtual characters. The language, Character Mark-up Language (CML), is an XML-based character attribute definition and animation scripting language designed to aid the rapid incorporation of lifelike characters/agents into online applications or virtual-reality worlds. This multi-modal scripting language is designed to be easily understandable by human animators and easily generated by a software process such as a software agent. CML is constructed jointly on the motion and multi-modal capabilities of virtual lifelike figures. The paper further illustrates the constructs of the language and describes a real-time execution architecture that demonstrates the use of such a language as a 4G language to easily utilise and integrate MPEG-4 media objects in online interfaces and virtual environments.
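The abstract's claim that CML scripts can be "easily generated by a software process" can be sketched as follows. This is a minimal illustration of programmatically emitting an XML animation script; the tag and attribute names (`cml`, `character`, `act`, `gesture`, `speak`) are hypothetical placeholders for illustration only, not the paper's actual CML schema.

```python
# Sketch: a software agent assembling an XML-based, CML-like animation
# script. Element and attribute names are assumed, not taken from the paper.
import xml.etree.ElementTree as ET

def build_script(character, gesture, utterance):
    """Assemble a minimal XML animation script for one character."""
    root = ET.Element("cml")                       # hypothetical root element
    char = ET.SubElement(root, "character", name=character)
    act = ET.SubElement(char, "act")               # one behavioural unit
    ET.SubElement(act, "gesture", type=gesture)    # non-verbal modality
    speech = ET.SubElement(act, "speak")           # verbal modality
    speech.text = utterance
    return ET.tostring(root, encoding="unicode")

script = build_script("avatar1", "wave", "Hello!")
print(script)
```

Because the script is plain XML, the same document could equally be authored by hand by a human animator, which is the dual readability/generability goal the abstract describes.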
Pages: 454 - 459
Number of pages: 6
Related Papers
50 records in total
  • [41] Unsupervised Multi-modal Learning
    Iqbal, Mohammed Shameer
    ADVANCES IN ARTIFICIAL INTELLIGENCE (AI 2015), 2015, 9091 : 343 - 346
  • [42] Multi-modal Video Summarization
    Huang, Jia-Hong
    PROCEEDINGS OF THE 4TH ANNUAL ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2024, 2024, : 1214 - 1218
  • [43] Interactive multi-modal suturing
    Shahram Payandeh
    Fuhan Shi
    Virtual Reality, 2010, 14 : 241 - 253
  • [44] Multi-modal network Protocols
    Balan, RK
    Akella, A
    Seshan, S
    ACM SIGCOMM COMPUTER COMMUNICATION REVIEW, 2002, 32 (01) : 60 - 60
  • [45] Multi-modal nanomedicine for glioblastoma
    Ofek, Paula
    Calderon, Marcelo
    Sheikhi-Mehrabadi, Fatemeh
    Ferber, Shiran
    Haag, Rainer
    Satchi-Fainaro, Ronit
    CANCER RESEARCH, 2014, 74 (19)
  • [46] A Survey on Multi-modal Summarization
    Jangra, Anubhav
    Mukherjee, Sourajit
    Jatowt, Adam
    Saha, Sriparna
    Hasanuzzaman, Mohammad
    ACM COMPUTING SURVEYS, 2023, 55 (13S)
  • [47] Multi-Modal Interaction Device
    Kim, Yul Hee
    Byeon, Sang-Kyu
    Kim, Yu-Joon
    Choi, Dong-Soo
    Kim, Sang-Youn
    INTERNATIONAL CONFERENCE ON MECHANICAL DESIGN, MANUFACTURE AND AUTOMATION ENGINEERING (MDMAE 2014), 2014, : 327 - 330
  • [48] Developments in multi-modal ticketing
    Clarke, W.R.
    Public Transport International, 1993, 42 (04):
  • [49] A MULTI-MODAL QUESTIONNAIRE FOR STRESS
    LEFEBVRE, RC
    SANDFORD, SL
    JOURNAL OF HUMAN STRESS, 1985, 11 (02): : 69 - 75
  • [50] Multi-modal spatial querying
    Egenhofer, MJ
    ADVANCES IN GIS RESEARCH II, 1997, : 785 - 799