Integration of Multi-modal Cues in Synthetic Attention Processes to Drive Virtual Agent Behavior

Cited: 0
Authors
Seele, Sven [1]
Haubrich, Tobias [1]
Metzler, Tim [1]
Schild, Jonas [1,2]
Herpers, Rainer [1,3,4]
Grzegorzek, Marcin [5]
Affiliations
[1] Bonn Rhein Sieg Univ Appl Sci, Inst Visual Comp, Grantham Allee 20, D-53757 St Augustin, Germany
[2] Univ Appl Sci & Arts, Hsch Hannover, Hannover, Germany
[3] Univ New Brunswick, Fredericton, NB, Canada
[4] York Univ, Toronto, ON, Canada
[5] Univ Siegen, Res Grp Pattern Recognit, Siegen, Germany
Source
INTELLIGENT VIRTUAL AGENTS, IVA 2017 | 2017 / Vol. 10498
Keywords
intelligent virtual agents; synthetic perception; virtual attention; VISION; MEMORY;
DOI
10.1007/978-3-319-67401-8_50
CLC Classification
TP18 [Theory of artificial intelligence]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Simulations and serious games require realistic behavior of multiple intelligent agents in real time. One particular issue is how attention and multi-modal sensory memory can be modeled in a natural yet efficient way, such that agents controllably react to salient objects or are distracted from their current intention by other multi-modal cues. We propose a conceptual framework that addresses three main design goals: natural behavior, real-time performance, and controllability. As a proof of concept, we implement three major components and showcase their effectiveness in a real-time game engine scenario. In this scenario, a visual sensor is combined with static saliency probes and auditory cues. The attention model weighs bottom-up attention against intention-related top-down processing and is controllable by a designer through memory and attention inhibitor parameters. We demonstrate our approach and discuss future extensions.
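The abstract's core mechanism — weighing bottom-up salience against intention-driven top-down relevance, with designer-tunable memory and inhibitor parameters — can be illustrated with a minimal sketch. All names, scores, and parameter values below are hypothetical assumptions for illustration; they are not taken from the paper's actual framework or implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Stimulus:
    name: str
    saliency: float        # bottom-up salience in [0, 1] (e.g. a loud sound)
    goal_relevance: float  # top-down relevance to the agent's current intention, in [0, 1]

@dataclass
class AttentionModel:
    top_down_weight: float = 0.6     # designer parameter: balance of intention vs. salience
    inhibitor: float = 0.5           # designer parameter: attenuation of already-attended stimuli
    memory: set = field(default_factory=set)  # names of stimuli attended so far

    def score(self, s: Stimulus) -> float:
        """Blend bottom-up and top-down terms; inhibit stimuli held in memory."""
        w = self.top_down_weight
        value = (1.0 - w) * s.saliency + w * s.goal_relevance
        if s.name in self.memory:
            value *= self.inhibitor   # inhibition of return: attended cues fade
        return value

    def attend(self, stimuli):
        """Select the highest-scoring stimulus and remember it."""
        winner = max(stimuli, key=self.score)
        self.memory.add(winner.name)
        return winner

if __name__ == "__main__":
    model = AttentionModel()
    siren = Stimulus("siren", saliency=1.0, goal_relevance=0.0)
    waypoint = Stimulus("waypoint", saliency=0.1, goal_relevance=0.5)
    # A highly salient distractor wins first; once inhibited, the
    # agent's attention returns to the intention-relevant object.
    print(model.attend([siren, waypoint]).name)   # distractor captures attention
    print(model.attend([siren, waypoint]).name)   # focus returns to the goal
```

With the assumed values, the salient siren initially outcompetes the goal-relevant waypoint (0.40 vs. 0.34); after the inhibitor halves the siren's score (0.20), the agent returns to its intention. Raising `top_down_weight` makes the agent harder to distract, matching the controllability goal described in the abstract.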
Pages: 403-412
Page count: 10
Related Papers
50 records in total
  • [1] Multi-modal Information Integration for Interactive Multi-agent Systems
    Yamaguchi, T.; Sato, M.; Takagi, T.
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 1998, 23 (2-4): 183-199
  • [2] Integration of Multi-Modal Data for Monitoring of Eating Behavior
    Ghosh, T.
    ProQuest Dissertations and Theses Global, 2022
  • [3] Virtual Reality Testing of Multi-Modal Integration in Schizophrenic Patients
    Sorkin, A.; Peled, A.; Weinshall, D.
    MEDICINE MEETS VIRTUAL REALITY 13: THE MAGICAL NEXT BECOMES THE MEDICAL NOW, 2005, 111: 508-514
  • [4] User Behavior Fusion in Dialog Management with Multi-modal History Cues
    Yang, M.; Tao, J.; Chao, L.; Li, H.; Zhang, D.; Che, H.; Gao, T.; Liu, B.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2015, 74 (22): 10025-10051
  • [5] Measuring Multi-modal Integration in Schizophrenic Patients with Virtual Reality Technology
    Sorkin, A.; Peled, A.; Weinshall, D.
    SCHIZOPHRENIA BULLETIN, 2005, 31 (02): 377
  • [6] Multi-modal Orientation Cues in Homing Pigeons
    Walcott, C.
    INTEGRATIVE AND COMPARATIVE BIOLOGY, 2005, 45 (03): 574-581
  • [7] Interpretable Multi-modal Data Integration
    Osorio, D.
    NATURE COMPUTATIONAL SCIENCE, 2022, 2 (01): 8-9
  • [8] Implementation of a Virtual Assistant System Based on Deep Multi-modal Data Integration
    Baek, S.; Kim, J.; Lee, J.; Lee, M.
    JOURNAL OF SIGNAL PROCESSING SYSTEMS FOR SIGNAL IMAGE AND VIDEO TECHNOLOGY, 2024, 96 (03): 179-189