Multimodal saliency-based bottom-up attention: a framework for the humanoid robot iCub

Cited by: 66
Authors
Ruesch, Jonas [1 ]
Lopes, Manuel [2 ]
Bernardino, Alexandre [2 ]
Hoernstein, Jonas [2 ]
Santos-Victor, Jose [2 ]
Pfeifer, Rolf [1 ]
Affiliations
[1] Univ Zurich, Dept Informat, Artificial Intelligence Lab, CH-8006 Zurich, Switzerland
[2] Inst Super Tecn, Inst Syst & Robot, Lisbon, Portugal
Keywords
DOI
10.1109/ROBOT.2008.4543329
CLC classification
TP [Automation and computer technology];
Subject classification code
0812 ;
Abstract
This work presents a multimodal bottom-up attention system for the humanoid robot iCub, in which the robot's decisions to move its eyes and neck are driven by visual and acoustic saliency maps. We introduce a modular and distributed software architecture capable of fusing visual and acoustic saliency maps into one egocentric frame of reference. This system endows the iCub with an emergent exploratory behavior that reacts to combined visual and auditory saliency. The developed software modules provide a flexible foundation for the open iCub platform and for further experiments and developments, including higher levels of attention and representation of the peripersonal space.
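The core mechanism the abstract describes, fusing per-modality saliency maps in a shared egocentric frame and attending to the most salient direction, can be illustrated with a minimal sketch. This is not the authors' implementation; the weighted-sum fusion, the azimuth/elevation grid, and all function names are assumptions for illustration only.

```python
import numpy as np

def fuse_saliency_maps(visual, acoustic, w_visual=0.5, w_acoustic=0.5):
    """Fuse two saliency maps by a weighted sum (illustrative assumption).

    Both maps are assumed to be already registered to the same egocentric
    (elevation x azimuth) grid. Each map is min-max normalized to [0, 1]
    before fusion so neither modality dominates purely by scale.
    """
    def normalize(m):
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return w_visual * normalize(visual) + w_acoustic * normalize(acoustic)

def most_salient_direction(fused, az_range=(-180.0, 180.0),
                           el_range=(-90.0, 90.0)):
    """Return the (azimuth, elevation) in degrees of the fused map's peak,
    i.e. the direction a gaze controller would be asked to fixate."""
    el_idx, az_idx = np.unravel_index(np.argmax(fused), fused.shape)
    n_el, n_az = fused.shape
    az = az_range[0] + (az_idx + 0.5) * (az_range[1] - az_range[0]) / n_az
    el = el_range[0] + (el_idx + 0.5) * (el_range[1] - el_range[0]) / n_el
    return az, el

# Hypothetical usage: a visual peak to the right, no acoustic activity.
visual = np.zeros((18, 36))
visual[9, 27] = 1.0            # one salient cell on an 18x36 grid
acoustic = np.zeros((18, 36))
fused = fuse_saliency_maps(visual, acoustic)
target = most_salient_direction(fused)
```

In the actual system, each modality's module would publish its map over the robot's middleware and a fusion module would run this combination step continuously, feeding the peak direction to the eye/neck controller.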
Pages: 962 / +
Page count: 2
Related papers
50 records in total
  • [41] Bottom-up saliency model generation using superpixels
    Polatsek, Patrik
    Benesova, Wanda
    PROCEEDINGS SCCG: 2015 31ST SPRING CONFERENCE ON COMPUTER GRAPHICS, 2015, : 120 - 128
  • [42] Saliency-based Sequential Image Attention with Multiset Prediction
    Welleck, Sean
    Mao, Jialin
    Cho, Kyunghyun
    Zhang, Zheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [43] Nonlinear Data Fusion in Saliency-Based Visual Attention
    Bahmani, Hamed
    Nasrabadi, Ali Motie
    Gholpayeghani, Mohammad Reza Hashemi
    2008 4TH INTERNATIONAL IEEE CONFERENCE INTELLIGENT SYSTEMS, VOLS 1 AND 2, 2008, : 152 - +
  • [44] Bottom-up Model of Visual Saliency: A Viewpoint based on Efficient Coding Hypothesis
    Zhu, Hao
    Han, Biao
    PROCEEDINGS OF THE 2014 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2014, : 2136 - 2141
  • [45] Feature-based attention: it is all bottom-up priming
    Theeuwes, Jan
    PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY B-BIOLOGICAL SCIENCES, 2013, 368 (1628)
  • [46] Implementation of Visual Attention System Using Artificial Retina Chip and Bottom-Up Saliency Map Model
    Kim, Bumhwi
    Okuno, Hirotsugu
    Yagi, Tetsuya
    Lee, Minho
    NEURAL INFORMATION PROCESSING, PT III, 2011, 7064 : 416 - +
  • [47] A Top-down and Bottom-up Visual Attention Model for Humanoid Object Approaching and Obstacle Avoidance
    Chame, Hendry Ferreira
    Chevallereau, Christine
    PROCEEDINGS OF 13TH LATIN AMERICAN ROBOTICS SYMPOSIUM AND 4TH BRAZILIAN SYMPOSIUM ON ROBOTICS - LARS/SBR 2016, 2016, : 25 - 30
  • [48] A generalised framework for saliency-based point feature detection
    Brown, Mark
    Windridge, David
    Guillemaut, Jean-Yves
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2017, 157 : 117 - 137
  • [49] Developing a Robot's Empathetic Reactive Response Inspired by a Bottom-Up Attention Model
    Gomez, Randy
    Fang, Yu
    Thill, Serge
    Ragel, Ricardo
    Brock, Heike
    Nakamura, Keisuke
    Vasylkiv, Yurii
    Nichols, Eric
    Merino, Luis
    SOCIAL ROBOTICS, ICSR 2021, 2021, 13086 : 85 - 95
  • [50] A pipeline for estimating human attention toward objects with on-board cameras on the iCub humanoid robot
    Hanifi, Shiva
    Maiettini, Elisa
    Lombardi, Maria
    Natale, Lorenzo
    FRONTIERS IN ROBOTICS AND AI, 2024, 11