A generic neural network for multi-modal sensorimotor learning

Cited by: 0
Authors
Carenzi, F [1 ]
Bendahan, P [1 ]
Roschin, VY [1 ]
Frolov, AA [1 ]
Gorce, P [1 ]
Maier, MA [1 ]
Affiliations
[1] Univ Paris 06, INSERM, U483, F-75005 Paris, France
Keywords
Hebbian learning; multi-network architecture; multi-modal sensory information; sensorimotor integration
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
A generic neural network module has been developed, which learns to combine multi-modal sensory information to produce adequate motor commands. The module learns in the first step to combine multi-modal sensory information, based on which it subsequently learns to control a kinematic arm. The module can learn to combine two sensory inputs whatever their modality. We report the architecture and learning strategy of the module and characterize its performance by simulations in two situations of reaching by a linear arm with multiple degrees of freedom: (1) mapping of tactile and arm-related proprioceptive information, and (2) mapping of gaze and arm-related proprioceptive information. (C) 2004 Elsevier B.V. All rights reserved.
Pages: 525-533
Page count: 9
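
The abstract above describes a two-step scheme: a Hebbian stage that learns to associate two sensory modalities, followed by a stage that maps the combined representation onto motor commands for a kinematic arm. Since the paper itself is not reproduced in this record, the sketch below only illustrates the cross-modal association step with a generic running-average Hebbian rule applied to simulated paired population activity; the population sizes, the learning-rule variant, and all identifiers (e.g. M_true, n_a, n_b) are illustrative assumptions and do not reproduce the authors' architecture or the motor-command stage.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's architecture): a generic
# Hebbian rule that learns the association between two sensory modalities,
# e.g. gaze-related and arm-related proprioceptive population activity.

rng = np.random.default_rng(0)

n_a, n_b = 20, 30                         # population sizes (arbitrary choice)
M_true = rng.standard_normal((n_b, n_a))  # hidden cross-modal relation, used
                                          # here only to generate paired data
W = np.zeros((n_b, n_a))                  # learned cross-modal weights A -> B

for t in range(50_000):
    x_a = rng.standard_normal(n_a)        # modality A activity (white, unit variance)
    x_b = M_true @ x_a                    # paired modality B activity
    # Running-average Hebbian update: W converges to E[x_b x_a^T], which
    # equals M_true because the components of x_a are uncorrelated.
    W += (np.outer(x_b, x_a) - W) / (t + 1)

# After learning, modality B activity can be predicted from modality A alone,
# i.e. the two sensory streams have been combined into a single mapping.
x_a_test = rng.standard_normal(n_a)
x_b_true = M_true @ x_a_test
x_b_pred = W @ x_a_test
rel_err = np.linalg.norm(x_b_pred - x_b_true) / np.linalg.norm(x_b_true)
print(f"relative cross-modal prediction error: {rel_err:.3f}")
```

In the settings reported in the abstract, an analogous associative stage would link, for instance, tactile and arm-related proprioceptive signals, and a subsequent learning stage (not sketched here) would turn the combined representation into motor commands for the arm.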