A generic neural network for multi-modal sensorimotor learning

Cited by: 0
Authors
Carenzi, F [1 ]
Bendahan, P [1 ]
Roschin, VY [1 ]
Frolov, AA [1 ]
Gorce, P [1 ]
Maier, MA [1 ]
Institution
[1] Univ Paris 06, INSERM, U483, F-75005 Paris, France
Keywords
Hebbian learning; multi-network architecture; multi-modal sensory information; sensorimotor integration;
DOI
Not available
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A generic neural network module has been developed, which learns to combine multi-modal sensory information to produce adequate motor commands. The module learns in the first step to combine multi-modal sensory information, based on which it subsequently learns to control a kinematic arm. The module can learn to combine two sensory inputs whatever their modality. We report the architecture and learning strategy of the module and characterize its performance by simulations in two situations of reaching by a linear arm with multiple degrees of freedom: (1) mapping of tactile and arm-related proprioceptive information, and (2) mapping of gaze and arm-related proprioceptive information. (C) 2004 Elsevier B.V. All rights reserved.
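The abstract only describes the method in words. As a rough illustration of the kind of Hebbian association it mentions, the following is a minimal toy sketch in which a weight matrix learns to link co-occurring activity in two sensory modalities (e.g. tactile and proprioceptive population codes) via a plain outer-product Hebb rule. All names, sizes, and the rule itself are illustrative assumptions, not the paper's actual multi-network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_a = n_b = 50            # sizes of the two sensory populations (assumed)
eta = 0.1                 # learning rate
W = np.zeros((n_b, n_a))  # association weights, modality A -> modality B

# training set: pairs of activity patterns that occur together
pairs = [(rng.standard_normal(n_a), rng.standard_normal(n_b))
         for _ in range(5)]

# Hebbian update: strengthen each weight in proportion to the product of
# pre-synaptic (modality A) and post-synaptic (modality B) activity
for a, b in pairs:
    W += eta * np.outer(b, a)

# after learning, an input in modality A recalls its associated pattern
# in modality B, up to interference from the other stored pairs
a0, b0 = pairs[0]
recalled = W @ a0
cosine = recalled @ b0 / (np.linalg.norm(recalled) * np.linalg.norm(b0))
```

With few stored pairs relative to the population size, the recalled vector points close to the stored partner pattern (cosine similarity near 1); the paper's module builds on this kind of cross-modal mapping before learning arm control.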
Pages: 525-533
Page count: 9
Related papers
50 items
  • [1] A generic neural network for multi-modal sensorimotor learning
    Carenzi, F
    Bendahan, P
    Roschin, VY
    Frolov, AA
    Gorce, P
    Maier, MA
    NEUROCOMPUTING, 2004, 58 : 525 - 533
  • [2] Multi-modal Network Representation Learning
    Zhang, Chuxu
    Jiang, Meng
    Zhang, Xiangliang
    Ye, Yanfang
    Chawla, Nitesh V.
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 3557 - 3558
  • [3] A novel multi-modal neural network approach for dynamic and generic sports video summarization
    Narwal, Pulkit
    Duhan, Neelam
    Bhatia, Komal Kumar
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 126
  • [4] Mineral: Multi-modal Network Representation Learning
    Kefato, Zekarias T.
    Sheikh, Nasrullah
    Montresor, Alberto
    MACHINE LEARNING, OPTIMIZATION, AND BIG DATA, MOD 2017, 2018, 10710 : 286 - 298
  • [5] Interactive Learning of a Dual Convolution Neural Network for Multi-Modal Action Recognition
    Li, Qingxia
    Gao, Dali
    Zhang, Qieshi
    Wei, Wenhong
    Ren, Ziliang
    MATHEMATICS, 2022, 10 (21)
  • [6] Multi-modal Neural Network for Traffic Event Detection
    Chen, Qi
    Wang, Wei
    2019 IEEE 2ND INTERNATIONAL CONFERENCE ON ELECTRONICS AND COMMUNICATION ENGINEERING (ICECE 2019), 2019, : 26 - 30
  • [7] CLMTR: a generic framework for contrastive multi-modal trajectory representation learning
    Liang, Anqi
    Yao, Bin
    Xie, Jiong
    Zheng, Wenli
    Shen, Yanyan
    Ge, Qiqi
    GEOINFORMATICA, 2024, : 233 - 253
  • [8] Multi-modal anchor adaptation learning for multi-modal summarization
    Chen, Zhongfeng
    Lu, Zhenyu
    Rong, Huan
    Zhao, Chuanjun
    Xu, Fan
    NEUROCOMPUTING, 2024, 570
  • [9] Channel Estimation Algorithm Based on Multi-modal Neural Network
    Xue, Wenli
    Zhu, Hongwei
    Nian, Zhongyuan
    Wu, Xueyang
    Cui, Mingshi
    Mu, Chunfang
    Yang, Weiming
    Chen, Zhigang
    2024 9TH INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING, ICSIP, 2024, : 206 - 210
  • [10] NEURAL NETWORK EVALUATION OF MULTI-MODAL STARTLE EYEBLINK MEASUREMENTS
    Lovelace, Christopher T.
    Derakhshani, Reza
    Burgoon, Judee K.
    PSYCHOPHYSIOLOGY, 2009, 46 : S68 - S68