Learning to Act through Evolution of Neural Diversity in Random Neural Networks

Cited by: 2
Authors
Pedersen, Joachim Winther [1]
Risi, Sebastian [1]
Affiliations
[1] IT Univ Copenhagen, Copenhagen, Denmark
Keywords
DYNAMICS; PLASTICITY
DOI
10.1145/3583131.3590460
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Biological nervous systems consist of networks of diverse, sophisticated information processors in the form of neurons of different classes. In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared between all neurons within a layer or even the whole network, and training of ANNs focuses on synaptic optimization. In this paper, we propose optimizing neuro-centric parameters to attain a set of diverse neurons that can perform complex computations. Demonstrating the promise of the approach, we show that evolving neural parameters alone allows agents to solve various reinforcement learning tasks without optimizing any synaptic weights. While not aiming to be an accurate biological model, parameterizing neurons to a larger degree than is currently common practice allows us to ask questions about the computational abilities afforded by neural diversity in random neural networks. The presented results open up interesting future research directions, such as combining evolved neural diversity with activity-dependent plasticity.
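The core idea in the abstract — freezing random synaptic weights and evolving only per-neuron parameters — can be sketched as follows. This is a toy illustration, not the paper's actual implementation: the gain/slope/bias activation parameterization, the XOR-style task, the network size, and the simple elitist (1+λ) evolution strategy are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random synaptic weights: these are never trained.
W1 = rng.normal(0.0, 1.0, (8, 2))   # input -> hidden
W2 = rng.normal(0.0, 1.0, (1, 8))   # hidden -> output

def forward(x, theta):
    """Per-neuron parameterized activation: gain * tanh(slope * pre + bias).

    theta holds 3 evolved parameters (gain, slope, bias) for each of the
    8 hidden neurons; the synaptic weights W1, W2 stay random and frozen.
    """
    gain, slope, bias = theta.reshape(3, -1)
    h = gain * np.tanh(slope * (W1 @ x) + bias)
    return (W2 @ h)[0]

def fitness(theta):
    # Toy XOR-style regression task standing in for an RL objective:
    # only the neuron parameters can adapt to fit the targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)
    preds = np.array([forward(x, theta) for x in X])
    return -np.mean((preds - y) ** 2)

# Elitist (1+lambda) evolution strategy over the neuron parameters only.
theta = np.ones(3 * 8)
for gen in range(300):
    candidates = theta + 0.1 * rng.normal(size=(16, theta.size))
    scores = [fitness(c) for c in candidates]
    best = candidates[int(np.argmax(scores))]
    if fitness(best) > fitness(theta):
        theta = best  # keep the parent unless a child improves on it
```

Because the strategy is elitist, fitness is non-decreasing across generations; the network's weights never change, so any improvement is attributable to the evolved neural diversity alone.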
Pages: 1248 - 1256
Page count: 9
Related Papers (10 of 50 shown)
  • [1] LEARNING AND EVOLUTION IN NEURAL NETWORKS
    NOLFI, S
    PARISI, D
    ELMAN, JL
    ADAPTIVE BEHAVIOR, 1994, 3 (01) : 5 - 28
  • [2] Learning by optimization in random neural networks
    Atalay, V
    ADVANCES IN COMPUTER AND INFORMATION SCIENCES '98, 1998, 53 : 143 - 148
  • [3] Deep Learning with Random Neural Networks
    Gelenbe, Erol
    Yin, Yongha
    PROCEEDINGS OF SAI INTELLIGENT SYSTEMS CONFERENCE (INTELLISYS) 2016, VOL 2, 2018, 16 : 450 - 462
  • [4] Deep Learning with Random Neural Networks
    Gelenbe, Erol
    Yin, Yongha
    2016 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2016, : 1633 - 1638
  • [5] The evolution of learning: beyond neural networks
    Goodin, Alma Dzib
    REVISTA CHILENA DE NEUROPSICOLOGIA, 2013, 8 (01) : 20 - 25
  • [6] Deep Learning with Dense Random Neural Networks
    Gelenbe, Erol
    Yin, Yonghua
    MAN-MACHINE INTERACTIONS 5, ICMMI 2017, 2018, 659 : 3 - 18
  • [7] Pseudo Random Number Generation through Reinforcement Learning and Recurrent Neural Networks
    Pasqualini, Luca
    Parton, Maurizio
    ALGORITHMS, 2020, 13 (11)
  • [8] Sparse Neural Networks with Large Learning Diversity
    Gripon, Vincent
    Berrou, Claude
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2011, 22 (07) : 1087 - 1096
  • [9] Forming neural networks design through evolution
    Volna, Eva
    ARTIFICIAL NEURAL NETWORKS AND INTELLIGENT INFORMATION PROCESSING, PROCEEDINGS, 2007, : 13 - 20
  • [10] Evolution, development and learning with predictor neural networks
    Lakhman, Konstantin
    Burtsev, Mikhail
    ALIFE 2014: THE FOURTEENTH INTERNATIONAL CONFERENCE ON THE SYNTHESIS AND SIMULATION OF LIVING SYSTEMS, 2014, : 457 - 464