A more powerful Random Neural Network model in supervised learning applications

Cited: 0
Authors
Basterrech, Sebastian [1 ]
Rubino, Gerardo [2 ]
Affiliations
[1] VSB Tech Univ Ostrava, IT4Innovat, Ostrava, Czech Republic
[2] INRIA, Rennes, France
Keywords
Random Neural Network; Supervised Learning; Pattern Recognition; Numerical Optimization; Gradient Descent;
DOI: none
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Since the early 1990s, Random Neural Networks (RNNs) have gained importance in the Neural Networks and Queueing Networks communities. RNNs are inspired by biological neural networks and are also an extension of open Jackson networks in Queueing Theory. In 1993, a gradient-type learning algorithm was introduced to apply RNNs to supervised learning tasks. This method treats only the weight connections among the neurons as adjustable parameters; all other parameters are held fixed during training. The RNN model has been successfully applied in several areas, such as supervised learning, pattern recognition, optimization, image processing, and associative memory. In this contribution we present a modification of the classic model obtained by extending the set of adjustable parameters. The modification increases the potential of the RNN model in supervised learning tasks while keeping the same network topology and the same time complexity of the algorithm. We describe the new equations implementing a gradient descent learning technique for the model.
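For context, in the classic RNN model the steady-state activity of each neuron i is the fixed point of q_i = λ⁺_i / (r_i + λ⁻_i), where λ⁺_i and λ⁻_i accumulate exogenous and internally routed excitatory and inhibitory traffic. The following is a minimal sketch of that fixed-point computation (function and variable names are illustrative, not taken from the paper; the paper's contribution of extra adjustable parameters is not reproduced here):

```python
import numpy as np

def rnn_steady_state(Lambda, lam, W_plus, W_minus, r, tol=1e-9, max_iter=1000):
    """Fixed-point iteration for the steady-state activities q of a
    Gelenbe-style Random Neural Network (illustrative sketch).

    Lambda  : exogenous excitatory arrival rates, shape (n,)
    lam     : exogenous inhibitory arrival rates, shape (n,)
    W_plus  : excitatory weights w+[j, i] from neuron j to neuron i, shape (n, n)
    W_minus : inhibitory weights w-[j, i] from neuron j to neuron i, shape (n, n)
    r       : neuron firing rates, shape (n,)
    """
    q = np.zeros(len(Lambda))
    for _ in range(max_iter):
        # Total excitatory / inhibitory arrival rate at each neuron:
        # lambda+_i = Lambda_i + sum_j q_j * w+[j, i], and analogously for lambda-
        t_plus = Lambda + q @ W_plus
        t_minus = lam + q @ W_minus
        # q_i is a probability, so clip the ratio at 1
        q_new = np.minimum(t_plus / (r + t_minus), 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q
```

As a small illustration, a two-neuron network with Lambda = [1.0, 0.5], no inhibitory traffic, r = [2.0, 2.0], and a single excitatory weight w+[0, 1] = 0.5 converges to q ≈ [0.5, 0.375], which satisfies q_1 = (0.5 + 0.5·q_0) / 2.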
Pages: 201-206 (6 pages)