A more powerful Random Neural Network model in supervised learning applications

Cited: 0
Authors
Basterrech, Sebastian [1]
Rubino, Gerardo [2]
Affiliations
[1] VSB Tech Univ Ostrava, IT4Innovat, Ostrava, Czech Republic
[2] INRIA, Rennes, France
Keywords
Random Neural Network; Supervised Learning; Pattern Recognition; Numerical Optimization; Gradient Descent
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Since the early 1990s, Random Neural Networks (RNNs) have gained importance in the Neural Networks and Queueing Networks communities. RNNs are inspired by biological neural networks, and they can also be viewed as an extension of open Jackson networks in Queueing Theory. In 1993, a gradient-based learning algorithm was introduced to make RNNs usable in supervised learning tasks. That method treats only the weight connections among the neurons as adjustable parameters; all other parameters are held fixed during training. The RNN model has been applied successfully in several areas, such as supervised learning, pattern recognition, optimization, image processing, and associative memory. In this contribution we present a modification of the classic model obtained by extending the set of adjustable parameters. The modification increases the potential of the RNN model in supervised learning tasks while keeping the same network topology and the same time complexity of the learning algorithm. We describe the new equations implementing a gradient descent learning technique for the model.
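For readers unfamiliar with the model, the sketch below (Python, not the authors' code) illustrates the classic Gelenbe RNN steady-state computation and a gradient-descent step that also updates the neurons' firing rates, mirroring the abstract's idea of an extended set of adjustable parameters. The choice of the firing rates as the extra parameters, the finite-difference gradient (the paper derives exact gradient equations), and all numeric values are illustrative assumptions.

```python
import numpy as np

def rnn_fixed_point(w_plus, w_minus, Lambda, lam, r, n_iter=200):
    """Solve the RNN steady-state equations q_i = lambda+_i / (r_i + lambda-_i)
    (Gelenbe's Random Neural Network) by simple fixed-point iteration."""
    q = np.zeros(len(r))
    for _ in range(n_iter):
        lam_plus = Lambda + q @ w_plus   # total arrival rate of excitatory signals
        lam_minus = lam + q @ w_minus    # total arrival rate of inhibitory signals
        q = np.clip(lam_plus / (r + lam_minus), 0.0, 0.999)  # keep each q_i < 1
    return q

def loss(params, Lambda, lam, target, out_idx):
    """Quadratic error on the output neurons' steady-state probabilities."""
    w_plus, w_minus, r = params
    q = rnn_fixed_point(w_plus, w_minus, Lambda, lam, r)
    return 0.5 * np.sum((q[out_idx] - target) ** 2)

def gradient_step(params, Lambda, lam, target, out_idx, eta=0.05, eps=1e-5):
    """One gradient-descent step over all parameter groups: w_plus, w_minus
    and, as in the extended model, the firing rates r. A central finite
    difference stands in here for the paper's closed-form gradient equations."""
    updated = []
    for p in params:
        g = np.zeros_like(p)
        for i in np.ndindex(p.shape):
            old = p[i]
            p[i] = old + eps
            up = loss(params, Lambda, lam, target, out_idx)
            p[i] = old - eps
            down = loss(params, Lambda, lam, target, out_idx)
            p[i] = old
            g[i] = (up - down) / (2.0 * eps)
        updated.append(np.maximum(p - eta * g, 1e-6))  # rates and weights stay positive
    return tuple(updated)

# Tiny demo (all numbers illustrative): 3 neurons, neuron 2 is the output,
# and we drive its steady-state probability q_2 towards 0.4.
rng = np.random.default_rng(0)
n = 3
w_plus = rng.uniform(0.0, 0.3, (n, n)); np.fill_diagonal(w_plus, 0.0)
w_minus = rng.uniform(0.0, 0.3, (n, n)); np.fill_diagonal(w_minus, 0.0)
Lambda = np.array([0.5, 0.3, 0.0])  # exogenous excitatory arrival rates
lam = np.zeros(n)                   # exogenous inhibitory arrival rates
r = np.ones(n)                      # firing rates: fixed classically, trained here
params = (w_plus, w_minus, r)
for _ in range(200):
    params = gradient_step(params, Lambda, lam, np.array([0.4]), [2])
```

The point of the illustration is only that enlarging the trained parameter set leaves the network topology and the per-step cost structure unchanged; in the paper, exact gradient equations replace the finite differences used above.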
Pages: 201-206
Page count: 6
Related papers
50 records in total
  • [41] Supervised learning of random quantum circuits via scalable neural networks
    Cantori, Simone
    Vitali, David
    Pilati, Sebastiano
    QUANTUM SCIENCE AND TECHNOLOGY, 2023, 8 (02)
  • [42] Distantly Supervised Neural Network Model for Relation Extraction
    Wang, Zhen
    Chang, Baobao
    Sui, Zhifang
    CHINESE COMPUTATIONAL LINGUISTICS AND NATURAL LANGUAGE PROCESSING BASED ON NATURALLY ANNOTATED BIG DATA (CCL 2015), 2015, 9427 : 253 - 266
  • [43] Adaptive Recurrent Neural Network Training Algorithm for Nonlinear Model Identification Using Supervised Learning
    Akpan, Vincent A.
    Hassapis, George D.
    2010 AMERICAN CONTROL CONFERENCE, 2010, : 4937 - 4942
  • [44] Distributed ARTMAP: a neural network for fast distributed supervised learning
Carpenter, G.A.
Milenova, B.L.
Noeske, B.W.
    NEURAL NETWORKS, 1998, 11 (05) : 793 - 813
  • [45] Supervised Learning for Neural Network Using Ant Colony Optimization
    Rathee, Ravinder
    Rani, Seema
    Dagar, Anita
PROCEEDINGS OF THE 2014 INTERNATIONAL CONFERENCE ON RELIABILITY, OPTIMIZATION, & INFORMATION TECHNOLOGY (ICROIT 2014), 2014, : 331 - 334
  • [46] Robust error measure for supervised neural network learning with outliers
Liano, K.
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1996, 7 (01): : 246 - 250
  • [47] Diabetes Prediction with Supervised Learning Algorithms of Artificial Neural Network
    Sapon, Muhammad Akmal
    Ismail, Khadijah
    Zainudin, Suehazlyn
    Ping, Chew Sue
    SOFTWARE AND COMPUTER APPLICATIONS, 2011, 9 : 57 - 61
  • [49] A rapid supervised learning neural network for function interpolation and approximation
Chen, C.L.P.
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1996, 7 (05): : 1220 - 1230
  • [50] Supervised Learning in a Single Layer Dynamic Synapses Neural Network
    Yousefi, Ali
    Dibazar, Alireza A.
    Berger, Theodore W.
    2011 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2011, : 2250 - 2257