RANDOM NEURAL NETWORK MODEL FOR SUPERVISED LEARNING PROBLEMS

Cited by: 4
Authors
Basterrech, S. [1 ]
Rubino, G. [2 ]
Affiliations
[1] VSB Tech Univ Ostrava, Natl Supercomp Ctr, Ostrava, Czech Republic
[2] INRIA Rennes, F-35042 Rennes, France
Keywords
neural networks; random neural networks; supervised learning; pattern recognition; G-networks; queuing networks; packet network; classification; matrix
DOI
10.14311/NNW.2015.25.024
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Random Neural Networks (RNNs) are a class of Neural Networks (NNs) that can also be seen as a specific type of queuing network. They have been successfully used in several domains over the last 25 years: as queuing networks, to analyze the performance of resource sharing in many engineering areas; as learning tools and in combinatorial optimization, where they are seen as neural systems; and as models of neurological aspects of living beings. In this article we focus on their learning capabilities and, more specifically, present a practical guide for using RNNs to solve supervised learning problems. We give a general description of these models, using the terminologies of Queuing Theory and of Neural Networks almost interchangeably. We present the standard learning procedures used by RNNs, adapted from well-established techniques in the standard NN field. In particular, we describe a set of learning algorithms covering techniques based on first-order and then on second-order derivatives. We also discuss some open issues related to these models and present new perspectives on their use in supervised learning problems. The tutorial describes their most relevant applications and provides an extensive bibliography.
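The RNN model the abstract refers to has a well-known product-form steady state: neuron i is excited with probability q_i = λ⁺(i) / (r(i) + λ⁻(i)), where the excitatory and inhibitory arrival rates λ⁺, λ⁻ solve a set of nonlinear traffic equations coupling all neurons. As a minimal sketch (the 3-neuron network, weight matrices, and external rates below are illustrative assumptions, not taken from the paper), the fixed point can be computed by plain iteration:

```python
import numpy as np

# Illustrative 3-neuron RNN. W_plus[j, i] / W_minus[j, i] are the rates of
# excitatory / inhibitory signals sent from neuron j to neuron i.
W_plus = np.array([[0.0, 0.3, 0.2],
                   [0.1, 0.0, 0.4],
                   [0.2, 0.1, 0.0]])
W_minus = np.array([[0.0, 0.1, 0.1],
                    [0.2, 0.0, 0.1],
                    [0.1, 0.2, 0.0]])
Lam = np.array([0.5, 0.2, 0.0])   # external excitatory arrival rates
lam = np.array([0.0, 0.1, 0.1])   # external inhibitory arrival rates

# Firing rate of each neuron: total outgoing signal rate.
r = (W_plus + W_minus).sum(axis=1)

# Fixed-point iteration on the traffic equations:
#   lambda_plus(i)  = Lam(i) + sum_j q(j) * W_plus[j, i]
#   lambda_minus(i) = lam(i) + sum_j q(j) * W_minus[j, i]
#   q(i)            = min(lambda_plus(i) / (r(i) + lambda_minus(i)), 1)
q = np.zeros(3)
for _ in range(200):
    lp = Lam + q @ W_plus
    ln = lam + q @ W_minus
    q_new = np.minimum(lp / (r + ln), 1.0)
    if np.max(np.abs(q_new - q)) < 1e-12:
        q = q_new
        break
    q = q_new

print(q)  # steady-state excitation probabilities, each in [0, 1]
```

In the supervised-learning setting surveyed by the paper, the entries of W_plus and W_minus play the role of trainable weights, and gradient-based procedures adjust them so that the q values of designated output neurons approach the desired targets.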
Pages: 457-499
Number of pages: 43
Related papers
50 records in total
  • [21] Statistical pattern recognition problems and the multiple classes random neural network model
    Aguilar, J
    PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS AND APPLICATIONS, 2004, 3287 : 336 - 341
  • [22] A new neural network model for solving random interval linear programming problems
    Arjmandzadeh, Ziba
    Safi, Mohammadreza
    Nazemi, Alireza
    NEURAL NETWORKS, 2017, 89 : 11 - 18
  • [23] Neural Topic Model with Attention for Supervised Learning
    Wang, Xinyi
    Yang, Yi
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108
  • [24] Supervised learning in neural networks without feedback network
    Brandt, RD
    Lin, F
    PROCEEDINGS OF THE 1996 IEEE INTERNATIONAL SYMPOSIUM ON INTELLIGENT CONTROL, 1996, : 86 - 90
  • [25] FORCED FORMATION OF A GEOMETRICAL FEATURE SPACE BY A NEURAL-NETWORK MODEL WITH SUPERVISED LEARNING
    TAKEDA, T
    MIZOE, H
    KISHI, K
    MATSUOKA, T
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 1993, E76A (07) : 1129 - 1132
  • [26] Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic Prediction
    Schurholt, Konstantin
    Kostadinov, Dimche
    Borth, Damian
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [27] Supervised Learning for Convolutional Neural Network with Barlow Twins
    Murugan, Ramyaa
    Mojoo, Jonathan
    Kurita, Takio
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT IV, 2022, 13532 : 484 - 495
  • [28] A dynamic growing neural network for supervised or unsupervised learning
    Tian, Daxin
    Liu, Yanheng
    Wei, Da
    WCICA 2006: SIXTH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION, VOLS 1-12, CONFERENCE PROCEEDINGS, 2006, : 2886 - 2890
  • [29] Modeling of Supervised ADALINE Neural Network Learning Technique
    Pellakuri, Vidyullatha
    Rao, D. Rajeswara
    Murhty, J. V. R.
    PROCEEDINGS OF THE 2016 2ND INTERNATIONAL CONFERENCE ON CONTEMPORARY COMPUTING AND INFORMATICS (IC3I), 2016, : 17 - 22
  • [30] A Neural Network for Semi-supervised Learning on Manifolds
    Genkin, Alexander
    Sengupta, Anirvan M.
    Chklovskii, Dmitri
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: THEORETICAL NEURAL COMPUTATION, PT I, 2019, 11727 : 375 - 386