Solving the linear interval tolerance problem for weight initialization of neural networks

Cited by: 29
Authors
Adam, S. P. [1 ,2 ]
Karras, D. A. [3 ]
Magoulas, G. D. [4 ]
Vrahatis, M. N. [1 ]
Affiliations
[1] Univ Patras, Dept Math, Computat Intelligence Lab, GR-26110 Patras, Greece
[2] Technol Educ Inst Epirus, Dept Comp Engn, Arta 47100, Greece
[3] Technol Educ Inst Sterea Hellas, Dept Automat, Psahna 34400, Evia, Greece
[4] Univ London, Birkbeck Coll, Dept Comp Sci & Informat Syst, London WC1E 7HX, England
Keywords
Neural networks; Weight initialization; Interval analysis; Linear interval tolerance problem; FEEDFORWARD NETWORKS; STATISTICAL TESTS; TRAINING SPEED; HIGH-DIMENSION; BACKPROPAGATION; ALGORITHM; INTELLIGENCE;
DOI
10.1016/j.neunet.2014.02.006
CLC classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Determining good initial conditions for an algorithm used to train a neural network is considered a parameter estimation problem dealing with uncertainty about the initial weights. Interval analysis approaches model uncertainty in parameter estimation problems using intervals and by formulating tolerance problems. Solving a tolerance problem means defining the lower and upper bounds of the intervals so that the system's functionality is guaranteed within predefined limits. The aim of this paper is to show how the problem of determining the initial weight intervals of a neural network can be defined in terms of solving a linear interval tolerance problem. The proposed linear interval tolerance approach copes with uncertainty about the initial weights without any prior knowledge or specific assumptions on the input data, as required by approaches such as fuzzy sets or rough sets. The proposed method is tested on a number of well-known benchmarks for neural networks trained with the back-propagation family of algorithms. Its efficiency is evaluated with regard to standard performance measures, and the results obtained are compared against those of a number of well-known and established initialization methods. These results provide credible evidence that the proposed method outperforms classical weight initialization methods. (C) 2014 Elsevier Ltd. All rights reserved.
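To make the abstract's central notion concrete, the linear interval tolerance problem asks for a point x such that A x lies in a target interval [b] for every matrix A drawn from an interval matrix [A]. The following is a minimal, hypothetical sketch (not the paper's algorithm) of checking this containment with elementary interval arithmetic; in the weight-initialization setting, [A] would model the uncertain inputs, x a candidate weight vector, and [b] a desired range for the neuron's net input:

```python
# Hedged sketch: verifying that a candidate vector x solves the linear
# interval tolerance problem [A] x subset of [b]. All names and the toy
# data below are illustrative assumptions, not taken from the paper.

def interval_dot(row_lo, row_hi, x):
    """Tight enclosure of the dot product of interval row [row_lo, row_hi]
    with a point vector x (each a_i varies independently in its interval)."""
    lo = hi = 0.0
    for a_lo, a_hi, xi in zip(row_lo, row_hi, x):
        products = (a_lo * xi, a_hi * xi)
        lo += min(products)
        hi += max(products)
    return lo, hi

def is_tolerance_solution(A_lo, A_hi, x, b_lo, b_hi):
    """x is a tolerance solution iff A @ x stays inside [b_lo, b_hi]
    for every A in the interval matrix [A_lo, A_hi]."""
    for row_lo, row_hi in zip(A_lo, A_hi):
        lo, hi = interval_dot(row_lo, row_hi, x)
        if lo < b_lo or hi > b_hi:  # enclosure escapes the target interval
            return False
    return True

# Toy example: inputs known only up to intervals; keep each neuron's net
# input inside the sigmoid's approximately linear region, here [-4, 4].
A_lo = [[0.0, -1.0], [0.5, 0.0]]
A_hi = [[1.0,  1.0], [1.5, 1.0]]
print(is_tolerance_solution(A_lo, A_hi, [1.0, 0.5], -4.0, 4.0))  # True
```

In the weight-initialization reading, a weight box whose every point passes this check guarantees that, for all admissible inputs, the pre-activations stay in the region where the sigmoid is not saturated.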
Pages: 17 - 37
Page count: 21
Related papers
50 records in total
  • [21] A New Weight Initialization Method for Sigmoidal Feedforward Artificial Neural Networks
    Sodhi, Sartaj Singh
    Chandra, Pravin
    Tanwar, Sharad
    PROCEEDINGS OF THE 2014 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2014, : 291 - 298
  • [22] Variance-Aware Weight Initialization for Point Convolutional Neural Networks
    Hermosilla, Pedro
    Schelling, Michael
    Ritschel, Tobias
    Ropinski, Timo
    COMPUTER VISION - ECCV 2022, PT XXVIII, 2022, 13688 : 74 - 89
  • [23] Neural networks for solving linear inequality systems
    Cichocki, A
    Bargiela, A
    PARALLEL COMPUTING, 1997, 22 (11) : 1455 - 1475
  • [25] NEURAL NETWORKS - PROBLEM-SOLVING TOOLS
    HOOTMAN, J
    IEEE MICRO, 1989, 9 (06) : 4 - +
  • [26] ON PROBLEM-SOLVING WITH HOPFIELD NEURAL NETWORKS
    KAMGARPARSI, B
    KAMGARPARSI, B
    BIOLOGICAL CYBERNETICS, 1990, 62 (05) : 415 - 423
  • [27] USE NEURAL NETWORKS FOR PROBLEM-SOLVING
    CHITRA, SP
    CHEMICAL ENGINEERING PROGRESS, 1993, 89 (04) : 44 - 52
  • [28] PROBLEM-SOLVING IN ARTIFICIAL NEURAL NETWORKS
    HAMPSON, S
    PROGRESS IN NEUROBIOLOGY, 1994, 42 (02) : 229 - 281
  • [29] PROBLEM-SOLVING USING NEURAL NETWORKS
    HRIPCSAK, G
    M D COMPUTING, 1988, 5 (03): : 25 - 37
  • [30] Methods of Solving Linear Fractional Programming Problem - an interval approach
    Murugan, Yamini
    Thamaraiselvan, Nirmala
    IAENG International Journal of Applied Mathematics, 2024, 54 (03) : 488 - 494