A constructive algorithm for feedforward neural networks with incremental training

Cited by: 42
Authors
Liu, DR [1 ]
Chang, TS
Zhang, Y
Affiliations
[1] Univ Illinois, Dept Elect & Comp Engn, Chicago, IL 60607 USA
[2] Univ Calif Davis, Dept Elect & Comp Engn, Davis, CA 95616 USA
Funding
U.S. National Science Foundation;
Keywords
constructive algorithm; feedforward neural networks; incremental training; linear programming; quadratic programming;
DOI
10.1109/TCSI.2002.805733
CLC classification
TM [electrical engineering]; TN [electronics and communication technology];
Discipline codes
0808; 0809;
Abstract
We develop, in this brief, a new constructive learning algorithm for feedforward neural networks. We employ an incremental training procedure in which training patterns are learned one by one. Our algorithm starts with a single training pattern and a single hidden-layer neuron. During training, when the algorithm gets stuck in a local minimum, we attempt to escape from it by using a weight-scaling technique. Only after several consecutive failed attempts to escape a local minimum do we allow the network to grow by adding a hidden-layer neuron. At this stage, we employ an optimization procedure based on quadratic/linear programming to select initial weights for the newly added neuron. This optimization procedure tends to make the network reach the error tolerance with little or no further training after a hidden-layer neuron is added. Our simulation results indicate that the present constructive algorithm can obtain neural networks very close to minimal structures (with the least possible number of hidden-layer neurons) and that convergence (to a solution) in neural network training can be guaranteed. We tested our algorithm extensively using a widely used benchmark problem, i.e., the parity problem.
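The abstract's training loop can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all class and function names are hypothetical, plain batch gradient descent stands in for whatever optimizer the paper uses, and the paper's quadratic/linear-programming initialization of a new neuron's weights is replaced here by random initialization for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

class ConstructiveNet:
    """One-hidden-layer network that can grow its hidden layer (sketch)."""

    def __init__(self, n_in):
        self.W = rng.normal(scale=0.5, size=(1, n_in))  # hidden weights, start with 1 neuron
        self.b = np.zeros(1)                            # hidden biases
        self.v = rng.normal(scale=0.5, size=1)          # output weights
        self.c = 0.0                                    # output bias

    def forward(self, X):
        H = np.tanh(X @ self.W.T + self.b)              # hidden activations
        return H @ self.v + self.c, H

    def mse(self, X, y):
        out, _ = self.forward(X)
        return float(np.mean((out - y) ** 2))

    def train(self, X, y, lr=0.05, steps=500):
        # plain batch gradient descent on squared error (stand-in optimizer)
        for _ in range(steps):
            out, H = self.forward(X)
            e = (out - y) / len(y)
            dH = 2.0 * np.outer(e, self.v) * (1 - H ** 2)
            self.v -= lr * 2.0 * (H.T @ e)
            self.c -= lr * 2.0 * e.sum()
            self.W -= lr * (dH.T @ X)
            self.b -= lr * dH.sum(axis=0)
        return self.mse(X, y)

    def scale_weights(self, factor=1.2):
        # weight-scaling step used to try to escape a local minimum
        self.W *= factor
        self.b *= factor

    def add_neuron(self, n_in):
        # grow the hidden layer by one neuron; random init stands in for
        # the paper's QP/LP-based initial-weight selection
        self.W = np.vstack([self.W, rng.normal(scale=0.5, size=(1, n_in))])
        self.b = np.append(self.b, 0.0)
        self.v = np.append(self.v, rng.normal(scale=0.5))

def incremental_train(X, y, tol=0.05, max_escapes=3, max_neurons=8):
    """Learn patterns one by one; scale weights when stuck; grow only
    after several consecutive failed escape attempts."""
    net = ConstructiveNet(X.shape[1])
    for k in range(1, len(X) + 1):          # incremental: patterns added one at a time
        Xk, yk = X[:k], y[:k]
        err = net.train(Xk, yk)
        failures = 0
        while err > tol and net.W.shape[0] < max_neurons:
            net.scale_weights()             # escape attempt
            err = net.train(Xk, yk)
            failures += 1
            if err > tol and failures >= max_escapes:
                net.add_neuron(X.shape[1])  # grow only after repeated failures
                failures = 0
                err = net.train(Xk, yk)
    return net

# 2-bit parity (XOR), the benchmark family used in the paper
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
net = incremental_train(X, y)
print("hidden neurons:", net.W.shape[0], "final MSE:", net.mse(X, y))
```

The key design point mirrored from the abstract is the ordering of remedies: cheap weight scaling is tried first, and the network is allowed to grow only when scaling repeatedly fails, which biases the procedure toward small (near-minimal) hidden layers.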
Pages: 1876 - 1879
Page count: 4
Related papers
50 records
  • [21] A modified backpropagation training algorithm for feedforward neural networks
    Kathirvalavakumar, T
    Thangavel, P
    NEURAL PROCESSING LETTERS, 2006, 23 (02) : 111 - 119
  • [22] THE UD RLS ALGORITHM FOR TRAINING FEEDFORWARD NEURAL NETWORKS
    Bilski, Jaroslaw
    INTERNATIONAL JOURNAL OF APPLIED MATHEMATICS AND COMPUTER SCIENCE, 2005, 15 (01) : 115 - 123
  • [23] A parallel algorithm for gradient training of feedforward neural networks
    Hanzalek, Z
    PARALLEL COMPUTING, 1998, 24 (5-6) : 823 - 839
  • [25] An Optimal PID Control Algorithm for Training Feedforward Neural Networks
    Jing, Xingjian
    Cheng, Li
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2013, 60 (06) : 2273 - 2283
  • [26] Feedforward neural networks training with optimal bounded ellipsoid algorithm
    Rubio Avila, Jose De Jesus
    Ramirez, Andres Ferreyra
    Aviles-Cruz, Carlos
    PROCEEDINGS OF THE 9TH WSEAS INTERNATIONAL CONFERENCE ON NEURAL NETWORKS (NN' 08): ADVANCED TOPICS ON NEURAL NETWORKS, 2008, : 174 - 180
  • [27] The Upstart Algorithm: A Method for Constructing and Training Feedforward Neural Networks
    Frean, Marcus
    NEURAL COMPUTATION, 1990, 2 (02) : 198 - 209
  • [28] An improved genetic algorithm for training layered feedforward neural networks
    Liu Ping
    Cheng Yi-yu
JOURNAL OF ZHEJIANG UNIVERSITY-SCIENCE A, 2000, 1 (3): 322 - 326
  • [29] A New Variant of the GQR Algorithm for Feedforward Neural Networks Training
    Bilski, Jaroslaw
    Kowalczyk, Bartosz
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING (ICAISC 2021), PT I, 2021, 12854 : 41 - 53
  • [30] Step acceleration based training algorithm for feedforward neural networks
    Li, YL
    Wang, KQ
    Zhang, D
    16TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOL II, PROCEEDINGS, 2002, : 84 - 87