SELECTIVE TRAINING OF FEEDFORWARD ARTIFICIAL NEURAL NETWORKS USING MATRIX PERTURBATION-THEORY

Cited by: 9
Authors
HUNT, SD [1 ]
DELLER, JR [1 ]
Affiliation
[1] MICHIGAN STATE UNIV,DEPT ELECT ENGN,E LANSING,MI 48824
Funding
US National Science Foundation
Keywords
SELECTIVE TRAINING; EFFICIENT TRAINING; FEEDFORWARD NETWORKS; ARTIFICIAL NEURAL NETWORKS; PERTURBATION THEORY; NONLINEAR OPTIMIZATION; SUPERVISED LEARNING; NEW TRAINING ALGORITHM; PATTERN RECOGNITION;
DOI
10.1016/0893-6080(95)00030-4
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Many training algorithms for feedforward neural networks suffer from slow convergence. A new training method is presented which exploits results from matrix perturbation theory to achieve a significant reduction in training time. The theory is used to assess the effect of a particular training pattern on the weight estimates prior to its inclusion in any iteration. Data which do not significantly change the weights are not used in that iteration, obviating the computational expense of updating.
Pages: 931 - 944
Page count: 14