ONLINE LEARNING WITH MINIMAL DEGRADATION IN FEEDFORWARD NETWORKS

Cited by: 15
Authors
DEANGULO, VR [1]
TORRAS, C [1]
Affiliation
[1] INST CIBERNET, E-08028 BARCELONA, SPAIN
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1995, Vol. 6, No. 3
DOI
10.1109/72.377971
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Dealing with nonstationary processes requires quick adaptation while at the same time avoiding catastrophic forgetting. A neural learning technique that satisfies these requirements, without sacrificing the benefits of distributed representations, is presented. It relies on a formalization of the problem as the minimization of the error over the previously learned input-output (i-o) patterns, subject to the constraint of perfect encoding of the new pattern. This constrained optimization problem is then transformed into an unconstrained one with hidden-unit activations as variables. The new formulation naturally leads to an algorithm for solving the problem, which we call learning with minimal degradation (LMD). Some experimental comparisons of the performance of LMD with backpropagation are provided which, besides showing the advantages of using LMD, reveal the dependence of forgetting on the learning rate in backpropagation. We also explain why overtraining affects forgetting and fault tolerance, which are seen as related problems.
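The constrained formulation the abstract describes can be sketched as follows; the notation (weights W, patterns (x_p, y_p), network map f) is assumed here for illustration and is not taken from the paper itself:

```latex
% Learning with minimal degradation (LMD): sketch of the formulation.
% W: network weights; (x_p, y_p): previously learned i-o patterns;
% (x_new, y_new): the newly presented pattern; f(x; W): network output.
\begin{aligned}
\min_{W} \quad & \sum_{p \,\in\, \text{old patterns}} \bigl\lVert f(x_p; W) - y_p \bigr\rVert^2 \\
\text{s.t.} \quad & f(x_{\text{new}}; W) = y_{\text{new}}
\end{aligned}
```

According to the abstract, this constrained problem is then recast as an unconstrained one in which the hidden-unit activations produced for the new pattern act as the free variables.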
Pages: 657 - 668 (12 pages)
Related Papers (items 41 - 50 of 50)
  • [41] On adaptive learning rate that guarantees convergence in feedforward networks
    Behera, Laxmidhar
    Kumar, Swagat
    Patnaik, Awhan
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2006, 17 (05): : 1116 - 1125
  • [42] Sensitivity analysis for selective learning by feedforward neural networks
    Engelbrecht, Andries P.
    2001, IOS Press (45)
  • [43] Feedforward neural networks initialization based on discriminant learning
    Chumachenko, Kateryna
    Iosifidis, Alexandros
    Gabbouj, Moncef
    NEURAL NETWORKS, 2022, 146 : 220 - 229
  • [44] A learning scheme for hardware implementation of feedforward neural networks
    Choi, MR
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL I AND II, 1999, : 508 - 512
  • [45] A fast learning algorithm for training feedforward neural networks
    Goel, Ashok Kumar
    Saxena, Suresh C.
    Bhanot, Surekha
    INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE, 2006, 37 (10) : 709 - 722
  • [46] An efficient learning algorithm for binary feedforward neural networks
    Zeng X.
    Zhou J.
    Zheng X.
    Zhong S.
    Harbin Institute of Technology, 2016, 48: 148 - 154
  • [47] On learning feedforward neural networks with noise injection into inputs
    Seghouane, AK
    Moudden, Y
    Fleury, G
    NEURAL NETWORKS FOR SIGNAL PROCESSING XII, PROCEEDINGS, 2002, : 149 - 158
  • [48] A fast learning strategy for multilayer feedforward neural networks
    Chen, Huawei
    Zhong, Hualan
    Yuan, Haiying
    Jin, Fan
    WCICA 2006: SIXTH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION, VOLS 1-12, CONFERENCE PROCEEDINGS, 2006, : 3019 - +
  • [49] Sensitivity analysis for selective learning by feedforward neural networks
    Engelbrecht, AP
    FUNDAMENTA INFORMATICAE, 2001, 46 (03) : 219 - 252
  • [50] Implementation of Kolmogorov learning algorithm for feedforward neural networks
    Neruda, R
    Stedry, A
    Drkosová, J
    COMPUTATIONAL SCIENCE -- ICCS 2001, PROCEEDINGS PT 2, 2001, 2074 : 986 - 995