ONLINE LEARNING WITH MINIMAL DEGRADATION IN FEEDFORWARD NETWORKS

Cited by: 15
Authors
DEANGULO, VR [1 ]
TORRAS, C [1 ]
Affiliation
[1] INST CIBERNET,E-08028 BARCELONA,SPAIN
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1995 / Vol. 6 / No. 3
Keywords
DOI
10.1109/72.377971
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Dealing with nonstationary processes requires quick adaptation while at the same time avoiding catastrophic forgetting. A neural learning technique that satisfies these requirements, without sacrificing the benefits of distributed representations, is presented. It relies on a formalization of the problem as the minimization of the error over the previously learned input-output (i-o) patterns, subject to the constraint of perfect encoding of the new pattern. This constrained optimization problem is then transformed into an unconstrained one with hidden-unit activations as variables. This new formulation naturally leads to an algorithm for solving the problem, which we call learning with minimal degradation (LMD). Some experimental comparisons of the performance of LMD with backpropagation are provided which, besides showing the advantages of using LMD, reveal the dependence of forgetting on the learning rate in backpropagation. We also explain why overtraining affects forgetting and fault tolerance, which are seen as related problems.
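As a rough illustration of the formulation summarized in the abstract, the constrained problem can be sketched as below; the notation (W for the weights being adjusted, f for the network mapping, {(x^p, t^p)} for the previously learned i-o patterns, and (x_new, t_new) for the new pattern) is chosen here for exposition and is not taken from the paper.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A hedged sketch of the constrained formulation described in the abstract;
% all symbols are illustrative and do not come from the paper itself.
\begin{align*}
  \min_{W}\quad & \sum_{p} \bigl\lVert t^{p} - f(W, x^{p}) \bigr\rVert^{2}
      && \text{(error over the previously learned i-o patterns)} \\
  \text{subject to}\quad & f(W, x_{\mathrm{new}}) = t_{\mathrm{new}}
      && \text{(perfect encoding of the new pattern)}
\end{align*}
\end{document}

According to the abstract, the constraint is then eliminated by taking the hidden-unit activations elicited by the new pattern as the free variables, which turns the problem into an unconstrained minimization that the LMD algorithm solves.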
Pages: 657 - 668
Page count: 12
Related papers
50 records in total
  • [21] A new learning algorithm for feedforward neural networks
    Liu, DR
    Chang, TS
    Zhang, Y
    PROCEEDINGS OF THE 2001 IEEE INTERNATIONAL SYMPOSIUM ON INTELLIGENT CONTROL (ISIC'01), 2001, : 39 - 44
  • [22] A fast learning method for feedforward neural networks
    Wang, Shitong
    Chung, Fu-Lai
    Wang, Jun
    Wu, Jun
    NEUROCOMPUTING, 2015, 149 : 295 - 307
  • [23] A Study of Learning Issues in Feedforward Neural Networks
    Teso-Fz-Betono, Adrian
    Zulueta, Ekaitz
    Cabezas-Olivenza, Mireya
    Teso-Fz-Betono, Daniel
    Fernandez-Gamiz, Unai
    MATHEMATICS, 2022, 10 (17)
  • [24] Diffusion learning algorithms for feedforward neural networks
    Skorohod B.A.
    Cybernetics and Systems Analysis, 2013, 49 (03) : 334 - 346
  • [25] LEARNING IN FEEDFORWARD NEURAL NETWORKS BY IMPROVING THE PERFORMANCE
    GORDON, MB
    PERETTO, P
    RODRIGUEZGIRONES, M
    PHYSICA A, 1992, 185 (1-4): : 402 - 410
  • [26] Two Criteria for Learning in Feedforward Neural Networks
    Peng, Hanchuan
    Gan, Qiang
    Wei, Yu
    Journal of Southeast University (English Edition), 1997, (02): 46 - 49
  • [27] LEARNING FROM EXAMPLES IN FEEDFORWARD BOOLEAN NETWORKS
    SHUKLA, P
    SINHA, TK
    PHYSICAL REVIEW E, 1993, 47 (04): : 2962 - 2965
  • [28] STEREOPSIS BY CONSTRAINT LEARNING FEEDFORWARD NEURAL NETWORKS
    KHOTANZAD, A
    BOKIL, A
    LEE, YW
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1993, 4 (02): : 332 - 342
  • [29] ON SOURCE NETWORKS WITH MINIMAL BREAKDOWN DEGRADATION
    WITSENHAUSEN, HS
    BELL SYSTEM TECHNICAL JOURNAL, 1980, 59 (06): : 1083 - 1087
  • [30] An online learning algorithm with dimension selection using minimal hyper basis function networks
    Nishida, Kyosuke
    Yamauchi, Koichiro
    Omori, Takashi
    Systems and Computers in Japan, 2006, 37 (11) : 11 - 21