Hybrid no-propagation learning for multilayer neural networks

Cited by: 29
Authors
Adhikari, Shyam Prasad [1 ,2 ]
Yang, Changju [1 ]
Slot, Krzysztof [3 ]
Strzelecki, Michal [4 ]
Kim, Hyongsuk [1 ,2 ]
Affiliations
[1] Chonbuk Natl Univ, Div Elect Engn, Jeonju 561756, South Korea
[2] Chonbuk Natl Univ, IRRC, Jeonju 56754896, Jeonbuk, South Korea
[3] Lodz Univ Technol, Inst Appl Comp Sci, Stefanowskiego 18-22, PL-90924 Lodz, Poland
[4] Tech Univ Lodz, Inst Elect, Wolczanska 211-215, PL-90924 Lodz, Poland
Funding
National Research Foundation of Singapore;
Keywords
No-propagation; Backpropagation; Delta rule; Random weight change; Multilayer neural network; On-chip learning; PERTURBATION; ARCHITECTURE;
DOI
10.1016/j.neucom.2018.08.034
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A hybrid learning algorithm suitable for hardware implementation of multilayer neural networks is proposed. Although backpropagation is a powerful learning method for multilayer neural networks, its hardware implementation is difficult due to the complexity of the neural synapses and of the operations involved in error backpropagation. We propose a learning algorithm whose performance is comparable to that of backpropagation but which is easier to implement in hardware for on-chip learning of multilayer neural networks. In the proposed algorithm, a multilayer neural network is trained with a hybrid of the gradient-based delta rule and a stochastic algorithm called Random Weight Change. The parameters of the output layer are learned using the delta rule, whereas the inner-layer parameters are learned using Random Weight Change, so the overall multilayer network is trained without the need for error backpropagation. Experimental results are presented showing that the proposed hybrid learning rule performs better than either of its constituent learning algorithms and comparably to backpropagation on the benchmark MNIST dataset. A hardware architecture illustrating the ease of implementing the proposed learning rule in analog hardware vis-a-vis the backpropagation algorithm is also presented. (c) 2018 Published by Elsevier B.V.
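The division of labor the abstract describes — delta rule on the output layer, Random Weight Change (RWC) on the hidden layer, no error backpropagation through the network — can be sketched in NumPy. This is a minimal illustration under assumed details (sigmoid activations, mean-squared error, no bias terms, and an RWC variant that redraws the random perturbation whenever the error fails to improve); the paper's exact update schedule and hyperparameters may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W_h, W_o):
    h = sigmoid(x @ W_h)   # hidden activations
    y = sigmoid(h @ W_o)   # network output
    return h, y

def mse(y, t):
    return float(np.mean((y - t) ** 2))

def hybrid_step(x, t, W_h, W_o, dW_h, lr=0.5, delta=0.01):
    """One training step: delta rule on W_o, Random Weight Change on W_h."""
    h, y = forward(x, W_h, W_o)
    err_before = mse(y, t)

    # Delta rule on the output layer: uses only the local gradient at the
    # output, so no error signal is propagated back through the network.
    grad_o = h.T @ ((y - t) * y * (1.0 - y))
    W_o = W_o - lr * grad_o

    # Random Weight Change on the hidden layer: keep applying the previous
    # random perturbation while the error decreases; otherwise draw fresh
    # random +/-delta perturbations for every hidden weight.
    _, y_new = forward(x, W_h, W_o)
    if mse(y_new, t) >= err_before:
        dW_h = delta * rng.choice([-1.0, 1.0], size=W_h.shape)
    W_h = W_h + dW_h
    return W_h, W_o, dW_h

# Toy usage on XOR-shaped data (illustrative only; convergence not claimed).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W_h = rng.normal(0.0, 1.0, (2, 4))
W_o = rng.normal(0.0, 1.0, (4, 1))
dW_h = 0.01 * rng.choice([-1.0, 1.0], size=W_h.shape)
for _ in range(2000):
    W_h, W_o, dW_h = hybrid_step(X, T, W_h, W_o, dW_h)
```

The key property the sketch makes visible is that the hidden-layer update never needs the chained error derivatives of backpropagation, only a scalar comparison of network error before and after a perturbation — which is what makes the rule attractive for analog on-chip learning.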
Pages: 28 - 35
Page count: 8
Related Papers
50 records in total
  • [1] Hybrid learning of mapping and its Jacobian in multilayer neural networks
    Lee, JW
    Oh, JH
    NEURAL COMPUTATION, 1997, 9 (05) : 937 - 958
  • [2] One-sided Dynamic Undersampling No-Propagation Neural Networks for imbalance problem
    Fan, Qi
    Wang, Zhe
    Gao, Daqi
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2016, 53 : 62 - 73
  • [3] Hybrid Back-Propagation/Genetic Algorithm for multilayer feedforward neural networks
    Lu, C
    Shi, BX
    2000 5TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING PROCEEDINGS, VOLS I-III, 2000, : 571 - 574
  • [4] Learning with regularizers in multilayer neural networks
    Saad, D
    Rattray, M
    PHYSICAL REVIEW E, 1998, 57 (02) : 2170 - 2176
  • [5] Optimal learning in multilayer neural networks
    Winther, O
    Lautrup, B
    Zhang, JB
    PHYSICAL REVIEW E, 1997, 55 (01) : 836 - 844
  • [7] LEARNING ALGORITHMS FOR MULTILAYER NEURAL NETWORKS
    AVEDYAN, ED
    AUTOMATION AND REMOTE CONTROL, 1995, 56 (04) : 541 - 551
  • [8] Quantizability and learning complexity in multilayer neural networks
    Fu, LM
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART C-APPLICATIONS AND REVIEWS, 1998, 28 (02): : 295 - 300
  • [9] An algorithm of supervised learning for multilayer neural networks
    Tang, Z
    Wang, XG
    Tamura, H
    Ishii, M
    NEURAL COMPUTATION, 2003, 15 (05) : 1125 - 1142
  • [10] Effect of batch learning in multilayer neural networks
    Fukumizu, K
    ICONIP'98: THE FIFTH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING JOINTLY WITH JNNS'98: THE 1998 ANNUAL CONFERENCE OF THE JAPANESE NEURAL NETWORK SOCIETY - PROCEEDINGS, VOLS 1-3, 1998, : 67 - 70