An efficient parallel algorithm for LISSOM neural network

Cited by: 6
Authors
Chang, LC [1 ]
Chang, FJ [1 ]
Institution
[1] Natl Taiwan Univ, Dept Bioenvironm Syst Engn, Taipei 10617, Taiwan
Keywords
laterally interconnected synergetically self-organizing map; parallel neural networks; parallel implementation; balancing load;
DOI
10.1016/S0167-8191(02)00166-7
Chinese Library Classification
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
We present a parallel algorithm for the laterally interconnected synergetically self-organizing map (LISSOM) neural network, a self-organizing map with lateral excitatory and inhibitory connections, to enhance its computational efficiency. A general strategy for balancing the workload of LISSOM networks of different sizes on parallel computers is described. The parallel LISSOM algorithm is implemented on an IBM SP2 and a PC cluster. The results demonstrate the efficiency of the parallel algorithm in processing large networks. Parallel implementations for different input dimensions in networks of the same size (i.e., 20 x 20) show that the speedup remains at a high level. We demonstrate that LISSOM can be applied to complex problems through the parallel algorithm devised in this study. (C) 2002 Elsevier Science B.V. All rights reserved.
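The sketch below is not the authors' implementation; it is a minimal NumPy illustration of the kind of computation the abstract describes: one LISSOM settling pass with lateral excitatory and inhibitory contributions, where the map's rows are split into equal blocks to stand in for a static workload split across processors. The helper names (sigmoid, settle_block), the dense 4-D lateral-weight tensors, and the parameter values gamma_e / gamma_i are illustrative assumptions.

# Minimal sketch of one LISSOM settling pass with a row-block partition
# of the map; all names and parameter values are illustrative, not from
# the paper.
import numpy as np

def sigmoid(x, lo=0.1, hi=0.65):
    # Piecewise-linear squashing function commonly used in LISSOM models.
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def settle_block(rows, afferent, act, exc_w, inh_w, gamma_e=0.9, gamma_i=0.9):
    # Recompute the activity of one block of map rows from the fixed
    # afferent response and the current activity of the whole map.
    new = np.empty((len(rows), act.shape[1]))
    for bi, i in enumerate(rows):
        for j in range(act.shape[1]):
            excite = np.sum(exc_w[i, j] * act)    # lateral excitation
            inhibit = np.sum(inh_w[i, j] * act)   # lateral inhibition
            new[bi, j] = sigmoid(afferent[i, j]
                                 + gamma_e * excite - gamma_i * inhibit)
    return new

# Toy run: a 20 x 20 map (the size used in the paper's speedup tests)
# divided into 4 equal row blocks standing in for 4 processors.
N, P = 20, 4
rng = np.random.default_rng(0)
afferent = rng.random((N, N))
act = sigmoid(afferent)
exc_w = rng.random((N, N, N, N)) / N**2
inh_w = rng.random((N, N, N, N)) / N**2

blocks = np.array_split(np.arange(N), P)     # equal-size workload split
for step in range(5):                        # settling iterations
    parts = [settle_block(b, afferent, act, exc_w, inh_w) for b in blocks]
    act = np.vstack(parts)                   # gather block results

In an actual parallel run (e.g., on the IBM SP2 or PC cluster mentioned above), each block would reside on its own processor and the final stacking step would correspond to a communication phase that exchanges block activities after every settling iteration.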
Pages: 1611-1633
Page count: 23
Related papers
50 items in total
  • [21] The research on Optimization neural network structure parallel genetic algorithm
    Yu, Mingyan
    Yan, Ying
    Liu, Haiyuan
    Zhi, HeCai
    ADVANCES IN MANUFACTURING TECHNOLOGY, PTS 1-4, 2012, 220-223 : 2564 - +
  • [22] Gabor wavelet neural network algorithm based on parallel structure
    College of Electronics and Communication Engineering, South China University of Technology, Guangzhou 510640, China
Unknown
    Guangxue Jingmi Gongcheng, 2006, 2 (247-250):
  • [23] Parallel Implementation of the Givens Rotations in the Neural Network Learning Algorithm
    Bilski, Jaroslaw
    Kowalczyk, Bartosz
    Zurada, Jacek M.
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, ICAISC 2017, PT I, 2017, 10245 : 14 - 24
  • [24] Parallel learning evolutionary algorithm based on neural network ensemble
    Xiao, Sha
    Yu, Dong
    Li, Yibin
    2006 IEEE INTERNATIONAL CONFERENCE ON INFORMATION ACQUISITION, VOLS 1 AND 2, CONFERENCE PROCEEDINGS, 2006, : 70 - 74
  • [25] Parallel Batch Pattern Training Algorithm for Deep Neural Network
    Turchenko, Volodymyr
    Golovko, Vladimir
    2014 INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING & SIMULATION (HPCS), 2014, : 697 - 702
  • [26] Efficient learning algorithm for associative memory neural network
    Fudan Univ, Shanghai, China
    Zidonghua Xuebao, 5 (721-727):
  • [27] Efficient Parallel Algorithm for Optimal DAG Structure Search on Parallel Computer with Torus Network
    Honda, Hirokazu
    Tamada, Yoshinori
    Suda, Reiji
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2016, 2016, 10048 : 483 - 502
  • [28] EPMC: efficient parallel memory compression in deep neural network training
    Zailong Chen
    Shenghong Yang
    Chubo Liu
    Yikun Hu
    Kenli Li
    Keqin Li
    Neural Computing and Applications, 2022, 34 : 757 - 769
  • [29] EPMC: efficient parallel memory compression in deep neural network training
    Chen, Zailong
    Yang, Shenghong
    Liu, Chubo
    Hu, Yikun
    Li, Kenli
    Li, Keqin
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (01): : 757 - 769
  • [30] An incremental algorithm for parallel training of the size and the weights in a feedforward neural network
    Hlavácková-Schindler, K
    Fischer, MM
    NEURAL PROCESSING LETTERS, 2000, 11 (02) : 131 - 138