Reduction of training computation by network optimization of Integration Neural Network approximator

Cited by: 1
Authors
Iwata, Yoshiharu [1 ]
Wakamatsu, Hidefumi [1 ]
Affiliations
[1] Osaka Univ, Suita, Osaka 5650871, Japan
DOI
10.1109/SII55687.2023.10039273
CLC Classification
TP39 [Computer Applications];
Subject Classification
081203; 0835
Abstract
In constructing machine learning approximators for simulations such as the finite element method, there is a trade-off between reducing training data generation time and improving approximation accuracy. To resolve this, we previously proposed the Hybrid Neural Network and the Integration Neural Network as simulation approximators that remain accurate even with a small amount of training data. The method combines a simple-perceptron approximator that mimics multiple regression analysis and is built from deductive knowledge (a linear approximator) with a neural network approximator built from inductive knowledge (a nonlinear approximator), a combination motivated by Weierstrass's approximation theorem. In this study, applying the approximation theorem one step further, we investigate how simplifying and restructuring the network of the Integration Neural Network reduces the computational complexity of training. We find that an approximator of almost the same accuracy can be constructed while the number of weight updates during training is reduced to about 5% of the original.
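The combination described in the abstract, a linear regression-style branch summed with a nonlinear neural branch, can be sketched as below. This is a minimal illustration under assumed details: PyTorch is not named in the abstract, and the class name, layer sizes, and activation are hypothetical choices, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class HybridApproximator(nn.Module):
    """Hypothetical sketch of the hybrid idea: a linear branch that mimics
    multiple regression (deductive knowledge) plus a small MLP branch
    (inductive knowledge), with the two outputs summed."""

    def __init__(self, n_inputs: int, n_outputs: int, hidden: int = 32):
        super().__init__()
        # Linear approximator: one affine map with no activation,
        # equivalent to a multiple regression model.
        self.linear = nn.Linear(n_inputs, n_outputs)
        # Nonlinear approximator: a small feed-forward network that
        # learns what the linear branch cannot capture.
        self.mlp = nn.Sequential(
            nn.Linear(n_inputs, hidden),
            nn.Tanh(),
            nn.Linear(hidden, n_outputs),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Integrated prediction: linear estimate plus nonlinear correction.
        return self.linear(x) + self.mlp(x)

# Toy usage: 4 simulation input parameters, 1 scalar output.
model = HybridApproximator(n_inputs=4, n_outputs=1)
y_hat = model(torch.randn(8, 4))  # shape (8, 1)
```

Pre-fitting or freezing the linear branch would leave the MLP only a residual to learn, which is one plausible route to fewer weight updates; the paper's actual restructuring of the Integration Neural Network is specified in the full text.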
Pages: 5
Related Papers
(entries [21]-[30] of 50)
  • [21] Weight training for performance optimization in fuzzy neural network
    Chang, HC
    Juang, YT
    KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 1, PROCEEDINGS, 2005, 3681 : 596 - 603
  • [22] Optimization of the local search in the training for SAMANN neural network
    Medvedev, Viktor
    Dzemyda, Gintautas
    JOURNAL OF GLOBAL OPTIMIZATION, 2006, 35 (04) : 607 - 623
  • [24] Optimization of memory access for the convolutional neural network training
    Wang J.
    Hao Z.
    Li H.
    Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2020, 47 (02) : 98 - 107
  • [25] Neural network surgery: Combining training with topology optimization
    Schiessler, Elisabeth J.
    Aydin, Roland C.
    Linka, Kevin
    Cyron, Christian J.
    NEURAL NETWORKS, 2021, 144 : 384 - 393
  • [26] Hybrid Algorithm for the Optimization of Training Convolutional Neural Network
    Albeahdili, Hayder M.
    Han, Tony
    Islam, Naz E.
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2015, 6 (10) : 79 - 85
  • [28] DYNAMICS AND NEURAL NETWORK COMPUTATION
    HOPFIELD, JJ
    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, 1990 : 633 - 644
  • [29] Decoupled neural network training with re-computation and weight prediction
    Peng, Jiawei
    Xu, Yicheng
    Lin, Zhiping
    Weng, Zhenyu
    Yang, Zishuo
    Zhuang, Huiping
    PLOS ONE, 2023, 18 (02)
  • [30] Combining SOM and evolutionary computation algorithms for RBF neural network training
    Chen, Zhen-Yao
    Kuo, R. J.
    JOURNAL OF INTELLIGENT MANUFACTURING, 2019, 30 (03) : 1137 - 1154