Reduction of training computation by network optimization of Integration Neural Network approximator

Cited by: 1
Authors:
Iwata, Yoshiharu [1]
Wakamatsu, Hidefumi [1]
Affiliations:
[1] Osaka Univ, Suita, Osaka 5650871, Japan
Keywords:
DOI: 10.1109/SII55687.2023.10039273
CLC number: TP39 [Computer applications]
Subject classification codes: 081203; 0835
Abstract:
When constructing machine-learning approximators for simulations such as the finite element method, there is a trade-off between reducing the time needed to generate training data and improving approximation accuracy. To address this problem, we previously proposed the Hybrid Neural Network and the Integration Neural Network as simulation approximators that achieve high accuracy even with a small amount of training data. The method combines a simple-perceptron approximator that mimics multiple regression analysis and is built on deductive knowledge (a linear approximator) with a neural network approximator built on inductive knowledge (a nonlinear approximator). This combination is motivated by the Weierstrass approximation theorem. In this study, applying that theorem one step further, we investigate reducing the computational cost of training by simplifying and improving the network structure of the Integration Neural Network. As a result, we find that approximators of almost the same accuracy can be constructed while the number of weight updates during training is reduced to about 5% of the original.
Pages: 5
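
The paper itself does not include code; the following is a minimal sketch of the hybrid linear-plus-nonlinear approximator idea described in the abstract, assuming a PyTorch implementation. The class name HybridApproximator, the layer sizes, the synthetic data, and the training loop are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (assumed, not the authors' code): a hybrid approximator that
# sums a linear term, mimicking multiple regression (deductive knowledge), and
# a small MLP that learns the nonlinear residual (inductive knowledge).
import torch
import torch.nn as nn

class HybridApproximator(nn.Module):
    """Linear part + small nonlinear part, combined additively at the output."""
    def __init__(self, in_dim: int, hidden: int = 16):
        super().__init__()
        self.linear = nn.Linear(in_dim, 1)      # linear (regression-like) approximator
        self.mlp = nn.Sequential(               # nonlinear correction term
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x) + self.mlp(x)

if __name__ == "__main__":
    # Illustrative training on a small synthetic data set (hypothetical problem).
    torch.manual_seed(0)
    x = torch.rand(64, 3)
    y = 2.0 * x.sum(dim=1, keepdim=True) + 0.3 * torch.sin(6.0 * x[:, :1])
    model = HybridApproximator(in_dim=3)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()
    for _ in range(500):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"final MSE: {loss.item():.4f}")
```

The additive split lets the linear term capture the dominant trend from few samples, while the smaller nonlinear part only has to model the residual, which is one way a reduced network structure could keep accuracy with fewer weight updates.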