Reduction of training computation by network optimization of Integration Neural Network approximator

Cited by: 1
Authors
Iwata, Yoshiharu [1 ]
Wakamatsu, Hidefumi [1 ]
Affiliations
[1] Osaka Univ, Suita, Osaka 5650871, Japan
DOI: 10.1109/SII55687.2023.10039273
Chinese Library Classification: TP39 [Computer Applications]
Discipline Codes: 081203; 0835
Abstract
In constructing approximators for simulations such as the finite element method using machine learning, there is a conflict between reducing training data generation time and improving approximation accuracy. To solve this problem, we proposed the Hybrid Neural Network and the Integration Neural Network as approximators that achieve high accuracy for simulations even with a small amount of training data. This method combines a simple-perceptron approximator that mimics multiple regression analysis, built from deductive knowledge (a linear approximator), with a neural network approximator built from inductive knowledge (a nonlinear approximator). The combination is motivated by the Weierstrass approximation theorem. In this study, by applying the approximation theorem one step further, we investigated reducing the training computational complexity by simplifying and improving the network structure of the Integration Neural Network. As a result, we found that approximators with almost the same accuracy can be constructed while reducing the number of weight updates in the training process to approximately 5% of the original.
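The hybrid idea described in the abstract can be sketched as follows: fit a linear model (the deductive, multiple-regression-like part) by least squares, then train a small neural network (the inductive part) on the remaining residual. This is a minimal illustration under assumed architecture details (hidden-layer size, activation, optimizer); the paper's exact Integration Neural Network structure is not reproduced here.

```python
# Hybrid approximator sketch: linear least-squares part + small MLP on the
# residual. All hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy target: a linear trend plus a mild nonlinearity.
X = rng.uniform(-1, 1, size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * np.sin(3.0 * X[:, 0])

# --- Linear (deductive) part: ordinary least squares with a bias column ---
A = np.hstack([X, np.ones((X.shape[0], 1))])
w_lin, *_ = np.linalg.lstsq(A, y, rcond=None)
y_lin = A @ w_lin

# --- Nonlinear (inductive) part: one-hidden-layer MLP fit to the residual ---
r = y - y_lin
H = 16                                    # hidden units (assumption)
W1 = rng.normal(0.0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):                     # plain full-batch gradient descent
    Z = np.tanh(X @ W1 + b1)              # hidden activations
    pred = (Z @ W2 + b2).ravel()
    err = pred - r                        # residual-fitting error
    gW2 = Z.T @ err[:, None] / len(r)
    gb2 = err.mean(keepdims=True)
    dZ = (err[:, None] @ W2.T) * (1.0 - Z ** 2)   # backprop through tanh
    gW1 = X.T @ dZ / len(r)
    gb1 = dZ.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Combined prediction: linear part plus learned nonlinear correction.
y_hat = y_lin + (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
mse_linear = float(np.mean((y - y_lin) ** 2))
mse_hybrid = float(np.mean((y - y_hat) ** 2))
print(mse_linear, mse_hybrid)
```

Because the linear part already captures the dominant trend, the network only has to learn the small residual, which is the intuition behind training with fewer data and fewer weight updates.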
Pages: 5
Related Papers
50 records
  • [1] Global optimization for neural network training
    Shang, Y
    Wah, BW
    COMPUTER, 1996, 29 (03) : 45 - +
  • [2] Confirmation of driving principle by weight analysis of Integration Neural Network and extension of deductive approximator
    Iwata, Yoshiharu
    Wakamatsu, Hidefumi
    JOURNAL OF ADVANCED MECHANICAL DESIGN SYSTEMS AND MANUFACTURING, 2024, 18 (07):
  • [3] Neural network approximator with novel learning scheme for design optimization with variable complexity data
    Kodiyalam, S
    Gurumoorthy, R
    AIAA JOURNAL, 1997, 35 (04) : 736 - 739
  • [4] A Compact Neural Network for Fused Lasso Signal Approximator
    Mohammadi, Majid
    IEEE TRANSACTIONS ON CYBERNETICS, 2021, 51 (08) : 4327 - 4336
  • [5] Neural network with unbounded activation functions is universal approximator
    Sonoda, Sho
    Murata, Noboru
    APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS, 2017, 43 (02) : 233 - 268
  • [6] Neural Network Training Schemes for Antenna Optimization
    Linh Ho Manh
    Grimaccia, Francesco
    Mussetta, Marco
    Zich, Riccardo E.
    2014 IEEE ANTENNAS AND PROPAGATION SOCIETY INTERNATIONAL SYMPOSIUM (APSURSI), 2014, : 1948 - 1949
  • [7] Neural network training and stochastic global optimization
    Jordanov, I
    ICONIP'02: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING: COMPUTATIONAL INTELLIGENCE FOR THE E-AGE, 2002, : 488 - 492
  • [8] Instance Selection Optimization for Neural Network Training
    Kordos, Miroslaw
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, ICAISC 2016, 2016, 9692 : 610 - 620
  • [9] A competitive functional link artificial neural network as a universal approximator
    Lotfi, Ehsan
    Rezaee, Abbas Ali
    SOFT COMPUTING, 2018, 22 (14) : 4613 - 4625
  • [10] OPTIMIZATION METHOD FOR INTEGRATION OF CONVOLUTIONAL AND RECURRENT NEURAL NETWORK
    Kassylkassova, K.
    Yessengaliyeva, Zh.
    Urazboev, G.
    Kassylkassova, A.
    EURASIAN JOURNAL OF MATHEMATICAL AND COMPUTER APPLICATIONS, 2023, 11 (02): : 40 - 56