TAKING LAWS OUT OF TRAINED NEURAL NETWORKS

Cited by: 0
Authors
Majewski, Jaroslaw [1 ]
Wojtyna, Ryszard [1 ]
Affiliation
[1] Univ Technol & Life Sci, Fac Telecommun & Elect Engn, PL-85796 Bydgoszcz, Poland
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronic and communication technology];
Discipline codes
0808; 0809;
Abstract
In this paper, the problem of discovering numeric laws governing a trained neural network is considered. We propose new multilayer perceptrons implementing fractional rational functions, i.e. functions expressed as a ratio of two polynomials of any order, with a given number of components in the numerator and the denominator. Our networks can be utilized not only to implement such functions; they can also be used to extract knowledge embedded in the trained network, and this extraction is performed during the training process. The extracted laws, underlying the network operation, are expressed in symbolic fractional-rational-function form, and our networks provide information about the function parameters. The extraction ability results from applying proper activation functions in the different perceptron layers, i.e. functions of the exp(.), ln(.), (.)^(-1) and/or (.)^2 types. Both theoretical considerations and simulation results are presented to illustrate the properties of our networks.
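To make the mechanism described in the abstract concrete, below is a minimal NumPy sketch, not the authors' implementation: the toy law, the parameter names and the crude finite-difference training loop are all illustrative assumptions. It shows how ln(.), exp(.) and (.)^(-1) activation layers let a perceptron realize a ratio of two polynomials whose trained weights can be read off directly as the coefficients and exponents of a symbolic law.

```python
# Minimal sketch, assuming a two-monomial numerator and denominator.
# Each monomial c * x**e is computed as exp(ln(c) + e*ln(x)), so the
# trainable weights are exactly the symbolic law's parameters.
import numpy as np

rng = np.random.default_rng(0)

def rational_net(x, params):
    """y = P(x)/Q(x); each branch is a sum of monomials built from ln/exp layers."""
    ln_x = np.log(x)                                                      # ln(.) layer
    num = np.exp(params["lc_num"] + params["e_num"] * ln_x).sum(axis=1)   # exp(.) layer -> P(x)
    den = np.exp(params["lc_den"] + params["e_den"] * ln_x).sum(axis=1)   # exp(.) layer -> Q(x)
    return num * den ** -1                                                # (.)^(-1) layer

def loss(x, y, params):
    return np.mean((rational_net(x, params) - y) ** 2)

# Toy target law (an assumption for illustration): y = (2*x^3 + x) / (x^2 + 4), x > 0.
x = rng.uniform(0.5, 3.0, size=(128, 1))
y = ((2 * x**3 + x) / (x**2 + 4)).ravel()

# lc_* are log-coefficients, e_* are exponents; two monomials per branch.
params = {k: rng.normal(size=(1, 2)) for k in ("lc_num", "e_num", "lc_den", "e_den")}

lr, eps = 5e-3, 1e-5
for _ in range(5000):                      # crude finite-difference gradient descent,
    base = loss(x, y, params)              # standing in for a proper optimizer
    grads = {}
    for k, w in params.items():
        g = np.zeros_like(w)
        for idx in np.ndindex(w.shape):
            w[idx] += eps
            g[idx] = (loss(x, y, params) - base) / eps
            w[idx] -= eps
        grads[k] = g
    for k in params:
        params[k] -= lr * grads[k]

# "Taking the law out" of the trained network: the weights are the symbolic parameters.
print("numerator   coefficients:", np.exp(params["lc_num"]), "exponents:", params["e_num"])
print("denominator coefficients:", np.exp(params["lc_den"]), "exponents:", params["e_den"])
```

In practice a gradient-based optimizer and the authors' specific layer arrangement would replace the finite-difference loop; the sketch only illustrates that, with exp/ln/reciprocal activations, the trained weights themselves carry the symbolic fractional-rational law.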
Pages: 21 - 24
Page count: 4
Related papers (50 in total)
  • [21] Optimizing over an Ensemble of Trained Neural Networks
    Wang, Keliang
    Lozano, Leonardo
    Cardonha, Carlos
    Bergman, David
    INFORMS JOURNAL ON COMPUTING, 2023, 35 (03) : 652 - 674
  • [22] Extracting rules from trained neural networks
    Tsukimoto, H
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2000, 11 (02): 377 - 389
  • [23] Interpreting Adversarially Trained Convolutional Neural Networks
    Zhang, Tianyuan
    Zhu, Zhanxing
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [24] Robustness Guarantees for Adversarially Trained Neural Networks
    Mianjy, Poorya
    Arora, Raman
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [25] Confronting Domain Shift in Trained Neural Networks
    Martinez, Carianne
    Najera-Flores, David A.
    Brink, Adam R.
    Quinn, D. Dane
    Chatzi, Eleni
    Forrest, Stephanie
    NEURIPS 2020 WORKSHOP ON PRE-REGISTRATION IN MACHINE LEARNING, VOL 148, 2020, 148 : 176 - 192
  • [26] NEURAL NETWORKS OPTIMALLY TRAINED WITH NOISY DATA
    WONG, KYM
    SHERRINGTON, D
    PHYSICAL REVIEW E, 1993, 47 (06): 4465 - 4482
  • [27] Dismantling Complex Networks by a Neural Model Trained from Tiny Networks
    Zhang, Jiazheng
    Wang, Bang
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 2559 - 2568
  • [28] EXTRACTING RULES FROM TRAINED RBF NEURAL NETWORKS
    Grabusts, Peter
    ENVIRONMENT, TECHNOLOGY, RESOURCES, PROCEEDINGS, 2005, : 33 - 39
  • [29] Dendrite Morphological Neural Networks Trained by Differential Evolution
    Arce, Fernando
    Zamora, Erik
    Sossa, Humberto
    Barron, Ricardo
    PROCEEDINGS OF 2016 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2016,
  • [30] Optimally adapted multistate neural networks trained with noise
    Erichsen, R
    Theumann, WK
    PHYSICAL REVIEW E, 1999, 59 (01) : 947 - 955