ERANN: An Algorithm to Extract Symbolic Rules from Trained Artificial Neural Networks

Cited by: 1
Authors
Kamruzzaman, S. M. [1 ]
Hamid, Md. Abdul [1 ]
Sarkar, A. M. Jehad [1 ]
Affiliations
[1] Hankuk Univ Foreign Studies, Dept Elect Engn, Yongin 449791, Kyonggi Do, South Korea
Keywords
Backpropagation; Clustering algorithm; Constructive algorithm; Continuous activation function; Pruning algorithm; Rule extraction algorithm; Symbolic rules;
DOI
10.4103/0377-2063.96181
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic and communication technology];
Discipline classification codes
0808; 0809
Abstract
This paper presents ERANN, an algorithm for extracting symbolic rules from trained artificial neural networks (ANNs). In many applications it is desirable to extract knowledge from ANNs so that users can better understand how the networks solve their problems. Although ANNs usually achieve high classification accuracy, their outputs can be difficult to interpret because the knowledge they embody is distributed over the activation functions and the connection weights. This problem can be addressed by extracting rules from trained ANNs, and the algorithm proposed in this paper does exactly that. A standard three-layer feedforward ANN with four-phase training forms the basis of the proposed algorithm. Extensive experiments on a set of benchmark classification problems, including breast cancer, iris, diabetes, wine, season, golf playing, and lenses classification, demonstrate the applicability of the proposed method. The extracted rules are comparable with those of other methods in terms of the number of rules, the average number of conditions per rule, and rule accuracy. The proposed method achieved accuracies of 96.28%, 98.67%, 76.56%, 91.01%, 100%, 100%, and 100% on the above problems, respectively, which are among the best results reported in related previous studies.
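The abstract does not reproduce the four training phases of ERANN, so the following is only a minimal, hypothetical Python sketch of the general idea of extracting symbolic rules from a trained ANN. It uses a generic pedagogical (surrogate decision-tree) approach rather than the paper's decompositional method, and the scikit-learn components (MLPClassifier, DecisionTreeClassifier, export_text) and the iris data are illustrative assumptions, not taken from the paper.

    # Illustrative sketch only: a generic pedagogical (black-box) rule-extraction
    # pipeline, NOT the ERANN algorithm itself. A small feedforward ANN is trained
    # on the iris data, its predictions relabel the training inputs, and a shallow
    # decision tree fitted to those labels yields readable if-then rules.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Three-layer feedforward ANN (one hidden layer), as in the paper's setup.
    ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    ann.fit(X_tr, y_tr)

    # Relabel the training inputs with the ANN's own predictions, then fit a
    # shallow surrogate tree; its root-to-leaf paths are the extracted rules.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_tr, ann.predict(X_tr))

    print("ANN test accuracy: %.3f" % ann.score(X_te, y_te))
    print("Rule-set fidelity: %.3f" % np.mean(surrogate.predict(X_te) == ann.predict(X_te)))
    print(export_text(surrogate, feature_names=load_iris().feature_names))

The printed fidelity measures how closely the extracted rule set reproduces the ANN's decisions, analogous to the rule-accuracy comparisons reported in the abstract, though the numbers here are purely illustrative.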
Pages: 138-154
Page count: 17