Learning Accurate and Interpretable Decision Rule Sets from Neural Networks

Cited by: 0
Authors:
Qiao, Litao [1 ]
Wang, Weijia [1 ]
Lin, Bill [1 ]
Affiliations:
[1] University of California San Diego, Department of Electrical and Computer Engineering, La Jolla, CA 92093, USA
Funding: U.S. National Science Foundation (NSF)
Keywords: none listed
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
This paper proposes a new paradigm for learning a set of independent logical rules in disjunctive normal form as an interpretable classification model. We frame the problem of learning an interpretable decision rule set as training a neural network with a specific yet very simple two-layer architecture. After training, each neuron in the first layer maps directly to an interpretable if-then rule, and the output neuron in the second layer maps directly to a disjunction of the first-layer rules, forming the decision rule set. Our representation of the neurons in this first rules layer lets us encode both the positive and the negative association of features in a decision rule. State-of-the-art neural network training approaches can therefore be leveraged to learn highly accurate classification models. Moreover, we propose a sparsity-based regularization approach to balance classification accuracy against the simplicity of the derived rules. Our experimental results show that our method generates more accurate decision rule sets than other state-of-the-art rule-learning algorithms, with better accuracy-simplicity trade-offs. Furthermore, compared with uninterpretable black-box machine learning approaches such as random forests and full-precision deep neural networks, our approach readily finds interpretable decision rule sets with comparable predictive performance.
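As a rough illustration of the two-layer architecture described in the abstract, the sketch below builds a rule network in PyTorch: each first-layer unit selects literals (features and their negations) to form a soft conjunction, the output unit takes a soft disjunction over the rule units, and a penalty on the selection weights encourages short rules. This is a hypothetical reconstruction, not the authors' exact formulation; the class name SoftRuleSet, the product-based AND/OR relaxation, and the penalty weight 1e-3 are illustrative assumptions.

# Minimal, hypothetical sketch of a two-layer "rules" network (not the
# paper's exact method): soft AND/OR via products over [0,1] selection
# weights, plus a sparsity penalty to keep rules short.
import torch
import torch.nn as nn

class SoftRuleSet(nn.Module):
    def __init__(self, num_features: int, num_rules: int):
        super().__init__()
        # One selection logit per (rule, literal); literals are each feature
        # and its negation, so a rule can use positive or negative associations.
        self.select = nn.Parameter(torch.randn(num_rules, 2 * num_features) - 3.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features) with binarized values in {0, 1}.
        literals = torch.cat([x, 1.0 - x], dim=1)          # (batch, 2d)
        w = torch.sigmoid(self.select)                      # (rules, 2d) in [0, 1]
        # Soft AND: a rule stays near 1 only if every selected literal is true;
        # 1 - w*(1 - literal) equals 1 when a literal is unselected or true.
        rule_act = torch.prod(
            1.0 - w.unsqueeze(0) * (1.0 - literals.unsqueeze(1)), dim=2
        )                                                   # (batch, rules)
        # Soft OR over rules: the rule set fires if any rule fires.
        return 1.0 - torch.prod(1.0 - rule_act, dim=1)      # (batch,)

    def sparsity_penalty(self) -> torch.Tensor:
        # Fewer selected literals -> simpler, more interpretable rules.
        return torch.sigmoid(self.select).sum()

# Usage sketch: binary classification on binarized features (synthetic data).
model = SoftRuleSet(num_features=20, num_rules=8)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
x = torch.randint(0, 2, (64, 20)).float()
y = torch.randint(0, 2, (64,)).float()
for _ in range(100):
    p = model(x).clamp(1e-6, 1 - 1e-6)
    loss = nn.functional.binary_cross_entropy(p, y) + 1e-3 * model.sparsity_penalty()
    opt.zero_grad()
    loss.backward()
    opt.step()
# After training, the literals whose selection weights are close to 1
# spell out each if-then rule; the rule set is their disjunction.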
Pages: 4303-4311
Number of pages: 9
Related papers (50 records in total):
  • [1] Yang, Fan; He, Kai; Yang, Linxiao; Du, Hongxia; Yang, Jingbang; Yang, Bo; Sun, Liang. Learning Interpretable Decision Rule Sets: A Submodular Optimization Approach. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.
  • [2] Qiao, Litao; Wang, Weijia; Lin, Bill. Alternative Formulations of Decision Rule Learning from Neural Networks. Machine Learning and Knowledge Extraction, 2023, 5(3): 937-956.
  • [3] Dierckx, Lucile; Veroneze, Rosana; Nijssen, Siegfried. RL-Net: Interpretable Rule Learning with Neural Networks. Neural-Symbolic Learning and Reasoning (NeSy 2023), 2023.
  • [4] Dierckx, Lucile; Veroneze, Rosana; Nijssen, Siegfried. RL-Net: Interpretable Rule Learning with Neural Networks. Advances in Knowledge Discovery and Data Mining (PAKDD 2023), Part I, 2023, 13935: 95-107.
  • [5] Balcan, Maria-Florina; Sharma, Dravyansh. Learning Accurate and Interpretable Decision Trees. Uncertainty in Artificial Intelligence, 2024, 244: 288-307.
  • [7] Wang, Tong; Rudin, Cynthia; Doshi-Velez, Finale; Liu, Yimin; Klampfl, Erica; MacNeille, Perry. A Bayesian Framework for Learning Rule Sets for Interpretable Classification. Journal of Machine Learning Research, 2017, 18: 1-37.
  • [8] Wettayaprasit, W.; Lursinsap, C.; Chu, C. H. H. Rule Extraction from Neural Networks Using Fuzzy Sets. ICONIP'02: Proceedings of the 9th International Conference on Neural Information Processing, 2002: 2582-2586.
  • [9] Hu, Haoji; He, Xiangnan. Sets2Sets: Learning from Sequential Sets with Neural Networks. KDD'19: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019: 1491-1499.
  • [10] Kim, Eunji. Interpretable and Accurate Convolutional Neural Networks for Human Activity Recognition. IEEE Transactions on Industrial Informatics, 2020, 16(11): 7190-7198.