Model Interpretability through the Lens of Computational Complexity

Cited by: 0
Authors
Barcelo, Pablo [1 ,4 ]
Monet, Mikael [2 ]
Perez, Jorge [3 ,4 ]
Subercaseaux, Bernardo [3 ,4 ]
Affiliations
[1] PUC Chile, Inst Math & Computat Engn, Santiago, Chile
[2] Inria Lille, Lille, France
[3] Univ Chile, Dept Comp Sci, Santiago, Chile
[4] Fdn Res Data, Millennium Inst, Santiago, Chile
Keywords
KNAPSACK;
DOI
Not available
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In spite of several claims stating that some models are more interpretable than others — e.g., "linear models are more interpretable than deep neural networks" — we still lack a principled notion of interpretability with which to formally compare different classes of models. We take a step towards such a notion by studying whether folklore interpretability claims have a correlate in terms of computational complexity theory. We focus on local post-hoc explainability queries that, intuitively, attempt to answer why individual inputs are classified in a certain way by a given model. In a nutshell, we say that a class C1 of models is more interpretable than another class C2 if the computational complexity of answering post-hoc queries for models in C2 is higher than for those in C1. We prove that this notion provides a good theoretical counterpart to current beliefs on the interpretability of models; in particular, we show that under our definition, and assuming standard complexity-theoretic assumptions (such as P ≠ NP), both linear and tree-based models are strictly more interpretable than neural networks. Our complexity analysis, however, does not provide a clear-cut difference between linear and tree-based models, as we obtain different results depending on the particular post-hoc explanations considered. Finally, by applying a finer complexity analysis based on parameterized complexity, we are able to prove a theoretical result suggesting that shallow neural networks are more interpretable than deeper ones.
Pages: 12