Explanation sets: A general framework for machine learning explainability

Cited by: 13
Authors
Fernandez, Ruben R. [1 ]
de Diego, Isaac Martin [1 ]
Moguerza, Javier M. [1 ]
Herrera, Francisco [2 ,3 ]
Affiliations
[1] Rey Juan Carlos Univ, Data Sci Lab DSLAB, C Tulipan S-N, Mostoles 28933, Spain
[2] Univ Granada, Andalusian Res Inst Data Sci & Computat Intelligence, Dept Comp Sci & Artificial Intelligence, Granada 18071, Spain
[3] King Abdulaziz Univ, Fac Comp & Informat Technol, Jeddah 21589, Saudi Arabia
Keywords
Explainable machine learning; Explanation sets; Counterfactuals; Semifactuals; Example-based explanation;
DOI
10.1016/j.ins.2022.10.084
Chinese Library Classification (CLC) number
TP [Automation Technology, Computer Technology];
Discipline classification number
0812;
Abstract
Explainable Machine Learning (ML) is an emerging field of Artificial Intelligence that has gained popularity in the last decade. It focuses on explaining ML models and their predictions, enabling people to understand the rationale behind them. Counterfactuals and semifactuals are two Explainable ML techniques that explain model predictions using other observations. Both are based on a comparison between the observation to be explained and another observation: in counterfactuals the other observation receives a different prediction, and in semifactuals it receives the same one. Both techniques have been studied in the Social Sciences and Explainable ML communities, and they have different use cases and properties. In this paper, the Explanation Set framework, an approach that unifies counterfactuals and semifactuals, is introduced. Explanation Sets are example-based explanations defined in a neighborhood where most observations satisfy a grouping measure. The neighborhood allows restrictions to be defined and combined, while the grouping measure determines whether the explanations are counterfactuals (dissimilarity) or semifactuals (similarity). Besides providing a unified framework, the major strength of the proposal is that it extends these explanations to other tasks, such as regression, by using an appropriate grouping measure. The proposal is validated on a regression task and a classification task using several neighborhoods and grouping measures. (c) 2022 Elsevier Inc. All rights reserved.
Pages: 464-481
Number of pages: 18
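The abstract describes Explanation Sets in terms of two components: a neighborhood that restricts which observations may serve as explanations, and a grouping measure that scores how strongly a candidate groups with (semifactual) or apart from (counterfactual) the prediction being explained. The sketch below is not the authors' implementation; it is a minimal Python illustration of that idea under assumed choices (an L2-ball neighborhood, NumPy arrays, and hypothetical grouping functions), showing how the same selection routine yields counterfactual-style or semifactual-style examples, for classification or regression, simply by swapping the grouping measure.

```python
import numpy as np

def example_based_explanations(x, X, predict, grouping, radius, k=3):
    """Pick up to k observations from X that (a) lie inside an L2 ball of the
    given radius around x (the neighborhood restriction) and (b) best satisfy
    the grouping measure applied to the model's predictions."""
    dists = np.linalg.norm(X - x, axis=1)   # distance-based neighborhood
    preds = predict(X)                      # model outputs for all candidates
    target = predict(x[np.newaxis, :])[0]   # prediction to be explained
    scores = grouping(preds, target)        # grouping measure per candidate
    idx = np.where(dists <= radius)[0]      # keep only in-neighborhood candidates
    if idx.size == 0:
        return np.empty((0, X.shape[1]))
    # Highest grouping score first; ties broken by proximity to x.
    order = np.lexsort((dists[idx], -scores[idx]))
    return X[idx[order[:k]]]

# Hypothetical grouping measures:
# counterfactual-style for regression -- reward predictions far from the target
regression_gap = lambda preds, target: np.abs(preds - target)
# semifactual-style for classification -- reward candidates with the same label
same_class = lambda preds, target: (preds == target).astype(float)
```

For example, with a fitted scikit-learn regressor `model`, calling `example_based_explanations(x, X_train, model.predict, regression_gap, radius=1.0)` would return nearby training points whose predictions differ most from the prediction for `x`, i.e. counterfactual-style explanations for a regression task; swapping in a similarity-based grouping measure would instead return semifactual-style examples.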