Evaluation of Human-Understandability of Global Model Explanations Using Decision Tree

Citations: 0
Authors
Sivaprasad, Adarsa [1 ]
Reiter, Ehud [1 ]
Tintarev, Nava [2 ]
Oren, Nir [1 ]
Affiliations
[1] Univ Aberdeen, Dept Comp Sci, Aberdeen, Scotland
[2] Maastricht Univ, Maastricht, Netherlands
Funding
EU Horizon 2020
Keywords
Global Explanation; End-user Understandability; Health; Informatics; RISK;
DOI
10.1007/978-3-031-50396-2_3
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In explainable artificial intelligence (XAI) research, the predominant focus has been on interpreting models for experts and practitioners. Model-agnostic and local explanation approaches are deemed interpretable and sufficient in many applications. However, in domains like healthcare, where end users are patients without AI or domain expertise, there is an urgent need for model explanations that are more comprehensible and instil trust in the model's operations. We hypothesise that generating model explanations that are narrative, patient-specific and global (covering the model as a whole) would enable better understandability and support decision-making. We test this using a decision tree model to generate both local and global explanations for patients identified as having a high risk of coronary heart disease. These explanations are presented to non-expert users. We find a strong individual preference for a specific type of explanation: the majority of participants prefer global explanations, while a smaller group prefers local explanations. A task-based evaluation of these participants' mental models provides valuable feedback for enhancing narrative global explanations. This, in turn, guides the design of health informatics systems that are both trustworthy and actionable.
Pages: 43-65 (23 pages)