One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency

Cited by: 0
Authors: Kacper Sokol, Peter Flach
Affiliations: [1] University of Bristol, Department of Computer Science
Source: Künstliche Intelligenz, 2020, 34(2)
Keywords: Interactive; Personalised; Explanations; Counterfactuals
DOI: none available
Abstract
The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in the industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system’s operators and the individuals whose case is being decided. While a variety of interpretability and explainability methods are available, none of them is a panacea that can satisfy all the diverse expectations and competing objectives that might be required by the parties involved. We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems, using the example of contrastive explanations—a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements and extract additional explanations by asking follow-up “What if?” questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e., the user interface comprising various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats. Furthermore, we discuss the challenges of mirroring the explainee’s mental model, which is the main building block of intelligible human–machine interactions.
We also deliberate on the risks of allowing the explainee to freely manipulate the explanations and thereby extract information about the underlying predictive model, which might be leveraged by malicious actors to steal or game the model. Finally, building an end-to-end interactive explainability system is a challenging engineering task; unless the main goal is its deployment, we recommend “Wizard of Oz” studies as a proxy for testing and evaluating standalone interactive explainability algorithms.
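The interactive counterfactual loop sketched in the abstract—letting the explainee choose which conditions may vary and asking "What if?"—can be illustrated with a toy example. This is a minimal sketch under stated assumptions: the classifier, feature names, thresholds and the minimal-change grid search are all hypothetical stand-ins, not the method or data from the paper.

```python
from itertools import product

# Toy "black-box" classifier: approve a loan if a simple linear score
# clears a threshold. (Purely illustrative; not the paper's model.)
def predict(applicant):
    score = 0.3 * applicant["income"] + 0.7 * applicant["savings"]
    return "approved" if score >= 50 else "rejected"

def counterfactual(applicant, features, step=5, max_steps=20):
    """Search for a minimal change over user-selected `features` that flips
    the prediction. The explainee personalises the explanation by choosing
    which conditional statements (features) are allowed to vary."""
    original = predict(applicant)
    # Try increasingly large perturbations, smallest first.
    for radius in range(1, max_steps + 1):
        for deltas in product(range(-radius, radius + 1), repeat=len(features)):
            if max(abs(d) for d in deltas) != radius:
                continue  # only the frontier: smaller radii were already tried
            candidate = dict(applicant)
            for feature, delta in zip(features, deltas):
                candidate[feature] += delta * step
            if predict(candidate) != original:
                return candidate
    return None  # no counterfactual found within the search budget

applicant = {"income": 80, "savings": 20}
print(predict(applicant))  # rejected: 0.3*80 + 0.7*20 = 38 < 50

# "What if?" follow-up: the explainee allows only `savings` to change.
cf = counterfactual(applicant, ["savings"])
print(cf, predict(cf))  # {'income': 80, 'savings': 40} approved
```

Restricting the search to user-selected features is what makes the explanation interactive and personalised: a different choice of mutable features yields a different, equally valid counterfactual.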
Pages: 235–250
Page count: 15