Explainability through uncertainty: Trustworthy decision-making with neural networks

Cited by: 3
Authors
Thuy, Arthur [1 ,2 ]
Benoit, Dries F. [1 ,2 ]
Affiliations
[1] Univ Ghent, Fac Econ & Business Adm, Res Grp Data Analyt, Tweekerkenstr 2, B-9000 Ghent, Belgium
[2] Flanders Make, CVAMO Core Lab, Gaston Geenslaan 8, B-3001 Leuven, Belgium
Keywords
Decision support systems; Explainable artificial intelligence; Monte Carlo Dropout; Deep Ensembles; Distribution shift; Prediction; Analytics
DOI
10.1016/j.ejor.2023.09.009
CLC classification number
C93 [Management Science]
Discipline classification codes
12; 1201; 1202; 120202
Abstract
Uncertainty is a key feature of any machine learning (ML) model and is particularly important in neural networks, which tend to be overconfident. This overconfidence is worrying under distribution shifts, where model performance silently degrades as the data distribution diverges from the training distribution. Uncertainty estimation offers a solution to overconfident models, communicating when the output should (not) be trusted. Although methods for uncertainty estimation have been developed, they have not been explicitly linked to the field of explainable artificial intelligence (XAI). Furthermore, the operations research literature ignores the actionability component of uncertainty estimation and does not consider distribution shifts. This work proposes a general uncertainty framework, with contributions being threefold: (i) uncertainty estimation in ML models is positioned as an XAI technique, giving local and model-specific explanations; (ii) classification with rejection is used to reduce misclassifications by bringing a human expert into the loop for uncertain observations; (iii) the framework is applied to a case study on neural networks in educational data mining subject to distribution shifts. Uncertainty as XAI improves the model's trustworthiness in downstream decision-making tasks, giving rise to more actionable and robust machine learning systems in operations research.
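Example
The abstract combines two techniques that are easy to sketch in code: Monte Carlo Dropout for uncertainty estimation and classification with rejection for deferring uncertain observations to a human expert. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation; the network architecture, the entropy threshold, and the dummy data are assumptions for illustration. Deep Ensembles, the other method named in the keywords, would instead average predictions over several independently trained networks.

    # Minimal sketch: MC Dropout uncertainty + classification with rejection.
    # All names, sizes, and the threshold below are hypothetical.
    import torch
    import torch.nn as nn

    class MLP(nn.Module):
        """Small classifier with dropout so MC Dropout can be used at test time."""
        def __init__(self, n_in: int, n_classes: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_in, 64), nn.ReLU(), nn.Dropout(p=0.5),
                nn.Linear(64, n_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    @torch.no_grad()
    def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
        """Average softmax outputs over stochastic forward passes (dropout kept on)."""
        model.train()  # keeps dropout sampling active; this toy model has no batch norm
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )                                       # (n_samples, batch, n_classes)
        mean_probs = probs.mean(dim=0)
        # Predictive entropy of the averaged distribution = total uncertainty score.
        entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
        return mean_probs, entropy

    model = MLP(n_in=20, n_classes=3)           # would be trained in practice
    x = torch.randn(8, 20)                      # dummy batch of 8 observations
    mean_probs, entropy = mc_dropout_predict(model, x)
    threshold = 0.8                             # hypothetical, tuned on validation data
    reject = entropy > threshold                # True -> defer to a human expert
    preds = mean_probs.argmax(dim=-1)
    print(f"automated predictions: {preds[~reject].tolist()}, deferred: {int(reject.sum())}")

Keeping dropout active at test time turns a single network into an approximate Bayesian model average; the rejection threshold trades off automation rate against misclassification risk and would be tuned on validation data.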
Pages: 330-340
Page count: 11
Related papers
50 records in total
  • [1] Trustworthy Hybrid Decision-Making
    Mantri, Ipsit
    Sasikumar, Nevasini
    MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2023, PT II, 2025, 2134 : 239 - 244
  • [2] Variants of uncertainty in decision-making and their neural correlates
    Volz, KG
    Schubotz, RI
    von Cramon, DY
    BRAIN RESEARCH BULLETIN, 2005, 67 (05) : 403 - 412
  • [3] DECISION-MAKING USING NEURAL NETWORKS
    KARAYIANNIS, NB
    VENETSANOPOULOS, AN
    NEUROCOMPUTING, 1994, 6 (03) : 363 - 374
  • [4] PNNUAD: Perception Neural Networks Uncertainty Aware Decision-Making for Autonomous Vehicle
    Liu, Jiaxin
    Wang, Hong
    Peng, Liang
    Cao, Zhong
    Yang, Diange
    Li, Jun
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (12) : 24355 - 24368
  • [5] Nonlinear decision-making with enzymatic neural networks
    Okumura, S.
    Gines, G.
    Lobato-Dauzier, N.
    Baccouche, A.
    Deteix, R.
    Fujii, T.
    Rondelez, Y.
    Genot, A. J.
NATURE, 2022, 610 (7932) : 496 - 501
  • [6] DECISION-MAKING AND UNCERTAINTY
    DAVIS, DG
    SAUNDERS, ES
    JOURNAL OF ACADEMIC LIBRARIANSHIP, 1992, 17 (06): : 356 - 357
  • [7] UNCERTAINTY IN DECISION-MAKING
    GOLAY, MW
    TECHNOLOGY REVIEW, 1980, 82 (07): : 37 - 37
  • [8] Distributed decision-making by a team of neural networks
    Mukhopadhyay, S
    PROCEEDINGS OF THE 37TH IEEE CONFERENCE ON DECISION AND CONTROL, VOLS 1-4, 1998, : 1082 - 1083
  • [9] On Trustworthy Decision-Making Process of Human Drivers From the View of Perceptual Uncertainty Reduction
    Wang, Huanjie
    Liu, Haibin
    Wang, Wenshuo
    Sun, Lijun
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (02) : 1625 - 1636