HEX: Human-in-the-loop explainability via deep reinforcement learning

Cited: 0
Authors
Lash, Michael T. [1 ]
Affiliation
[1] Univ Kansas, Sch Business, Analyt Informat & Operat Area, 1654 Naismith Dr, Lawrence, KS 66045 USA
Keywords
Explainability; Interpretability; Human-in-the-loop; Deep reinforcement learning; Machine learning; Behavioral machine learning; Decision support; EXPLANATIONS; ALGORITHMS; MODELS;
DOI
10.1016/j.dss.2024.114304
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The use of machine learning (ML) models in decision-making contexts, particularly in high-stakes decision-making, is fraught with issues and peril, since a person - not a machine - must ultimately be held accountable for the consequences of decisions made using such systems. Machine learning explainability (MLX) promises to provide decision-makers with prediction-specific rationale, assuring them that model-elicited predictions are made for the right reasons and are thus reliable. Few works, however, explicitly consider this key human-in-the-loop (HITL) component. In this work we propose HEX, a human-in-the-loop deep reinforcement learning approach to MLX. HEX incorporates 0-distrust projection to synthesize decider-specific explainers that produce explanations strictly in terms of a decider's preferred explanatory features, using any classification model. Our formulation explicitly considers the decision boundary of the ML model in question via our proposed explanatory point mode of explanation, thus ensuring that explanations are specific to that model. We empirically evaluate HEX against competing methods, finding that it is competitive with the state of the art and outperforms other methods in human-in-the-loop scenarios. We also conduct a randomized, controlled laboratory experiment using actual explanations elicited from both HEX and competing methods, causally establishing that our method increases deciders' trust and tendency to rely on trusted features.
Pages: 12
Related Papers
50 records total
  • [1] Human-in-the-loop Reinforcement Learning
    Liang, Huanghuang
    Yang, Lu
    Cheng, Hong
    Tu, Wenzhe
    Xu, Mengjie
    2017 CHINESE AUTOMATION CONGRESS (CAC), 2017, : 4511 - 4518
  • [2] ASHA: Assistive Teleoperation via Human-in-the-Loop Reinforcement Learning
    Chen, Sean
    Gao, Jensen
    Reddy, Siddharth
    Berseth, Glen
    Dragan, Anca D.
    Levine, Sergey
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 7505 - 7512
  • [3] End-to-end grasping policies for human-in-the-loop robots via deep reinforcement learning
    Sharif, Mohammadreza
    Erdogmus, Deniz
    Amato, Christopher
    Padir, Taskin
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 2768 - 2774
  • [4] Personalization of Hearing Aid Compression by Human-in-the-Loop Deep Reinforcement Learning
    Alamdari, Nasim
    Lobarinas, Edward
    Kehtarnavaz, Nasser
    IEEE ACCESS, 2020, 8 : 203503 - 203515
  • [5] Thermal comfort management leveraging deep reinforcement learning and human-in-the-loop
    Cicirelli, Franco
    Guerrieri, Antonio
    Mastroianni, Carlo
    Spezzano, Giandomenico
    Vinci, Andrea
    PROCEEDINGS OF THE 2020 IEEE INTERNATIONAL CONFERENCE ON HUMAN-MACHINE SYSTEMS (ICHMS), 2020, : 160 - 165
  • [6] Human-in-the-loop Reinforcement Learning for Emotion Recognition
    Tan, Swee Yang
    Yau, Kok-Lim Alvin
    2024 IEEE 14TH SYMPOSIUM ON COMPUTER APPLICATIONS & INDUSTRIAL ELECTRONICS, ISCAIE 2024, 2024, : 21 - 26
  • [7] Deep Reinforcement Active Learning for Human-In-The-Loop Person Re-Identification
    Liu, Zimo
    Wang, Jingya
    Gong, Shaogang
    Lu, Huchuan
    Tao, Dacheng
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 6121 - 6130
  • [8] Explainability in deep reinforcement learning
    Heuillet, Alexandre
    Couthouis, Fabien
    Diaz-Rodriguez, Natalia
    KNOWLEDGE-BASED SYSTEMS, 2021, 214 (214)
  • [9] Value Driven Representation for Human-in-the-Loop Reinforcement Learning
    Keramati, Ramtin
    Brunskill, Emma
    ACM UMAP '19: PROCEEDINGS OF THE 27TH ACM CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION, 2019, : 176 - 180
  • [10] Reinforcement Learning Requires Human-in-the-Loop Framing and Approaches
    Taylor, Matthew E.
    HHAI 2023: AUGMENTING HUMAN INTELLECT, 2023, 368 : 351 - 360