HEX: Human-in-the-loop explainability via deep reinforcement learning

Cited by: 0
Authors
Lash, Michael T. [1 ]
Affiliation
[1] Univ Kansas, Sch Business, Analyt Informat & Operat Area, 1654 Naismith Dr, Lawrence, KS 66045 USA
Keywords
Explainability; Interpretability; Human-in-the-loop; Deep reinforcement learning; Machine learning; Behavioral machine learning; Decision support; EXPLANATIONS; ALGORITHMS; MODELS;
DOI
10.1016/j.dss.2024.114304
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The use of machine learning (ML) models in decision-making contexts, particularly in high-stakes decision-making, is fraught with issues and peril, since a person - not a machine - must ultimately be held accountable for the consequences of decisions made using such systems. Machine learning explainability (MLX) promises to provide decision-makers with prediction-specific rationale, assuring them that model-elicited predictions are made for the right reasons and are thus reliable. Few works, however, explicitly consider this key human-in-the-loop (HITL) component. In this work we propose HEX, a human-in-the-loop deep reinforcement learning approach to MLX. HEX incorporates 0-distrust projection to synthesize decider-specific explainers that produce explanations strictly in terms of a decider's preferred explanatory features, and it does so for any classification model. Our formulation explicitly considers the decision boundary of the ML model in question through our proposed explanatory point mode of explanation, thus ensuring that explanations are specific to that model. We empirically evaluate HEX against competing methods, finding that it is competitive with the state of the art and outperforms the other methods in human-in-the-loop scenarios. We also conduct a randomized, controlled laboratory experiment using actual explanations elicited from both HEX and the competing methods, and causally establish that our method increases deciders' trust and their tendency to rely on trusted features.
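
To make the abstract's explanatory-point idea concrete, the sketch below is a minimal illustration, not the paper's actual deep-RL algorithm: it assumes a generic scikit-learn classifier and hypothetical names (find_explanatory_point, trusted_mask), and it simply searches for a nearby point that flips the model's prediction while perturbing only the features a decider trusts.

```python
# Minimal, illustrative sketch only -- NOT the paper's deep-RL algorithm.
# It mimics two ideas from the abstract: (1) explanations are anchored to an
# "explanatory point" that crosses the classifier's decision boundary, and
# (2) only features the decider trusts may be perturbed (changes to
# distrusted features are projected back to zero). All names are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
clf = LogisticRegression().fit(X, y)


def find_explanatory_point(model, x, trusted_mask, n_iter=5000, max_scale=2.0):
    """Random search for the closest boundary-crossing point that differs
    from x only in the trusted features."""
    base_label = model.predict(x.reshape(1, -1))[0]
    best = None
    for i in range(n_iter):
        scale = max_scale * (i + 1) / n_iter           # gradually widen the search
        delta = rng.normal(scale=scale, size=x.shape)
        delta[~trusted_mask] = 0.0                     # hold distrusted features fixed
        candidate = x + delta
        if model.predict(candidate.reshape(1, -1))[0] != base_label:
            if best is None or np.linalg.norm(candidate - x) < np.linalg.norm(best - x):
                best = candidate                       # keep the closest crossing found
    return best


x0 = X[0]
trusted = np.array([True, True, False, False, True, False])  # decider's preferred features
xp = find_explanatory_point(clf, x0, trusted)
if xp is not None:
    # The explanation: how trusted features would have to change to flip the
    # prediction; distrusted features are untouched by construction.
    print("trusted-feature deltas:", np.round(xp - x0, 3))
```

HEX itself trains a decider-specific explainer with deep reinforcement learning and the 0-distrust projection described in the paper; the random search above only conveys the intuition of a boundary-crossing explanation restricted to trusted features.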
Pages: 12
Related articles
50 in total
  • [11] Where to Add Actions in Human-in-the-Loop Reinforcement Learning
    Mandel, Travis
    Liu, Yun-En
    Brunskill, Emma
    Popovic, Zoran
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 2322 - 2328
  • [12] Toward Human-in-the-Loop AI: Enhancing Deep Reinforcement Learning via Real-Time Human Guidance for Autonomous Driving
    Wu, Jingda
    Huang, Zhiyu
    Hu, Zhongxu
    Lv, Chen
    ENGINEERING, 2023, 21 : 75 - 91
  • [13] Human-in-the-Loop Behavior Modeling via an Integral Concurrent Adaptive Inverse Reinforcement Learning
    Wu, Huai-Ning
    Wang, Mi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (08) : 11359 - 11370
  • [14] HEIDL: Learning Linguistic Expressions with Deep Learning and Human-in-the-Loop
    Yang, Yiwei
    Kandogan, Eser
    Li, Yunyao
    Lasecki, Walter S.
    Sen, Prithviraj
    PROCEEDINGS OF THE 57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: SYSTEM DEMONSTRATIONS, (ACL 2019), 2019, : 135 - 140
  • [15] Optimal Volt/Var Control for Unbalanced Distribution Networks With Human-in-the-Loop Deep Reinforcement Learning
    Sun, Xianzhuo
    Xu, Zhao
    Qiu, Jing
    Liu, Huichuan
    Wu, Huayi
    Tao, Yuechuan
    IEEE TRANSACTIONS ON SMART GRID, 2024, 15 (03) : 2639 - 2651
  • [17] Human-in-the-Loop Reinforcement Learning in Continuous-Action Space
    Luo, Biao
    Wu, Zhengke
    Zhou, Fei
    Wang, Bing-Chuan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 15735 - 15744
  • [18] Decision Making for Human-in-the-loop Robotic Agents via Uncertainty-Aware Reinforcement Learning
    Singi, Siddharth
    He, Zhanpeng
    Pan, Alvin
    Patel, Sandip
    Sigurdsson, Gunnar A.
    Piramuthu, Robinson
    Song, Shuran
    Ciocarlie, Matei
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2024, 2024, : 7939 - 7945
  • [19] Shared Autonomy Based on Human-in-the-loop Reinforcement Learning with Policy Constraints
    Li, Ming
    Kang, Yu
    Zhao, Yun-Bo
    Zhu, Jin
    You, Shiyi
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 7349 - 7354
  • [20] A Human-in-the-Loop Approach based on Explainability to Improve NTL Detection
    Coma-Puig, Bernat
    Carmona, Josep
    21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS ICDMW 2021, 2021, : 943 - 950