Explainable Reinforcement Learning through a Causal Lens

Cited by: 0
Authors
Madumal, Prashan
Miller, Tim
Sonenberg, Liz
Vetere, Frank
Institutions
Funding
Australian Research Council;
Keywords
EXPLANATIONS;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Prominent theories in cognitive science propose that humans understand and represent knowledge of the world through causal relationships. In making sense of the world, we build causal models in our minds to encode cause-effect relations between events, and we use these to explain why new events happen by referring to counterfactuals - things that did not happen. In this paper, we use causal models to derive causal explanations of the behaviour of model-free reinforcement learning agents. We present an approach that learns a structural causal model during reinforcement learning and encodes causal relationships between variables of interest. This model is then used to generate explanations of behaviour based on counterfactual analysis of the causal model. We computationally evaluate the model in 6 domains, measuring performance and task prediction accuracy. We report on a study with 120 participants who observe agents playing a real-time strategy game (StarCraft II) and then receive explanations of the agents' behaviour. We investigate: 1) participants' understanding gained from explanations, measured through task prediction; 2) explanation satisfaction; and 3) trust. Our results show that causal model explanations perform better on these measures than two baseline explanation models.
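The core idea in the abstract - learn a structural causal model, then answer counterfactual "what if?" queries against it - can be illustrated with a minimal sketch. This is not the authors' implementation: the SCM here is hand-coded rather than learned, and the variable names (workers, supply, army) are invented stand-ins for the game variables of interest.

```python
def simulate(scm, interventions=None):
    """Evaluate an SCM in topological order.

    `interventions` maps variable names to forced values (the do-operator),
    severing those variables from their structural equations.
    """
    interventions = interventions or {}
    values = {}
    for var, (parents, fn) in scm.items():  # dict preserves insertion order
        if var in interventions:
            values[var] = interventions[var]  # do(var := value)
        else:
            values[var] = fn(*(values[p] for p in parents))
    return values

# Toy causal chain: workers -> supply -> army
scm = {
    "workers": ((), lambda: 10),
    "supply":  (("workers",), lambda w: 2 * w),
    "army":    (("supply",), lambda s: s - 5),
}

factual = simulate(scm)                          # what actually happened
counterfactual = simulate(scm, {"workers": 0})   # what if no workers were built?

# A contrastive explanation compares the two outcomes:
# "army is 15 because workers were built; with 0 workers it would be -5."
print(factual["army"], counterfactual["army"])
```

The contrast between the factual and counterfactual outcomes is what grounds an explanation of the form "the agent did X because, had it not, Y would have happened" - the pattern the paper evaluates against baseline explanation models.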
Pages: 2493-2500 (8 pages)