Explainable Reinforcement Learning through a Causal Lens

Cited: 0
Authors
Madumal, Prashan
Miller, Tim
Sonenberg, Liz
Vetere, Frank
Institutions
Funding
Australian Research Council;
Keywords
EXPLANATIONS;
DOI
None
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Prominent theories in cognitive science propose that humans understand and represent knowledge of the world through causal relationships. In making sense of the world, we build causal models in our minds to encode cause-effect relations of events, and use these to explain why new events happen by referring to counterfactuals: things that did not happen. In this paper, we use causal models to derive causal explanations of the behaviour of model-free reinforcement learning agents. We present an approach that learns a structural causal model during reinforcement learning and encodes causal relationships between variables of interest. This model is then used to generate explanations of behaviour based on counterfactual analysis of the causal model. We computationally evaluate the model in six domains and measure performance and task-prediction accuracy. We report on a study with 120 participants who observe agents playing a real-time strategy game (StarCraft II) and then receive explanations of the agents' behaviour. We investigate: 1) participants' understanding gained by explanations, via task prediction; 2) explanation satisfaction; and 3) trust. Our results show that causal-model explanations perform better on these measures than two other baseline explanation models.
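To make the abstract's idea concrete, here is a minimal sketch (not the authors' implementation; in the paper the structural causal model is learned during training, and all variable names below are hypothetical) of a hand-built structural causal model over StarCraft-style state variables, used to answer a contrastive "why" query by intervening and re-evaluating downstream variables:

```python
# Structural equations: each variable is a function of its causal parents.
# All variables and thresholds here are illustrative assumptions.

def workers(ore):
    """Number of workers depends on ore collected."""
    return ore // 50

def supply_depots(workers_n):
    """Depots built to support the worker population."""
    return workers_n // 8

def can_train_marine(supply_depots_n, barracks):
    """Training a marine requires at least one depot and one barracks."""
    return supply_depots_n >= 1 and barracks >= 1

def simulate(ore, barracks):
    """Evaluate the causal model top-down from the exogenous inputs."""
    w = workers(ore)
    d = supply_depots(w)
    return {"workers": w, "depots": d,
            "can_train_marine": can_train_marine(d, barracks)}

# Factual world: plenty of ore and one barracks.
factual = simulate(ore=500, barracks=1)

# Counterfactual world: intervene on ore (set it low) and re-evaluate the
# downstream variables, keeping the structural equations fixed.
counterfactual = simulate(ore=100, barracks=1)

# A contrastive explanation compares the two worlds: marine training is
# enabled because enough ore led to workers and hence a supply depot.
print(factual["can_train_marine"], counterfactual["can_train_marine"])
# → True False
```

The counterfactual is what makes the explanation contrastive: the agent can train a marine *because* ore was high; had ore been low, the depot would not exist and the action would be unavailable.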
Pages: 2493-2500
Page count: 8