A Bayesian Network Approach to Explainable Reinforcement Learning with Distal Information

Cited by: 0
Authors
Milani, Rudy [1 ]
Moll, Maximilian [1 ]
De Leone, Renato [2 ]
Pickl, Stefan [1 ]
Affiliations
[1] Univ Bundeswehr Muenchen, Fac Comp Sci, Werner Heisenberg Weg 39, D-85577 Neubiberg, Germany
[2] Univ Camerino, Sch Sci & Technol, Via Madonna Carceri 9, I-62032 Camerino, Italy
Keywords
Explainable Reinforcement Learning; Bayesian Network; model-free methods; causal explanation; human study; MODEL
DOI: 10.3390/s23042013
Chinese Library Classification (CLC): O65 [Analytical Chemistry]
Subject Classification Codes: 070302; 081704
Abstract
Artificial Intelligence systems have expanded from research into industry and daily life, so understanding how they make decisions is becoming fundamental to reducing the lack of trust between users and machines and to increasing model transparency. This paper aims to automate the generation of explanations for model-free Reinforcement Learning algorithms by answering "why" and "why not" questions. To this end, we use Bayesian Networks combined with the NOTEARS algorithm for automatic structure learning. This approach complements an existing framework well and thus represents a step towards generating explanations with as little user input as possible. It is evaluated computationally on three benchmarks using different Reinforcement Learning methods, showing that it is independent of the type of model used, and the resulting explanations are then rated in a human study. The results are compared to other baseline explanation models to underline the satisfying performance of the presented framework in terms of increasing understanding, transparency, and trust in the actions chosen by the agent.
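The NOTEARS algorithm mentioned in the abstract recasts Bayesian Network structure learning as continuous optimization: instead of searching over discrete graphs, it penalizes a weighted adjacency matrix W with the smooth acyclicity function h(W) = tr(exp(W∘W)) − d, which is zero exactly when W encodes a DAG. A minimal, self-contained sketch of that constraint (the matrix values below are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

def notears_acyclicity(W):
    """NOTEARS acyclicity measure h(W) = tr(exp(W * W)) - d.

    W * W is the elementwise (Hadamard) square of the weighted
    adjacency matrix; h(W) == 0 iff W encodes a DAG, and h(W) > 0
    whenever the graph contains a directed cycle.
    """
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

# A 3-node DAG: 0 -> 1 -> 2 (strictly upper-triangular, hence acyclic)
W_dag = np.array([[0.0, 0.8, 0.0],
                  [0.0, 0.0, 0.5],
                  [0.0, 0.0, 0.0]])

# Adding a back-edge 2 -> 0 creates the cycle 0 -> 1 -> 2 -> 0
W_cyclic = W_dag.copy()
W_cyclic[2, 0] = 0.3

print(notears_acyclicity(W_dag))     # ~0 for a DAG
print(notears_acyclicity(W_cyclic))  # strictly positive for a cyclic graph
```

In the full method this h(W) appears as an equality constraint in an augmented-Lagrangian least-squares fit to the data; the sketch above only demonstrates why the constraint distinguishes DAGs from cyclic graphs.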
Pages: 38
Related Papers (50 total)
  • [41] Explainable reinforcement learning for powertrain control engineering
    Laflamme, C.
    Doppler, J.
    Palvolgyi, B.
    Dominka, S.
    Viharos, Zs. J.
    Haeussler, S.
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 146
  • [42] Explainable Deep Learning for False Information Identification: An Argumentation Theory Approach
    Lee, Kyuhan
    Ram, Sudha
    INFORMATION SYSTEMS RESEARCH, 2024, 35 (02) : 890 - 907
  • [43] Explainable Reinforcement Learning: A Survey and Comparative Review
    Milani, Stephanie
    Topin, Nicholay
    Veloso, Manuela
    Fang, Fei
    ACM COMPUTING SURVEYS, 2024, 56 (07) : 1 - 36
  • [44] Portfolio construction using explainable reinforcement learning
    Cortes, Daniel Gonzalez
    Onieva, Enrique
    Pastor, Iker
    Trinchera, Laura
    Wu, Jian
    EXPERT SYSTEMS, 2024, 41 (11)
  • [45] Explainable Reinforcement Learning via Model Transforms
    Finkelstein, Mira
    Liu, Lucy
    Schlot, Nitsan Levy
    Kolumbus, Yoav
    Parkes, David C.
    Rosenschein, Jeffrey S.
    Keren, Sarah
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [46] Memory-Based Explainable Reinforcement Learning
    Cruz, Francisco
    Dazeley, Richard
    Vamplew, Peter
    AI 2019: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, 11919 : 66 - 77
  • [47] Causal State Distillation for Explainable Reinforcement Learning
    Lu, Wenhao
    Zhao, Xufeng
    Fryen, Thilo
    Lee, Jae Hee
    Li, Mengdi
    Magg, Sven
    Wermter, Stefan
    CAUSAL LEARNING AND REASONING, VOL 236, 2024, 236 : 106 - 142
  • [48] Inherently Explainable Reinforcement Learning in Natural Language
    Peng, Xiangyu
    Xing, Chen
    Choubey, Prafulla Kumar
    Wu, Chien-Sheng
    Xiong, Caiming
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [49] A Bayesian Approach to the Production of Information and Learning by Doing
    Grossman, S. J.
    Kihlstrom, R. E.
    Mirman, L. J.
    REVIEW OF ECONOMIC STUDIES, 1977, 44 (03) : 533 - 547
  • [50] Benchmarking for Bayesian Reinforcement Learning
    Castronovo, Michael
    Ernst, Damien
    Couetoux, Adrien
    Fonteneau, Raphael
    PLOS ONE, 2016, 11 (06)