Probabilistic Counterexample Guidance for Safer Reinforcement Learning

Cited by: 1
Authors
Ji, Xiaotong [1]
Filieri, Antonio [1]
Affiliations
[1] Imperial College London, Department of Computing, London SW7 2AZ, England
Keywords
Safe reinforcement learning; Probabilistic model checking; Counterexample guidance
DOI
10.1007/978-3-031-43835-6_22
Chinese Library Classification (CLC)
TP301 [Theory, Methods]
Discipline classification code
081202
Abstract
Safe exploration aims to address the limitations of Reinforcement Learning (RL) in safety-critical scenarios, where failures during trial-and-error learning may incur high costs. Several methods exist to incorporate external knowledge or to use proximal sensor data to limit the exploration of unsafe states. However, reducing exploration risks in unknown environments, where an agent must discover safety threats during exploration, remains challenging. In this paper, we target the problem of safe exploration by guiding the training with counterexamples of the safety requirement. Our method abstracts both continuous and discrete state-space systems into compact abstract models representing the safety-relevant knowledge acquired by the agent during exploration. We then exploit probabilistic counterexample generation to construct minimal simulation submodels eliciting violations of the safety requirement, in which the agent can efficiently train offline to refine its policy towards minimising the risk of safety violations during the subsequent online exploration. In preliminary experiments, our method reduces safety violations during online exploration by an average of 40.3% compared with standard QL and DQN algorithms and by 29.1% compared with previous related work, while achieving cumulative rewards comparable to unrestricted exploration and alternative approaches.
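The abstract describes a loop in which safety-relevant transitions observed online are abstracted into a compact probabilistic model, counterexamples of the safety requirement are extracted as small submodels, and the agent retrains offline on those submodels before resuming exploration. The Python sketch below illustrates one possible shape of the first two steps only; the SafetyAbstraction class, the counterexample_submodel function, and the greedy path search are illustrative assumptions, not the authors' implementation, which relies on probabilistic model checking for counterexample generation.

from collections import defaultdict

class SafetyAbstraction:
    """Compact abstract model of the safety-relevant dynamics observed so far.

    Empirical transition counts between abstract states approximate a
    discrete-time Markov chain (DTMC) estimated from online exploration data.
    """
    def __init__(self, abstract_fn):
        self.abstract_fn = abstract_fn                      # concrete state -> abstract state
        self.counts = defaultdict(lambda: defaultdict(int))

    def record(self, state, next_state):
        self.counts[self.abstract_fn(state)][self.abstract_fn(next_state)] += 1

    def transition_probs(self, abstract_state):
        total = sum(self.counts[abstract_state].values())
        return {b: c / total for b, c in self.counts[abstract_state].items()} if total else {}

def counterexample_submodel(model, init, unsafe, max_paths=50, max_len=30):
    """Collect high-probability abstract paths that reach an unsafe state.

    Placeholder for probabilistic counterexample generation (e.g. a smallest
    critical subsystem computed by a model checker); a greedy depth-first
    search over the estimated DTMC is used here purely for illustration.
    """
    paths = []

    def dfs(state, path, prob):
        if len(paths) >= max_paths or len(path) > max_len:
            return
        if state in unsafe:
            paths.append((list(path), prob))
            return
        for nxt, p in sorted(model.transition_probs(state).items(), key=lambda kv: -kv[1]):
            if nxt not in path:
                dfs(nxt, path + [nxt], prob * p)

    dfs(init, [init], 1.0)
    # The union of states along these paths defines the offline training submodel.
    return {s for path, _ in paths for s in path}

In a full pipeline of this kind, the returned submodel would be turned into a simulation environment in which the agent refines its policy offline, aiming to lower the probability of violating the safety requirement during subsequent online exploration.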
Pages: 311 - 328 (18 pages)
Related papers (50 in total)
  • [1] A probabilistic learning approach for counterexample guided abstraction refinement
    He, Fei
    Song, Xiaoyu
    Gu, Ming
    Sun, Jiaguang
    AUTOMATED TECHNOLOGY FOR VERIFICATION AND ANALYSIS, PROCEEDINGS, 2006, 4218 : 39 - 50
  • [2] Exploring Safer Behaviors for Deep Reinforcement Learning
    Marchesini, Enrico
    Corsi, Davide
    Farinelli, Alessandro
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 7701 - 7709
  • [3] Accelerating Reinforcement Learning with Suboptimal Guidance
    Bohn, Eivind
    Moe, Signe
    Johansen, Tor Arne
    IFAC PAPERSONLINE, 2020, 53 (02): : 8090 - 8096
  • [4] Integrating guidance into relational reinforcement learning
    Driessens, K
    Dzeroski, S
    MACHINE LEARNING, 2004, 57 (03) : 271 - 304
  • [5] Certified reinforcement learning with logic guidance
    Hasanbeig, Hosein
    Kroening, Daniel
    Abate, Alessandro
    ARTIFICIAL INTELLIGENCE, 2023, 322
  • [6] ADAPTIVE GUIDANCE WITH REINFORCEMENT META LEARNING
    Gaudet, Brian
    Linares, Richard
    SPACEFLIGHT MECHANICS 2019, VOL 168, PTS I-IV, 2019, 168 : 4091 - 4109
  • [7] Reinforcement learning guidance law of Q-learning
    Zhang Q.
    Ao B.
    Zhang Q.
    SYSTEMS ENGINEERING AND ELECTRONICS, 2020, 42 (02) : 414 - 419
  • [8] Counterexample-guided permissive supervisor synthesis for probabilistic systems through learning
    Wu, Bo
    Lin, Hai
    2015 AMERICAN CONTROL CONFERENCE (ACC), 2015, : 2894 - 2899
  • [9] Hierarchical reinforcement learning guidance with threat avoidance
    Li Bohao
    Wu Yunjie
    Li Guofei
    JOURNAL OF SYSTEMS ENGINEERING AND ELECTRONICS, 2022, 33 (05) : 1173 - 1185