Safe Q-Learning Method Based on Constrained Markov Decision Processes

Cited by: 19
Authors
Ge, Yangyang [1 ]
Zhu, Fei [1 ,2 ]
Lin, Xinghong [1 ]
Liu, Quan [1 ]
Affiliations
[1] Soochow Univ, Sch Comp Sci & Technol, Suzhou 215006, Peoples R China
[2] Soochow Univ, Prov Key Lab Comp Informat Proc Technol, Suzhou 215006, Peoples R China
Source
IEEE ACCESS | 2019, Vol. 7
Funding
National Natural Science Foundation of China;
Keywords
Constrained Markov decision processes; safe reinforcement learning; Q-learning; constraint; Lagrange multiplier; REINFORCEMENT; OPTIMIZATION; ALGORITHM;
DOI
10.1109/ACCESS.2019.2952651
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
The application of reinforcement learning in industrial settings has made agent safety an active research topic. Traditional approaches address the safety problem mainly by altering the agent's objective function or its exploration process; because most of them ignore the damage caused by unsafe states, they can hardly prevent the agent from entering dangerous states, and the resulting solutions are often unsatisfactory. To address this problem, we propose a safe Q-learning method based on constrained Markov decision processes that adds safety constraints to the model as prerequisites and improves the standard Q-learning algorithm so that it seeks the optimal solution under the premise that safety is satisfied. While searching for the solution in the form of the optimal state-action value, the agent's feasible space is restricted to the safe space: constraints imposed on the action space filter the feasible space and thereby guarantee safety. Traditional solution methods are not applicable to the safe Q-learning model because they tend to yield only locally optimal solutions; instead, after linearizing the constraint functions, we apply the Lagrange multiplier method to solve for the optimal action that can be performed in the current state. This not only improves the efficiency and accuracy of the algorithm but also guarantees a globally optimal solution. Experiments verify the effectiveness of the algorithm.
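The abstract's combination of Q-learning, safety constraints, and Lagrange multipliers can be illustrated with a generic Lagrangian safe Q-learning loop. This is a minimal sketch of the general CMDP technique, not the authors' exact algorithm: all names (the toy chain MDP, `cost_limit`, the learning rates) are illustrative assumptions. It maintains a reward value table `Q` and a cost value table `Qc`, acts greedily on the Lagrangian `Q - lam * Qc`, and updates the multiplier `lam` by dual ascent whenever the observed average cost exceeds the budget.

```python
import numpy as np

# Toy 5-state chain MDP (illustrative, not from the paper):
# action 1 moves right, action 0 moves left; the right end gives reward,
# the left end (state 0) is the "unsafe" state that incurs cost.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
cost_limit = 0.2  # per-step expected-cost budget d in the constraint

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0  # reward for reaching the right end
    c = 1.0 if s2 == 0 else 0.0             # cost for entering the unsafe state
    return s2, r, c

Q = np.zeros((n_states, n_actions))   # action values for reward
Qc = np.zeros((n_states, n_actions))  # action values for cost
lam, gamma, alpha = 0.0, 0.95, 0.1    # multiplier, discount, learning rate

for episode in range(500):
    s, ep_cost, steps = 2, 0.0, 0
    for t in range(20):
        # Epsilon-greedy on the Lagrangian value L(s, a) = Q(s, a) - lam * Qc(s, a)
        if rng.random() < 0.1:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s] - lam * Qc[s]))
        s2, r, c = step(s, a)
        a2 = int(np.argmax(Q[s2] - lam * Qc[s2]))  # greedy successor action
        Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])
        Qc[s, a] += alpha * (c + gamma * Qc[s2, a2] - Qc[s, a])
        ep_cost += c
        steps += 1
        s = s2
    # Dual ascent: raise lam when the episode's average cost exceeds the budget,
    # which penalizes cost-incurring actions in subsequent episodes.
    lam = max(0.0, lam + 0.01 * (ep_cost / steps - cost_limit))

policy = np.argmax(Q - lam * Qc, axis=1)
```

In this toy problem the learned greedy policy moves right toward the reward while the multiplier discourages visits to the unsafe left end. The paper's method differs in that it filters the feasible action space directly and linearizes the constraint functions before applying the Lagrange multiplier method.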
Pages: 165007-165017
Page count: 11
Related Papers
50 records in total
  • [41] Risk-constrained Markov Decision Processes
    Borkar, Vivek
    Jain, Rahul
    49TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2010, : 2664 - 2669
  • [42] Risk-Constrained Markov Decision Processes
    Borkar, Vivek
    Jain, Rahul
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2014, 59 (09) : 2574 - 2579
  • [43] Constrained Markov Decision Processes for Intelligent Traffic
    Singh, Tripty
    2019 10TH INTERNATIONAL CONFERENCE ON COMPUTING, COMMUNICATION AND NETWORKING TECHNOLOGIES (ICCCNT), 2019,
  • [44] Entropy Maximization for Constrained Markov Decision Processes
    Savas, Yagiz
    Ornik, Melkior
    Cubuktepe, Murat
    Topcu, Ufuk
    2018 56TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2018, : 911 - 918
  • [45] Dominance-constrained Markov decision processes
    Haskell, William B.
    Jain, Rahul
    2012 IEEE 51ST ANNUAL CONFERENCE ON DECISION AND CONTROL (CDC), 2012, : 5991 - 5996
  • [46] Constrained Markov decision processes with uncertain costs
    Varagapriya, V.
    Singh, Vikas Vikram
    Lisser, Abdel
    OPERATIONS RESEARCH LETTERS, 2022, 50 (02) : 218 - 223
  • [47] Cognitive Electronic Jamming Decision-Making Method Based on Improved Q-Learning Algorithm
    Li, Huiqin
    Li, Yanling
    He, Chuan
    Zhan, Jianwei
    Zhang, Hui
    INTERNATIONAL JOURNAL OF AEROSPACE ENGINEERING, 2021, 2021
  • [48] Intelligent Decision Method of Slope Perturbing Based on Q-Learning for Anti-Deception Jamming
    Wei, Jingjing
    Yu, Lei
    Xu, Rongqing
    2022 6TH INTERNATIONAL CONFERENCE ON IMAGING, SIGNAL PROCESSING AND COMMUNICATIONS, ICISPC, 2022, : 71 - 76
  • [49] A reinforcement learning based algorithm for Markov decision processes
    Bhatnagar, S
    Kumar, S
    2005 International Conference on Intelligent Sensing and Information Processing, Proceedings, 2005, : 199 - 204
  • [50] Model-Based Reinforcement Learning for Infinite-Horizon Discounted Constrained Markov Decision Processes
    HasanzadeZonuzy, Aria
    Kalathil, Dileep
    Shakkottai, Srinivas
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 2519 - 2525