Null Space Based Efficient Reinforcement Learning with Hierarchical Safety Constraints

Cited by: 0
Authors
Yang, Quantao [1 ]
Stork, Johannes A. [1 ]
Stoyanov, Todor [1 ]
Institution
[1] Orebro Univ, Autonomous Mobile Manipulat Lab AMM, Orebro, Sweden
Keywords
DOI
10.1109/ECMR50962.2021.9568848
CLC classification
TP [automation technology, computer technology];
Subject classification
0812 ;
Abstract
Reinforcement learning is inherently unsafe for use in physical systems, as learning by trial-and-error can cause harm to the environment or the robot itself. One way to avoid unpredictable exploration is to add constraints in the action space to restrict the robot's behavior. In this paper, we propose a null space based framework for integrating reinforcement learning methods in constrained continuous action spaces. We leverage a hierarchical control framework to decompose target robotic skills into higher ranked tasks (e.g., joint limits and obstacle avoidance) and a lower ranked reinforcement learning task. Safe exploration is guaranteed by only learning policies in the null space of the higher prioritized constraints. Meanwhile, multiple constraint phases for different operational spaces are constructed to guide the robot's exploration. We also add a penalty loss for violating higher ranked constraints to accelerate the learning procedure. We have evaluated our method on different redundant robotic tasks in simulation and show that our null space based reinforcement learning method can explore and learn safely and efficiently.
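The core mechanism the abstract describes can be sketched as a null space projection: the RL policy's raw action is filtered through the projector of the higher-priority task Jacobian, so exploration cannot move the robot along directions that would violate the prioritized constraint. The following is a minimal illustrative sketch, not the authors' implementation; the Jacobian, action, and function names are assumptions for the example.

```python
import numpy as np

def null_space_projector(J):
    """Return N = I - J^+ J, the projector onto the null space of J.

    Any vector multiplied by N has no component along the rows of J,
    i.e., it cannot affect the higher-priority task that J describes.
    """
    J_pinv = np.linalg.pinv(J)          # Moore-Penrose pseudoinverse J^+
    return np.eye(J.shape[1]) - J_pinv @ J

def safe_action(J, rl_action):
    """Project a raw RL action into the constraint null space."""
    return null_space_projector(J) @ rl_action

# Example: one scalar constraint on a 3-DoF robot whose Jacobian
# involves only joint 1, so the projection zeroes that component.
J = np.array([[1.0, 0.0, 0.0]])
a = np.array([0.5, -0.2, 0.3])          # raw action from the RL policy
a_safe = safe_action(J, a)              # -> [0.0, -0.2, 0.3]
assert np.allclose(J @ a_safe, 0.0)     # constraint direction untouched
```

The projected action `a_safe` lies entirely in the null space of `J`, which is why exploration with such filtered actions cannot violate the higher ranked constraint regardless of what the policy outputs.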
Pages: 6