Planning Irregular Object Packing via Hierarchical Reinforcement Learning

Cited by: 13
Authors
Huang, Sichao [1 ]
Wang, Ziwei [1 ]
Zhou, Jie [1 ]
Lu, Jiwen [1 ]
Affiliations
[1] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol BNRist, Dept Automat, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Manipulation planning; reinforcement learning; robotic packing; BIN-PACKING; ALGORITHM; 3D;
DOI
10.1109/LRA.2022.3222996
Chinese Library Classification (CLC)
TP24 [Robotics];
Subject Classification Code
080202 ; 1405 ;
Abstract
Object packing by autonomous robots is an important challenge in warehouses and the logistics industry. Most conventional data-driven packing planners focus on regular cuboid packing; they are usually heuristic, which limits their practical use in realistic applications involving everyday objects. In this paper, we propose a deep hierarchical reinforcement learning approach that simultaneously plans the packing sequence and placement for irregular object packing. Specifically, a top-level manager network infers the packing sequence from six principal-view heightmaps of all objects, and a bottom-level worker network then receives the heightmaps of the next object and predicts its placement position and orientation. The two networks are trained hierarchically in a self-supervised Q-learning framework, where rewards are computed from the packing results based on the top height, object volume, and placement stability in the box. The framework repeats sequence and placement planning iteratively until all objects have been packed into the box or no space remains for the unpacked items. We compare our approach with existing robotic packing methods for irregular objects in a physics simulator. Experiments show that our approach packs more objects at a lower time cost than state-of-the-art irregular-object packing methods. We also execute our packing plans with a robotic manipulator to demonstrate generalization to the real world.
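The manager/worker rollout the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two networks are replaced by random stand-in functions, the candidate grid sizes and the box-height feasibility check are placeholder assumptions, and all names (`manager_q`, `worker_q`, `pack_episode`) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def manager_q(view_heightmaps):
    # Stand-in for the manager network: one Q-value per unpacked object,
    # computed from its six principal-view heightmaps (n, 6, H, W).
    return rng.random(view_heightmaps.shape[0])

def worker_q(obj_heightmaps, num_positions=16, num_orientations=4):
    # Stand-in for the worker network: Q-values over a discretized grid of
    # placement positions and orientations for the selected object.
    return rng.random((num_positions, num_orientations))

def pack_episode(objects, box_height=10.0):
    """Greedy rollout of the hierarchical policy: the manager picks which
    object to pack next, the worker picks where and how to place it, and
    the loop stops when everything is packed or nothing more fits."""
    unpacked = list(range(len(objects)))
    plan = []
    top_height = 0.0
    while unpacked:
        views = np.stack([objects[i] for i in unpacked])       # (n, 6, H, W)
        pick = unpacked[int(np.argmax(manager_q(views)))]      # sequence step
        q = worker_q(objects[pick])
        pos, rot = np.unravel_index(int(np.argmax(q)), q.shape)  # placement step
        new_height = top_height + float(objects[pick].max())   # crude height proxy
        if new_height > box_height:   # placeholder for the real feasibility test
            break
        top_height = new_height
        plan.append((pick, int(pos), int(rot)))
        unpacked.remove(pick)
    return plan

objects = [rng.random((6, 8, 8)) for _ in range(5)]  # fake heightmap stacks
plan = pack_episode(objects)
```

In the actual method the Q-values come from trained networks and the reward (top height, object volume, placement stability) supervises both levels; here the loop only mirrors the control flow of alternating sequence and placement decisions.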
Pages: 81-88
Number of pages: 8
Related Papers
50 records
  • [1] Distilling a Hierarchical Policy for Planning and Control via Representation and Reinforcement Learning
    Ha, Jung-Su
    Park, Young-Jin
    Chae, Hyeok-Joo
    Park, Soon-Seo
    Choi, Han-Lim
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 4459 - 4466
  • [2] Planning-Augmented Hierarchical Reinforcement Learning
    Gieselmann, Robert
    Pokorny, Florian T.
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (03) : 5097 - 5104
  • [3] Approximate planning for bayesian hierarchical reinforcement learning
    Ngo Anh Vien
    Hung Ngo
    Lee, Sungyoung
    Chung, TaeChoong
    APPLIED INTELLIGENCE, 2014, 41 (03) : 808 - 819
  • [4] Approximate planning for bayesian hierarchical reinforcement learning
    Ngo Anh Vien
    Hung Ngo
    Sungyoung Lee
    TaeChoong Chung
    Applied Intelligence, 2014, 41 : 808 - 819
  • [5] Irregular Object Packing Production and Processing
    An, Haixia
    Zhang, Jinhuan
    Huang, Zhigang
    PROCEEDINGS OF THE 2010 INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY AND SCIENTIFIC MANAGEMENT, VOLS 1-2, 2010, : 246 - +
  • [6] Hierarchical reinforcement learning via dynamic subspace search for multi-agent planning
    Aaron Ma
    Michael Ouimet
    Jorge Cortés
    Autonomous Robots, 2020, 44 : 485 - 503
  • [7] Hierarchical reinforcement learning via dynamic subspace search for multi-agent planning
    Ma, Aaron
    Ouimet, Michael
    Cortes, Jorge
    AUTONOMOUS ROBOTS, 2020, 44 (3-4) : 485 - 503
  • [8] Robot Task Planning via Deep Reinforcement Learning: a Tabletop Object Sorting Application
    Ceola, Federico
    Tosello, Elisa
    Tagliapietra, Luca
    Nicola, Giorgio
    Ghidoni, Stefano
    2019 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), 2019, : 486 - 492
  • [9] Bin Packing Optimization via Deep Reinforcement Learning
    Wang, Baoying
    Lin, Zhaohui
    Kong, Weijie
    Dong, Huixu
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (03): : 2542 - 2549
  • [10] A Hybrid Reinforcement Learning Algorithm for 2D Irregular Packing Problems
    Fang, Jie
    Rao, Yunqing
    Zhao, Xusheng
    Du, Bing
    MATHEMATICS, 2023, 11 (02)