Reinforcement Learning Enabled Self-Homing of Industrial Robotic Manipulators in Manufacturing

Cited: 0
Authors
Karigiannis, John N. [1 ]
Laurin, Philippe [3 ]
Liu, Shaopeng [1 ]
Holovashchenko, Viktor [1 ]
Lizotte, Antoine [2 ]
Roux, Vincent [2 ]
Boulet, Philippe [2 ]
Affiliations
[1] GE Res, 1 Res Circle, Niskayuna, NY 12309 USA
[2] Global Robot & Automat Ctr GE Aviat, 2 Blvd Aeroport, Bromont, PQ J2L 1A3, Canada
[3] Robotech Automatisat, 2168 Rue Prov, Longueuil, PQ J4G 1R7, Canada
Keywords
reinforcement learning; self-homing; parallel-agent; industrial robotic manipulator;
DOI
Not available
Chinese Library Classification
T [Industrial technology];
Discipline classification code
08;
Abstract
Industrial robotics plays a major role in manufacturing across all types of industries. One common task of robotic cells in manufacturing is homing, a step that enables a robotic arm to return to its initial / home position (HPos) from anywhere in a robotic cell, without collision or robot singularities and while respecting its joint limits. In almost all industrial robotic cells, an operation cycle starts from, and ends at, HPos. The home position also serves as a safe state from which a cycle can restart when an alarm or fault occurs within the cell. When an alarm occurs, the robot configuration in the cell is unpredictable, making it challenging to bring the robot to HPos autonomously and safely and to restart the operation. This paper presents a non-vision, reinforcement learning-based approach in a parallel-agent setting to enable self-homing capability in industrial robotic cells, eliminating the need for manual programming of robot manipulators. The approach assumes that the sensing of an unknown robotic cell environment is pre-encoded in the state definition, so that the learned policies can be transferred without further training. The agents are trained in a simulation environment generated from the mechanical design of an actual robotic cell to increase the accuracy of the mapping between the real environment and the simulated one. The approach explores the impact of curriculum learning on the agents' learning and evaluates two curriculum choices against a non-curriculum baseline. A parallel-agent, multi-process training setting is employed to improve exploration of the state space, with experiences shared among the agents via shared memory. Upon deployment, all agents contribute with their respective policies in a collective manner. The approach has been demonstrated in simulated industrial robotic cells, and it has been shown that the policies derived in simulation are transferable to a corresponding real industrial robotic cell and are generalizable to other robotic systems in manufacturing settings. (C) 2022 Society of Manufacturing Engineers (SME). Published by Elsevier Ltd. All rights reserved. Peer-review under responsibility of the Scientific Committee of the NAMRI/SME.
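The abstract describes a parallel-agent, multi-process training setting in which exploring agents share experiences through shared memory. As a rough illustration of that pattern only, the Python sketch below spawns several worker processes that explore a toy discretised homing task and push transitions into a shared queue consumed by a single tabular Q-learning updater. The toy cell environment, reward shaping, joint discretisation, random exploration policy, and central learner are all assumptions made for this example and do not reflect the authors' actual implementation.

# Minimal sketch (not the authors' method): parallel workers explore a toy
# homing task and share transitions with a central learner via a queue.
import multiprocessing as mp
import random

N_JOINTS = 3          # assumed simplified joint space
N_BINS = 5            # assumed discretisation per joint
HOME = (0, 0, 0)      # assumed home configuration (HPos)

def step(state, action):
    """Move one joint one bin up or down; reward +1 only when HPos is reached."""
    joint, delta = action
    pos = list(state)
    pos[joint] = max(0, min(N_BINS - 1, pos[joint] + delta))
    next_state = tuple(pos)
    done = next_state == HOME
    return next_state, (1.0 if done else -0.01), done

def worker(seed, queue, episodes=200):
    """One exploring agent; random policy here purely for illustration."""
    rng = random.Random(seed)
    for _ in range(episodes):
        state = tuple(rng.randrange(N_BINS) for _ in range(N_JOINTS))
        for _ in range(50):
            action = (rng.randrange(N_JOINTS), rng.choice((-1, 1)))
            next_state, reward, done = step(state, action)
            queue.put((state, action, reward, next_state, done))
            state = next_state
            if done:
                break
    queue.put(None)  # sentinel: this worker is finished

if __name__ == "__main__":
    n_workers = 4
    queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(i, queue)) for i in range(n_workers)]
    for p in procs:
        p.start()

    # Central learner: tabular Q-learning over the shared experience stream.
    q = {}
    alpha, gamma = 0.1, 0.95
    actions = [(j, d) for j in range(N_JOINTS) for d in (-1, 1)]
    finished = 0
    while finished < n_workers:
        item = queue.get()
        if item is None:
            finished += 1
            continue
        s, a, r, s2, done = item
        best_next = max(q.get((s2, a2), 0.0) for a2 in actions)
        target = r if done else r + gamma * best_next
        old = q.get((s, a), 0.0)
        q[(s, a)] = old + alpha * (target - old)

    for p in procs:
        p.join()
    print(f"Learned Q-values for {len(q)} state-action pairs")

In the paper's setting, each agent would learn and later deploy its own policy over a state definition that encodes the sensed cell, with the simulated cell generated from the mechanical design of the real one; the sketch only shows the multi-process, shared-experience pattern.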
Pages: 909 - 918
Number of pages: 10