Reinforcement Learning Enabled Self-Homing of Industrial Robotic Manipulators in Manufacturing

Cited: 0
Authors
Karigiannis, John N. [1 ]
Laurin, Philippe [3 ]
Liu, Shaopeng [1 ]
Holovashchenko, Viktor [1 ]
Lizotte, Antoine [2 ]
Roux, Vincent [2 ]
Boulet, Philippe [2 ]
Affiliations
[1] GE Res, 1 Res Circle, Niskayuna, NY 12309 USA
[2] Global Robot & Automat Ctr GE Aviat, 2 Blvd Aeroport, Bromont, PQ J2L 1A3, Canada
[3] Robotech Automatisat, 2168 Rue Prov, Longueuil, PQ J4G 1R7, Canada
Keywords
reinforcement learning; self-homing; parallel-agent; industrial robotic manipulator;
DOI
Not available
CLC Classification Number
T [Industrial Technology];
Subject Classification Code
08;
Abstract
Industrial robotics plays a major role in manufacturing across all types of industries. One common task of robotic cells in manufacturing is homing, a step that enables a robotic arm to return to its initial/home position (HPos) from anywhere in a robotic cell without collisions, without encountering robot singularities, and while respecting its joint limits. In almost all industrial robotic cells, an operation cycle starts from, and ends at, HPos. The home position also serves as a safe state from which a cycle can restart when an alarm or fault occurs within the cell. When an alarm occurs, the robot's configuration in the cell is unpredictable, making it challenging to bring the robot autonomously and safely to HPos and restart the operation. This paper presents a non-vision, reinforcement learning-based approach in a parallel-agent setting to enable self-homing capability in industrial robotic cells, eliminating the need for manual programming of robot manipulators. The approach assumes that the sensing of an unknown robotic cell environment is pre-encoded in the state definition, so that the learned policies can be transferred without further training. The agents are trained in a simulation environment generated from the mechanical design of an actual robotic cell, increasing the accuracy of the mapping between the real environment and the simulated one. The approach explores the impact of curricula on the agents' learning and evaluates two choices against a non-curriculum baseline. A parallel-agent, multi-process training setting is employed to improve exploration of the state space, with experiences shared among the agents via shared memory. Upon deployment, all agents contribute their respective policies in a collective manner.
The approach has been demonstrated in simulated industrial robotic cells, and the policies derived in simulation have been shown to transfer to a corresponding real industrial robotic cell and to generalize to other robotic systems in manufacturing settings. (C) 2022 Society of Manufacturing Engineers (SME). Published by Elsevier Ltd. All rights reserved. Peer-review under responsibility of the Scientific Committee of the NAMRI/SME.
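The parallel-agent experience pooling described in the abstract can be sketched in miniature. The toy below is an illustrative assumption, not the authors' implementation: several agents explore random six-joint configurations in parallel and push transitions into a shared buffer, with a distance-to-home reward (HPos taken as the all-zero joint vector). Threads with a shared queue stand in for the paper's multi-process, shared-memory setup; the names `explore` and `collect_experience` and all parameters are hypothetical.

```python
import threading
import queue
import random

def explore(agent_id, shared_buffer, n_steps):
    """One agent: random exploration from arbitrary joint configurations,
    pushing (agent, state, action, reward) transitions into the shared buffer."""
    rng = random.Random(agent_id)  # per-agent seed for diverse exploration
    for _ in range(n_steps):
        state = tuple(rng.uniform(-3.14, 3.14) for _ in range(6))  # 6 joint angles
        action = rng.randrange(12)            # e.g. +/- step per joint (hypothetical)
        reward = -sum(abs(q) for q in state)  # closer to HPos (all zeros) is better
        shared_buffer.put((agent_id, state, action, reward))

def collect_experience(n_agents=4, n_steps=25):
    """Run n_agents explorers in parallel and pool their transitions."""
    shared_buffer = queue.Queue()
    workers = [threading.Thread(target=explore, args=(i, shared_buffer, n_steps))
               for i in range(n_agents)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return [shared_buffer.get() for _ in range(shared_buffer.qsize())]

if __name__ == "__main__":
    pooled = collect_experience()
    print(len(pooled))  # 100 transitions pooled from 4 parallel agents
```

In the paper's setting, a learner would consume this pooled buffer to update policies that drive each joint toward HPos; here the buffer only demonstrates how parallel exploration widens state-space coverage relative to a single agent.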
Pages: 909-918
Page count: 10
Related Papers
50 records in total
  • [21] Reinforcement learning-based adaptive tracking control for flexible-joint robotic manipulators
    Zhong, Huihui
    Wen, Weijian
    Fan, Jianjun
    Yang, Weijun
    AIMS MATHEMATICS, 2024, 9 (10): : 27330 - 27360
  • [22] Body schema learning for robotic manipulators from visual self-perception
    Sturm, Juergen
    Plagemann, Christian
    Burgard, Wolfram
    JOURNAL OF PHYSIOLOGY-PARIS, 2009, 103 (3-5) : 220 - 231
  • [23] A framework for industrial robot training in cloud manufacturing with deep reinforcement learning
    Liu, Yongkui
    Yao, Junying
    Lin, Tingyu
    Xu, He
    Shi, Feng
    Xiao, Yingying
    Zhang, Lin
    Wang, Lihui
    PROCEEDINGS OF THE ASME 2020 15TH INTERNATIONAL MANUFACTURING SCIENCE AND ENGINEERING CONFERENCE (MSEC2020), VOL 2B, 2020,
  • [24] Fuzzy logic based reinforcement learning of admittance control for automated robotic manufacturing
    Prabhu, SM
    Garg, DP
    FIRST INTERNATIONAL CONFERENCE ON KNOWLEDGE-BASED INTELLIGENT ELECTRONIC SYSTEMS, PROCEEDINGS 1997 - KES '97, VOLS 1 AND 2, 1997, : 478 - 487
  • [25] Opportunities and Challenges in Applying Reinforcement Learning to Robotic Manipulation: an Industrial Case Study
    Toner, Tyler
    Saez, Miguel
    Tilbury, Dawn M.
    Barton, Kira
    MANUFACTURING LETTERS, 2023, 35 : 1019 - 1030
  • [26] Deep reinforcement learning enabled self-learning control for energy efficient driving
    Qi, Xuewei
    Luo, Yadan
    Wu, Guoyuan
    Boriboonsomsin, Kanok
    Barth, Matthew
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2019, 99 : 67 - 81
  • [27] Customizing skills for assistive robotic manipulators, an inverse reinforcement learning approach with error-related potentials
    Batzianoulis, Iason
    Iwane, Fumiaki
    Wei, Shupeng
    Correia, Carolina Gaspar Pinto Ramos
    Chavarriaga, Ricardo
    Millan, Jose del R.
    Billard, Aude
    COMMUNICATIONS BIOLOGY, 2021, 4 (01)
  • [29] Model-Based Reinforcement Learning for Closed-Loop Dynamic Control of Soft Robotic Manipulators
    Thuruthel, Thomas George
    Falotico, Egidio
    Renda, Federico
    Laschi, Cecilia
    IEEE TRANSACTIONS ON ROBOTICS, 2019, 35 (01) : 124 - 134
  • [30] High order CMAC-based self-learning controller for robotic manipulators
    Yang, Shengyue
    Fan, Xiaoping
    Changsha Tiedao Xuyuan Xuebao/Journal of Changsha Railway University, 2000, 18 (03): : 29 - 33