Reinforcement Learning of a Six-DOF Industrial Manipulator for Pick-and-Place Application Using Efficient Control in Warehouse Management

Cited: 0
Authors
Iqdymat, Ahmed [1 ]
Stamatescu, Grigore [1 ]
Affiliations
[1] Natl Univ Sci & Technol POLITEHN Bucharest, Dept Automat & Ind Informat, Splaiul Independentei 313, Bucharest 060042, Romania
Keywords
reinforcement learning; energy efficient control; industrial manipulator; pick-and-place; automation; sustainability;
DOI
10.3390/su17020432
CLC classification
X [Environmental Science, Safety Science];
Discipline codes
08 ; 0830 ;
Abstract
This study investigates the integration of reinforcement learning (RL) with optimal control to enhance precision and energy efficiency in industrial robotic manipulation. A novel framework is proposed, combining Deep Deterministic Policy Gradient (DDPG) with a Linear Quadratic Regulator (LQR) controller, specifically applied to the ABB IRB120, a six-degree-of-freedom (6-DOF) industrial manipulator, for pick-and-place tasks in warehouse automation. The methodology employs an actor-critic RL architecture with a 27-dimensional state input and a 6-dimensional joint action output. The RL agent was trained using MATLAB's Reinforcement Learning Toolbox and integrated with ABB's RobotStudio simulation environment via TCP/IP communication. LQR controllers were incorporated to optimize joint-space trajectory tracking, minimizing energy consumption while ensuring precise control. The novelty of this research lies in its synergistic combination of RL and LQR control, addressing energy efficiency and precision simultaneously, an area that has seen limited exploration in industrial robotics. Experimental validation across 100 diverse scenarios confirmed the framework's effectiveness, achieving a mean positioning accuracy of 2.14 mm (a 28% improvement over traditional methods), a 92.5% success rate in pick-and-place tasks, and a 22.7% reduction in energy consumption. The system demonstrated stable convergence after 458 episodes and maintained a mean joint angle error of 4.30 degrees, validating its robustness and efficiency. These findings highlight the potential of RL for broader industrial applications. The demonstrated accuracy and success rate suggest its applicability to complex tasks such as electronic component assembly, multi-step manufacturing, delicate material handling, precision coordination, and quality inspection tasks like automated visual inspection, surface defect detection, and dimensional verification.
Successful implementation in such contexts requires addressing challenges including task complexity, computational efficiency, and adaptability to process variability, alongside ensuring safety, reliability, and seamless system integration. This research builds upon existing advancements in warehouse automation, inverse kinematics, and energy-efficient robotics, contributing to the development of adaptive and sustainable control strategies for industrial manipulators in automated environments.
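The abstract's DDPG-plus-LQR structure, where a learned policy proposes joint targets and an LQR tracks them with minimal effort, can be sketched as follows. This is a minimal illustration only: the double-integrator joint model, cost weights `Q`/`R`, and the `dlqr` helper are assumptions for demonstration, not the paper's actual robot dynamics or tuning.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy double-integrator model of a single joint (position, velocity),
# discretized with time step dt -- a stand-in for the real joint dynamics.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([100.0, 1.0])   # penalize tracking error more than velocity
R = np.array([[0.1]])       # penalize actuation effort (an energy proxy)

K = dlqr(A, B, Q, R)

# One control step toward a target joint angle: in the paper's framework the
# target would come from the trained DDPG actor; here it is hard-coded.
x = np.array([[0.0], [0.0]])      # current joint state
x_ref = np.array([[1.0], [0.0]])  # target state (hypothetical policy output)
u = -K @ (x - x_ref)              # LQR tracking control input
```

Weighting `R` against the position term of `Q` is what trades tracking precision against actuation effort; the closed-loop matrix `A - B K` is stable, so the joint converges to the commanded target.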
Pages: 25
Related Papers
9 records
  • [1] Deep Reinforcement Learning Applied to a Robotic Pick-and-Place Application
    Gomes, Natanael Magno
    Martins, Felipe N.
    Lima, Jose
    Wortche, Heinrich
    OPTIMIZATION, LEARNING ALGORITHMS AND APPLICATIONS, OL2A 2021, 2021, 1488 : 251 - 265
  • [2] Model-Free Dynamic Control of a 3-DoF Delta Parallel Robot for Pick-and-Place Application based on Deep Reinforcement Learning
    Jalali, Hasan
    Samadi, Saba
    Kalhor, Ahmad
    Masouleh, Mehdi Tale
    2022 10TH RSI INTERNATIONAL CONFERENCE ON ROBOTICS AND MECHATRONICS (ICROM), 2022, : 48 - 54
  • [3] Control and Operation of 4 DOF Industrial Pick and Place Robot Using HMI
    Dubey, Akshay P.
    Pattnaik, Santosh Mohan
    Saravanakumar, R.
    PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON SOFT COMPUTING SYSTEMS, ICSCS 2015, VOL 1, 2016, 397 : 787 - 798
  • [4] Pick and Place Operations in Logistics Using a Mobile Manipulator Controlled with Deep Reinforcement Learning
    Iriondo, Ander
    Lazkano, Elena
    Susperregi, Loreto
    Urain, Julen
    Fernandez, Ane
    Molina, Jorge
    APPLIED SCIENCES-BASEL, 2019, 9 (02)
  • [5] Simulated and Real Robotic Reach, Grasp, and Pick-and-Place Using Combined Reinforcement Learning and Traditional Controls
    Lobbezoo, Andrew
    Kwon, Hyock-Ju
    ROBOTICS, 2023, 12 (01)
  • [6] Prehensile and Non-Prehensile Robotic Pick-and-Place of Objects in Clutter Using Deep Reinforcement Learning
    Imtiaz, Muhammad Babar
    Qiao, Yuansong
    Lee, Brian
    SENSORS, 2023, 23 (03)
  • [7] Realization of highly energy efficient pick-and-place tasks using resonance-based robot motion control
    Matsusaka, Kento
    Uemura, Mitsunori
    Kawamura, Sadao
    ADVANCED ROBOTICS, 2016, 30 (09) : 608 - 620
  • [8] Design of a prototype of manipulator arm for implementing pick-and-place task in industrial robot system using TCS3200 color sensor and ATmega2560 microcontroller
    Najmurrokhman, A.
    Kusnandar, K.
    Maulana, F.
    Wibowo, B.
    Nurlina, E.
    ANNUAL CONFERENCE OF SCIENCE AND TECHNOLOGY, 2019, 1375
  • [9] A Deep Reinforcement Learning-based Application Framework for Conveyor Belt-based Pick-and-Place Systems using 6-axis Manipulators under Uncertainty and Real-time Constraints
    Le, Tuyen P.
    Lee, DongHyun
    Choi, DaeWoo
    2021 18TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS (UR), 2021, : 464 - 470