Tuning of Model Predictive Control for Loading Motion of Automated Excavator Using Deep Reinforcement Learning

Cited by: 0
Authors
Ishihara S. [1]
Ohtsuka T. [2]
Affiliations
[1] Hitachi Ltd., Research & Development Group, 7-1-1, Omika, Ibaraki, Hitachi
[2] Kyoto University, Graduate School of Informatics, Department of Informatics, Yoshida-honmachi, Sakyo-ku, Kyoto
Keywords
automated excavator; deep reinforcement learning; model predictive control; parameter tuning;
DOI
10.1541/ieejeiss.144.552
Abstract
This study deals with the control problem of automating the operation of an excavator loading soil onto the bed of a dump truck. During the loading operation, the bucket must not touch the dump truck and should spill as little of the soil in the bucket as possible. We have been studying how to apply Model Predictive Control (MPC) to this problem to achieve an ideal loading operation. To achieve the desired operation with MPC, it is extremely important to tune the weights of the objective function appropriately. However, since this control problem depends on the situation, that is, on the initial posture of the excavator and the position of the truck, optimizing the weights for one specific condition is not desirable. We therefore constructed a method that uses reinforcement learning to generate suitable weight parameters according to the loading situation. The effectiveness of the proposed method was verified by numerical simulations. © 2024 The Institute of Electrical Engineers of Japan.
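The abstract describes the approach only at a high level: a policy learned with deep reinforcement learning maps the loading situation (initial excavator posture, truck position) to the weight parameters of the MPC objective function. As a minimal sketch of that situation-to-weights idea, the Python example below trains a linear policy with a simple evolution-strategies update standing in for the paper's deep RL algorithm; the state and weight dimensions, the policy form, and the stub run_mpc_episode simulator are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch only: learn a mapping from loading situation to MPC objective weights.
# All dimensions, names, and the toy simulator below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

SITUATION_DIM = 4   # e.g. boom/arm/bucket angles + truck distance (assumed)
WEIGHT_DIM = 3      # e.g. weights on tracking error, spillage proxy, collision margin (assumed)

def policy(theta, situation):
    """Linear policy: situation -> positive MPC weights (softplus keeps them > 0)."""
    W = theta.reshape(WEIGHT_DIM, SITUATION_DIM)
    return np.log1p(np.exp(W @ situation))

def run_mpc_episode(weights, situation):
    """Stub for 'run the MPC-controlled loading motion and score it'.
    A real setup would simulate the excavator under MPC with these objective weights
    and reward low soil spillage and clearance from the truck bed."""
    target = np.array([1.0, 0.5, 2.0]) * (1.0 + 0.1 * situation[:WEIGHT_DIM].sum())
    return -np.sum((weights - target) ** 2)   # toy reward: closeness to a situation-dependent optimum

theta = rng.normal(scale=0.1, size=SITUATION_DIM * WEIGHT_DIM)
sigma, lr, n_pop = 0.05, 0.02, 32

for it in range(200):
    situation = rng.uniform(-1.0, 1.0, size=SITUATION_DIM)   # sample a loading situation
    noise = rng.normal(size=(n_pop, theta.size))
    rewards = np.array([
        run_mpc_episode(policy(theta + sigma * eps, situation), situation)
        for eps in noise
    ])
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    theta += lr / (n_pop * sigma) * noise.T @ adv             # ES gradient estimate

print("learned weights for a sample situation:",
      policy(theta, np.zeros(SITUATION_DIM)))
```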
Pages: 552-559
Page count: 8
Related papers
50 records in total
  • [31] Realization of Excavator Loading Operation by Nonlinear Model Predictive Control with Bucket Load Estimation
    Ishihara, Shinji
    Kanazawa, Akira
    Narikawa, Ryu
    IFAC PAPERSONLINE, 2021, 54 (20): 20-25
  • [32] Automated function development for emission control with deep reinforcement learning
    Koch, Lucas
    Picerno, Mario
    Badalian, Kevin
    Lee, Sung-Yong
    Andert, Jakob
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 117
  • [33] Multiagent Deep Reinforcement Learning for Automated Truck Platooning Control
    Lian, Renzong
    Li, Zhiheng
    Wen, Boxuan
    Wei, Junqing
    Zhang, Jiawei
    Li, Li
    IEEE INTELLIGENT TRANSPORTATION SYSTEMS MAGAZINE, 2024, 16 (01): 116-131
  • [34] Local motion simulation using deep reinforcement learning
    Xu, Dong
    Huang, Xiao
    Li, Zhenlong
    Li, Xiang
    TRANSACTIONS IN GIS, 2020, 24 (03): 756-779
  • [35] Model Predictive Control Guided Reinforcement Learning Control Scheme
    Xie, Huimin
    Xu, Xinghai
    Li, Yuling
    Hong, Wenjing
    Shi, Jia
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020
  • [36] Adaptive Coordinated Motion Control: Automated Tuning for Predictive Safety in Electric Vehicles
    Sun, Haobo
    Zhang, Lin
    Yang, Yanding
    Ye, Xiaoming
    Liu, Xiaoyan
    Chen, Hong
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2024
  • [37] Containerized Framework for Building Control Performance Comparisons: Model Predictive Control vs Deep Reinforcement Learning Control
    Fu, Yangyang
    Xu, Shichao
    Zhu, Qi
    O'Neill, Zheng
    BUILDSYS'21: PROCEEDINGS OF THE 2021 ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILT ENVIRONMENTS, 2021: 276-280
  • [38] Model Predictive Control-Based Reinforcement Learning Using Expected Sarsa
    Moradimaryamnegari, Hoomaan
    Frego, Marco
    Peer, Angelika
    IEEE ACCESS, 2022, 10: 81177-81191
  • [39] Hierarchical Evasive Path Planning Using Reinforcement Learning and Model Predictive Control
    Feher, Arpad
    Aradi, Szilard
    Becsi, Tamas
    IEEE ACCESS, 2020, 8: 187470-187482
  • [40] Training Dynamic Motion Primitives using Deep Reinforcement Learning to Control a Robotic Tadpole
    Hameed, Imran
    Chao, Xu
    Navarro-Alarcon, David
    Jing, Xingjian
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022: 6881-6887