Federated Discrete Reinforcement Learning for Automatic Guided Vehicle Control

Cited: 2
Authors
Sierra-Garcia, J. Enrique [1 ]
Santos, Matilde [2 ]
Affiliations
[1] Univ Burgos, Electromech Engn Dept, Burgos 09006, Spain
[2] Univ Complutense Madrid, Inst Knowledge Technol, Madrid 28040, Spain
Keywords
Automated guided vehicle (AGV); Federated learning; Industry 4.0; Intelligent control; Path following; Reinforcement learning; AGV
DOI
10.1016/j.future.2023.08.021
Chinese Library Classification
TP301 [Theory, Methods];
Subject classification code
081202;
Abstract
Under the federated learning paradigm, agents learn in parallel and combine their knowledge to build a global knowledge model. This machine learning strategy increases privacy and reduces communication costs, benefits that can be very useful for industrial applications deployed at the edge. Automatic Guided Vehicles (AGVs) can take advantage of this approach since they can be considered intelligent agents, operate in fleets, and are normally managed by a central system that can run at the edge and handle the knowledge of each vehicle to obtain a global emerging behavioral model. Furthermore, this idea can be combined with reinforcement learning (RL). This way, the AGVs interact with the system to learn, according to the policy implemented by the RL algorithm, to follow specified routes, and send their findings to the main system. The centralized system collects this information into a group policy and returns it to the AGVs. In this work, a novel Federated Discrete Reinforcement Learning (FDRL) approach is implemented to control the trajectories of a fleet of AGVs. Each industrial AGV runs the modules that correspond to an RL system: a state estimator, a reward calculator, an action selector, and a policy update algorithm. AGVs share their policy variation with the federated server, which combines them into a group policy with a learning aggregation function. To validate the proposal, simulation results of the FDRL control of five hybrid tricycle-differential AGVs on four different trajectories (ellipse, lemniscate, octagon, and a closed 16-segment polyline) have been obtained and compared with a Proportional Integral Derivative (PID) controller optimized with genetic algorithms. The intelligent control approach shows an average improvement of 78% in mean absolute error, 75% in root mean square error, and 73% in standard deviation. This approach also accelerates learning by up to 50%, depending on the trajectory, with an average speed-up of 36%, while allowing precise tracking. The suggested federated-learning-based technique also outperforms an optimized fuzzy logic controller (FLC) for all of the measured trajectories. In addition, different learning aggregation functions have been proposed and evaluated. The influence of the number of vehicles (from 2 to 10) on path-following performance and on network transmission has also been analyzed. © 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
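
The abstract outlines the FDRL loop: each AGV runs local RL modules (state estimator, reward calculator, action selector, policy update), reports its policy variation to the federated server, and the server merges the variations into a group policy with a learning aggregation function. The following is a minimal sketch of that loop, not the paper's exact algorithm: the tabular Q-learning update, the epsilon-greedy action selector, the state/action discretization sizes, the stand-in AGV dynamics and reward, and the mean-based aggregation rule are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a federated discrete-RL round (assumptions, not the paper's algorithm).
N_STATES, N_ACTIONS = 64, 5      # assumed discretization of the AGV state/action spaces
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

class AGVAgent:
    """Local RL modules on one AGV: action selection and policy update
    (tabular Q-learning used here as a placeholder for the discrete policy)."""

    def __init__(self, global_q):
        self.q = global_q.copy()      # start each round from the current group policy
        self.start = global_q.copy()  # kept to compute the policy variation

    def select_action(self, s):
        # epsilon-greedy action selector (assumed)
        if np.random.rand() < EPS:
            return np.random.randint(N_ACTIONS)
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next):
        # standard tabular Q-learning policy update
        td = r + GAMMA * np.max(self.q[s_next]) - self.q[s, a]
        self.q[s, a] += ALPHA * td

    def policy_variation(self):
        # what the AGV would share with the federated server
        return self.q - self.start

def aggregate(global_q, variations):
    """Learning aggregation function on the federated server.
    A plain mean of the policy variations is used here as a placeholder."""
    return global_q + np.mean(variations, axis=0)

# One federated round with a fleet of 5 AGVs (environment calls are stubs).
global_q = np.zeros((N_STATES, N_ACTIONS))
agents = [AGVAgent(global_q) for _ in range(5)]
for agent in agents:
    s = np.random.randint(N_STATES)
    for _ in range(100):                       # local learning steps on the AGV
        a = agent.select_action(s)
        s_next = np.random.randint(N_STATES)   # stand-in for the AGV/track dynamics
        r = -abs(s_next - s) / N_STATES        # stand-in for the path-following reward
        agent.update(s, a, r, s_next)
        s = s_next
global_q = aggregate(global_q, [ag.policy_variation() for ag in agents])
```

The key design point the sketch mirrors is that only policy variations travel to the server, never raw trajectories, which is what gives the privacy and communication benefits claimed in the abstract.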
Pages: 78-89
Number of pages: 12
Related papers
50 records in total
  • [31] Federated Reinforcement Learning for Electric Vehicles Charging Control on Distribution Networks
    Qian, Junkai
    Jiang, Yuning
    Liu, Xin
    Wang, Qiong
    Wang, Ting
    Shi, Yuanming
    Chen, Wei
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (03) : 5511 - 5525
  • [32] Access Control for RAN Slicing based on Federated Deep Reinforcement Learning
    Liu, Yi-Jing
    Feng, Gang
    Wang, Jian
    Sun, Yao
    Qin, Shuang
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021,
  • [33] LongiControl: A Reinforcement Learning Environment for Longitudinal Vehicle Control
    Dohmen, Jan
    Liessner, Roman
    Friebel, Christoph
    Baeker, Bernard
    ICAART: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 2, 2021, : 1030 - 1037
  • [34] Attitude Control of Hypersonic Vehicle based on Reinforcement Learning
    Liu, Jingwen
    Fan, Hongdong
    Fan, Yonghua
    Cai, Guangbin
    2024 3RD CONFERENCE ON FULLY ACTUATED SYSTEM THEORY AND APPLICATIONS, FASTA 2024, 2024, : 1503 - 1507
  • [35] Neural reinforcement learning for the control of an autonomous mobile vehicle
    Cicirelli, G
    D'Orazio, T
    Ancona, N
    Distante, A
    IASTED: PROCEEDINGS OF THE IASTED INTERNATIONAL CONFERENCE ON ROBOTICS AND APPLICATIONS, 2003, : 18 - 23
  • [36] A Fair Federated Learning Framework With Reinforcement Learning
    Sun, Yaqi
    Si, Shijing
    Wang, Jianzong
    Dong, Yuhan
    Zhu, Zhitao
    Xiao, Jing
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [37] Vehicles Control: Collision Avoidance using Federated Deep Reinforcement Learning
    Ben Elallid, Badr
    Abouaomar, Amine
    Benamar, Nabil
    Kobbane, Abdellatif
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 4369 - 4374
  • [38] A scalable approach to optimize traffic signal control with federated reinforcement learning
    Jingjing Bao
    Celimuge Wu
    Yangfei Lin
    Lei Zhong
    Xianfu Chen
    Rui Yin
    Scientific Reports, 13
  • [39] Unified Automatic Control of Vehicular Systems With Reinforcement Learning
    Yan, Zhongxia
    Kreidieh, Abdul Rahman
    Vinitsky, Eugene
    Bayen, Alexandre M.
    Wu, Cathy
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2023, 20 (02) : 789 - 804
  • [40] Learning Transferable Policy in Reinforcement Learning for Vehicle Velocity Tracking Control
    Natsu Y.
    Hamagami T.
    Kanke M.
    Yoshida K.
    Niwakawa M.
    IEEJ Transactions on Electronics, Information and Systems, 2021, 141 (12) : 1492 - 1499