Transition to intelligent fleet management systems in open pit mines: A critical review on application of reinforcement-learning-based systems

Cited by: 3
Authors
Hazrathosseini, Arman [1 ]
Moradi Afrapoli, Ali [1 ]
Affiliations
[1] Laval Univ, Dept Min Met & Mat Engn, IntelMine Lab, 1728 Pavillon Adrien Pouliot,1065 Ave Medecine, Quebec City, PQ G1V 0A6, Canada
Keywords
open-pit mines; fleet management system; truck-shovel system; intelligent dispatching; reinforcement learning; multi-agent algorithm; NEURAL-NETWORKS; OPTIMIZATION; GAME
DOI
10.1177/25726668231222998
CLC number (Chinese Library Classification)
TD [Mining Engineering]
Discipline code
0819
Abstract
The mathematical methods developed so far for truck dispatching in fleet management systems (FMSs) of open-pit mines fail to capture the autonomy and dynamicity demanded by Mining 4.0, which has driven the popularity of reinforcement learning (RL) methods capable of responding to real-time operational changes. Nonetheless, this nascent field lacks a comprehensive study that identifies the shortfalls of previous work and guides more mature future research. To fill this gap, the present study critically reviews previously published articles on RL-based mine FMSs, both by developing a five-class scale covering 29 widely used dispatching features and by reviewing the fundamentals and trends of RL. Results show that 60% of those features were neglected in previous works and that the underlying algorithms leave considerable room for improvement. The study also lays out future research directions, pertinent challenges and possible solutions.
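To make the dispatching setting concrete, the sketch below (illustrative only, not the reviewed authors' method) shows how a tabular Q-learning agent might assign an idle truck to one of several shovels; the state encoding, reward signal and queue dynamics are simplified assumptions.

```python
# Minimal sketch, assuming a toy environment: a tabular Q-learning dispatcher
# that sends an empty truck to one of several shovels, penalising long queues.
import random

N_SHOVELS = 3                       # hypothetical number of loading points
N_EPISODES = 2000                   # training episodes
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

# State: tuple of discretised queue lengths (0-3 trucks) at each shovel.
# Action: index of the shovel the idle truck is dispatched to.
q_table = {}  # maps (state, action) -> estimated return

def get_q(state, action):
    return q_table.get((state, action), 0.0)

def choose_action(state):
    # Epsilon-greedy policy over the shovel indices.
    if random.random() < EPS:
        return random.randrange(N_SHOVELS)
    return max(range(N_SHOVELS), key=lambda a: get_q(state, a))

def step(state, action):
    # Toy dynamics: reward is the negative waiting time, approximated by the
    # queue length at the chosen shovel; other queues drift randomly.
    reward = -float(state[action])
    next_state = tuple(
        min(3, max(0, q + (1 if i == action else random.choice([-1, 0, 1]))))
        for i, q in enumerate(state)
    )
    return next_state, reward

for _ in range(N_EPISODES):
    state = tuple(random.randint(0, 3) for _ in range(N_SHOVELS))
    for _ in range(20):  # dispatch decisions per episode
        action = choose_action(state)
        next_state, reward = step(state, action)
        # Standard Q-learning update.
        best_next = max(get_q(next_state, a) for a in range(N_SHOVELS))
        q_table[(state, action)] = get_q(state, action) + ALPHA * (
            reward + GAMMA * best_next - get_q(state, action)
        )
        state = next_state

# After training, the greedy policy tends to favour the shortest queue.
print(choose_action((3, 0, 2)))
```

In a realistic FMS the state would also carry haul distances, truck capacities, blending targets and equipment availability, which is why the reviewed works move from tabular methods to deep and multi-agent RL.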
Pages: 50-73
Page count: 24