Hierarchical reinforcement learning for transportation infrastructure maintenance planning

Cited: 7
Authors
Hamida, Zachary [1 ]
Goulet, James-A. [1 ]
Affiliations
[1] Polytech Montreal, Dept Civil Geol & Min Engn, 2500 Chem Polytech, Montreal, PQ H3T 1J4, Canada
Keywords
Maintenance planning; Reinforcement learning; RL environment; Deep Q-learning; Infrastructure deterioration; State-space models;
DOI
10.1016/j.ress.2023.109214
Chinese Library Classification (CLC)
T [Industrial Technology];
Discipline code
08;
Abstract
Maintenance planning for bridges commonly faces multiple challenges, mainly related to complexity and scale. These challenges stem from the large number of structural elements in each bridge, in addition to the uncertainties surrounding their health condition, which is monitored using visual inspections at the element-level. Recent developments have relied on deep reinforcement learning (RL) for solving maintenance planning problems, with the aim of minimizing the long-term costs. Nonetheless, existing RL-based solutions have adopted approaches that often lacked the capacity to scale due to the inherently large state and action spaces. The aim of this paper is to introduce a hierarchical RL formulation for maintenance planning, which naturally adapts to the hierarchy of information and decisions in infrastructure. The hierarchical formulation enables decomposing large state and action spaces into smaller ones by relying on state and temporal abstraction. An additional contribution of this paper is the development of an open-source RL environment that uses state-space models (SSM) to describe the propagation of the deterioration condition and speed over time. The functionality of this new environment is demonstrated by solving maintenance planning problems at the element-level and the bridge-level.
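To make the SSM idea in the abstract concrete, the sketch below simulates a deterioration process whose state tracks both a condition and a deterioration speed, with the condition decaying by the current speed at each step. This is a minimal illustrative sketch only: the linear-Gaussian transition matrix, noise levels, and variable names are assumptions for illustration, not the calibrated model or the API of the paper's open-source environment.

```python
import numpy as np

rng = np.random.default_rng(0)

# State x = [condition, speed]: the condition decays by the current speed
# each year, and the speed itself drifts slowly (linear-Gaussian SSM).
A = np.array([[1.0, 1.0],    # condition_{t+1} = condition_t + speed_t
              [0.0, 1.0]])   # speed_{t+1}     = speed_t
Q = np.diag([0.05, 0.01])    # process noise on condition and speed (assumed)

x = np.array([100.0, -1.5])  # start in perfect condition, deteriorating

trajectory = [x[0]]
for year in range(10):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    trajectory.append(x[0])

print([round(c, 1) for c in trajectory])
```

Tracking the speed as part of the state, rather than assuming a fixed decay rate, is what lets such a model capture accelerating deterioration between inspections.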
Pages: 12
Related papers
50 records in total
  • [41] Modified Reinforcement Learning Infrastructure
    Suomala, Jyrki
    Suomala, Ville
    PROCEEDINGS OF THE 2ND INTERNATIONAL CONFERENCE ON APPLIED SOCIAL SCIENCE RESEARCH, 2014, 104 : 95 - 97
  • [42] Reinforcement Learning Based Trajectory Planning for Multi-UAV Load Transportation
    Estevez, Julian
    Manuel Lopez-Guede, Jose
    del Valle-Echavarri, Javier
    Grana, Manuel
    IEEE ACCESS, 2024, 12 : 144009 - 144016
  • [43] A hierarchical deep reinforcement learning method for coupled transportation and power distribution system dispatching
    Han, Qi
    Li, Xueping
    He, Liangce
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 145
  • [44] Urban transportation: innovations in infrastructure planning and development
    Narayanaswami, Sundaravalli
    INTERNATIONAL JOURNAL OF LOGISTICS MANAGEMENT, 2017, 28 (01) : 150 - 171
  • [45] Hierarchical Reinforcement Learning for Autonomous Decision Making and Motion Planning of Intelligent Vehicles
    Lu, Yang
    Xu, Xin
    Zhang, Xinglong
    Qian, Lilin
    Zhou, Xing
    IEEE ACCESS, 2020, 8 : 209776 - 209789
  • [46] Transportation Infrastructure Planning, Management, and Finance INTRODUCTION
    Doll, Claus
    Durango-Cohen, Pablo L.
    Ueda, Takayuki
    JOURNAL OF INFRASTRUCTURE SYSTEMS, 2009, 15 (04) : 261 - 262
  • [47] Hierarchical Multi-Robot Pursuit with Deep Reinforcement Learning and Navigation Planning
    Chen, Wenzhang
    Zhu, Yuanheng
    39TH YOUTH ACADEMIC ANNUAL CONFERENCE OF CHINESE ASSOCIATION OF AUTOMATION, YAC 2024, 2024, : 1274 - 1280
  • [48] Hierarchical Evasive Path Planning Using Reinforcement Learning and Model Predictive Control
    Feher, Arpad
    Aradi, Szilard
    Becsi, Tamas
    IEEE ACCESS, 2020, 8 : 187470 - 187482
  • [49] HRL-Painter: Optimal planning painter based on hierarchical reinforcement learning
    Zhang, Jiong
    Xu, Guangxin
    Zhang, Xiaoyan
    NEUROCOMPUTING, 2025, 636
  • [50] Hierarchical production control and distribution planning under retail uncertainty with reinforcement learning
    Deng, Yang
    Chow, Andy H. F.
    Yan, Yimo
    Su, Zicheng
    Zhou, Zhili
    Kuo, Yong-Hong
    INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH, 2025,