Optimal Maintenance Policy for Corroded Oil and Gas Pipelines using Markov Decision Processes

Cited by: 2
Authors
Heidary, Roohollah [1 ]
Prasad-Rao, Jubilee [1 ]
Groth, Katrina M. [2 ]
Affiliations
[1] Global Technol Connect Inc, Atlanta, GA 30339 USA
[2] Univ Maryland, Ctr Risk & Reliabil, Syst Risk & Reliabil Anal Lab SyRRA, College Pk, MD 20742 USA
Keywords
PITTING CORROSION; RELIABILITY; GROWTH; MODEL
DOI
10.36001/IJPHM.2022.v13i1.3106
Chinese Library Classification (CLC)
T [Industrial Technology]
Discipline classification code
08
Abstract
This paper presents a novel approach to determine optimal maintenance policies for oil and gas pipelines degraded by internal pitting corrosion. The approach builds a bridge between Markov process-based corrosion rate models and Markov decision processes (MDP), which allows both short-term and long-term costs to be considered when optimizing pipeline maintenance operations. To implement the MDP, transition probability matrices between successive degradation states are estimated from the pipeline degradation Markov processes. A case study is implemented with four pipeline failure modes (i.e., safe, small leak, large leak, and rupture) and four maintenance actions (i.e., do nothing, adding corrosion inhibitors, pigging, and replacement), assuming perfect pipeline inspections. Monte Carlo simulation is performed on 10,000 initial pits using the selected corrosion models and assumed maintenance and failure costs to determine an optimal maintenance policy.
Pages: 1 - 8
Page count: 8
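
To illustrate how an MDP of the kind described in the abstract can be solved for a maintenance policy, the following is a minimal value-iteration sketch in Python. The four states and four actions mirror those named in the abstract, but every transition probability, cost figure, and the discount factor are assumed placeholders for illustration only, not the paper's estimated corrosion models or costs.

import numpy as np

# Illustrative sketch only: states and actions follow the abstract, but all
# transition probabilities and costs below are assumed placeholder values.
states = ["safe", "small leak", "large leak", "rupture"]
actions = ["do nothing", "inhibitor", "pigging", "replace"]

# P[a][s, t] = assumed probability of moving from state s to state t under action a.
P = np.array([
    # do nothing: corrosion pits keep growing
    [[0.80, 0.15, 0.04, 0.01],
     [0.00, 0.70, 0.20, 0.10],
     [0.00, 0.00, 0.60, 0.40],
     [0.00, 0.00, 0.00, 1.00]],
    # corrosion inhibitors: slow pit growth
    [[0.90, 0.08, 0.015, 0.005],
     [0.00, 0.85, 0.10, 0.05],
     [0.00, 0.00, 0.75, 0.25],
     [0.00, 0.00, 0.00, 1.00]],
    # pigging: partially restores the segment
    [[0.95, 0.04, 0.008, 0.002],
     [0.50, 0.45, 0.04, 0.01],
     [0.30, 0.40, 0.25, 0.05],
     [0.00, 0.00, 0.00, 1.00]],
    # replacement: returns the segment to the safe state
    [[1.00, 0.00, 0.00, 0.00],
     [1.00, 0.00, 0.00, 0.00],
     [1.00, 0.00, 0.00, 0.00],
     [1.00, 0.00, 0.00, 0.00]],
])

# cost[a, s]: assumed maintenance cost of action a plus failure cost of state s (k$).
cost = np.array([
    [0,    50, 500, 5000],   # do nothing
    [5,    55, 505, 5005],   # inhibitor
    [20,   70, 520, 5020],   # pigging
    [200, 250, 700, 5200],   # replace
])

gamma = 0.95                  # discount factor balancing short- vs long-term cost
V = np.zeros(len(states))     # expected discounted cost-to-go per state
for _ in range(1000):         # value iteration until (approximate) convergence
    Q = cost + gamma * np.einsum("ast,t->as", P, V)  # action-value for each (a, s)
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmin(axis=0)     # cost-minimizing action in each state
for s, a in zip(states, policy):
    print(f"{s:>10}: {actions[a]}")

With these placeholder numbers, the cheap preventive actions are favored in the early degradation states and replacement only once failure costs dominate; the paper's actual policy depends on its estimated transition matrices and Monte Carlo-derived costs.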