Hierarchical framework for interpretable and specialized deep reinforcement learning-based predictive maintenance

Cited by: 6
Authors
Abbas, Ammar N. [1,2]
Chasparis, Georgios C. [1 ]
Kelleher, John D. [3 ]
Affiliations
[1] Software Competence Ctr Hagenberg, Data Sci, Softwarepk 32a, A-4232 Hagenberg, Austria
[2] Technol Univ Dublin, Dept Comp Sci, Dublin D02HW71, Ireland
[3] Maynooth Univ, ADAPT Res Ctr, Maynooth W23 A3HY, Ireland
Funding
Science Foundation Ireland;
Keywords
Deep reinforcement learning; Probabilistic modeling; Input-output hidden Markov model; Predictive maintenance; Industry 5.0; Interpretable reinforcement learning; GO;
DOI
10.1016/j.datak.2023.102240
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep reinforcement learning holds significant potential for application in industrial decision-making, offering a promising alternative to traditional physical models. However, its black-box learning approach presents challenges for real-world and safety-critical systems, as it lacks interpretability and explanations for the derived actions. Moreover, a key research question in deep reinforcement learning is how to focus policy learning on critical decisions within sparse domains. This paper introduces a novel approach that combines probabilistic modeling and reinforcement learning, providing interpretability and addressing these challenges in the context of safety-critical predictive maintenance. The methodology is activated in specific situations identified through the input-output hidden Markov model, such as critical conditions or near-failure scenarios. To mitigate the challenges associated with deep reinforcement learning in safety-critical predictive maintenance, the approach is initialized with a baseline policy using behavioral cloning, requiring minimal interactions with the environment. The effectiveness of this framework is demonstrated through a case study on predictive maintenance for turbofan engines, outperforming previous approaches and baselines, while also providing the added benefit of interpretability. Importantly, while the framework is applied to a specific use case, this paper aims to present a general methodology that can be applied to diverse predictive maintenance applications.
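The abstract describes a two-level design: an input-output hidden Markov model flags critical or near-failure situations, a deep reinforcement learning policy is consulted only in those situations, and that policy is warm-started with behavioral cloning from a baseline so that few environment interactions are needed. The Python sketch below illustrates only this gating-plus-warm-start control flow under simplified assumptions; it is not the authors' implementation, and every name in it (CriticalStateDetector, MaintenancePolicy, behavioral_cloning_warm_start, select_action) is hypothetical, with a simple threshold standing in for the trained IOHMM.

```python
# Conceptual sketch (not the paper's code) of gated decision-making with a
# behavioral-cloning warm start. All names are illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn


class CriticalStateDetector:
    """Stand-in for the input-output hidden Markov model: it decides whether the
    current sensor window corresponds to a critical / near-failure regime.
    Here a crude threshold on a degradation proxy is used purely for illustration."""

    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold

    def degradation_score(self, sensor_window: np.ndarray) -> float:
        # Placeholder health proxy: mean of normalized sensor readings in [0, 1].
        return float(np.clip(sensor_window.mean(), 0.0, 1.0))

    def is_critical(self, sensor_window: np.ndarray) -> bool:
        return self.degradation_score(sensor_window) > self.threshold


class MaintenancePolicy(nn.Module):
    """Small policy network over discrete maintenance actions
    (e.g. continue, inspect, replace)."""

    def __init__(self, obs_dim: int, n_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def behavioral_cloning_warm_start(policy: MaintenancePolicy, demos, epochs: int = 5):
    """Initialize the policy from (observation, action) pairs produced by a
    baseline policy, before any reinforcement learning fine-tuning."""
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    obs = torch.as_tensor(np.stack([d[0] for d in demos]), dtype=torch.float32)
    acts = torch.as_tensor([d[1] for d in demos], dtype=torch.long)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(obs), acts)
        loss.backward()
        opt.step()


def select_action(detector, policy, sensor_window, default_action: int = 0) -> int:
    """Hierarchical decision rule: keep the default operating action unless the
    detector flags a critical condition, in which case the learned policy decides."""
    if not detector.is_critical(sensor_window):
        return default_action  # e.g. "continue operation"
    obs = torch.as_tensor(sensor_window, dtype=torch.float32).flatten().unsqueeze(0)
    with torch.no_grad():
        return int(policy(obs).argmax(dim=-1).item())
```

In the framework the abstract describes, the detector's role would be played by the IOHMM's inferred hidden degradation state rather than a fixed threshold, and the cloned policy would subsequently be fine-tuned with deep reinforcement learning; the sketch only fixes the overall control flow.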
Pages: 28
Related papers
50 records in total
  • [41] Predictive maintenance decision-making for serial production lines based on deep reinforcement learning
    Cui P.
    Wang J.
    Zhang W.
    Li Y.
    Computer Integrated Manufacturing Systems (CIMS), 2021, 27 (12) : 3416 - 3428
  • [42] A Reinforcement Learning-Based Follow-up Framework
    Astudillo, Javiera
    Protopapas, Pavlos
    Pichara, Karim
    Becker, Ignacio
    ASTRONOMICAL JOURNAL, 2023, 165 (03)
  • [43] Predictive Maintenance Model for IIoT-Based Manufacturing: A Transferable Deep Reinforcement Learning Approach
    Ong, Kevin Shen Hoong
    Wang, Wenbo
    Hieu, Nguyen Quang
    Niyato, Dusit
    Friedrichs, Thomas
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (17) : 15725 - 15741
  • [44] Gamification Framework for Reinforcement Learning-based Neuropsychology Experiments
    Chetitah, Mounsif
    Mueller, Julian
    Deserno, Lorenz
    Waltmann, Maria
    von Mammen, Sebastian
    PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON THE FOUNDATIONS OF DIGITAL GAMES, FDG 2023, 2023,
  • [45] Deep Learning-Based Predictive Framework for Groundwater Level Forecast in Arid Irrigated Areas
    Liu, Wei
    Yu, Haijiao
    Yang, Linshan
    Yin, Zhenliang
    Zhu, Meng
    Wen, Xiaohu
    WATER, 2021, 13 (18)
  • [46] RLPS: A Reinforcement Learning-Based Framework for Personalized Search
    Yao, Jing
    Dou, Zhicheng
    Xu, Jun
    Wen, Ji-Rong
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2021, 39 (03)
  • [47] LSTM-Autoencoder-based Interpretable Predictive Maintenance Framework for Industrial Systems
    Agrawal, Anmol
    Sinha, Aparna
    Das, Debanjan
    2024 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE, I2MTC 2024, 2024,
  • [48] Evolutionary Framework With Reinforcement Learning-Based Mutation Adaptation
    Sallam, Karam M.
    Elsayed, Saber M.
    Chakrabortty, Ripon K.
    Ryan, Michael J.
    IEEE ACCESS, 2020, 8 : 194045 - 194071
  • [49] Reinforcement Learning-based Hierarchical Seed Scheduling for Greybox Fuzzing
    Wang, Jinghan
    Song, Chengyu
    Yin, Heng
    28TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2021), 2021,
  • [50] A Deep Reinforcement Learning-Based Approach in Porker Game
    Kong, Yan
    Rui, Yefeng
    Hsia, Chih-Hsien
    Journal of Computers (Taiwan), 2023, 34 (02) : 41 - 51