Hierarchical framework for interpretable and specialized deep reinforcement learning-based predictive maintenance

Cited by: 6
Authors
Abbas, Ammar N. [1 ,2 ]
Chasparis, Georgios C. [1 ]
Kelleher, John D. [3 ]
Affiliations
[1] Software Competence Ctr Hagenberg, Data Sci, Softwarepk 32a, A-4232 Hagenberg, Austria
[2] Technol Univ Dublin, Dept Comp Sci, Dublin D02HW71, Ireland
[3] Maynooth Univ, ADAPT Res Ctr, Maynooth W23 A3HY, Ireland
Funding
Science Foundation Ireland
Keywords
Deep reinforcement learning; Probabilistic modeling; Input-output hidden Markov model; Predictive maintenance; Industry 5.0; Interpretable reinforcement learning
DOI
10.1016/j.datak.2023.102240
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep reinforcement learning holds significant potential for industrial decision-making, offering a promising alternative to traditional physical models. However, its black-box learning approach poses challenges for real-world and safety-critical systems, as it lacks interpretability and explanations for the derived actions. Moreover, a key research question in deep reinforcement learning is how to focus policy learning on critical decisions within sparse domains. This paper introduces a novel approach that combines probabilistic modeling and reinforcement learning, providing interpretability and addressing these challenges in the context of safety-critical predictive maintenance. The specialized policy is activated only in specific situations identified through the input-output hidden Markov model, such as critical conditions or near-failure scenarios. To mitigate the challenges associated with deep reinforcement learning in safety-critical predictive maintenance, the approach is initialized with a baseline policy obtained via behavioral cloning, requiring minimal interaction with the environment. The effectiveness of this framework is demonstrated through a case study on predictive maintenance for turbofan engines, where it outperforms previous approaches and baselines while also providing the added benefit of interpretability. Importantly, although the framework is applied to a specific use case, the paper presents a general methodology that can be applied to diverse predictive maintenance applications.
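The abstract describes a hierarchical scheme in which a probabilistic state model flags critical or near-failure situations and a specialized, behavior-cloned and RL-fine-tuned policy is consulted only in those situations. The sketch below is not the authors' code; it assumes a toy two-state forward filter in place of the paper's input-output hidden Markov model and simple placeholder rules in place of the behavior-cloned DRL agent, purely to illustrate the gating logic.

```python
# Minimal sketch (assumptions, not the paper's implementation) of the
# hierarchical gating idea: a probabilistic filter estimates the risk of a
# degraded hidden state; a specialized policy is consulted only when that
# risk exceeds a threshold, otherwise a baseline policy acts.
import numpy as np


class DegradationFilter:
    """Toy stand-in for the IOHMM: a 2-state forward filter with fixed,
    assumed transition and Gaussian emission parameters."""

    def __init__(self, p_stay_healthy=0.98, p_stay_degraded=0.95):
        self.T = np.array([[p_stay_healthy, 1 - p_stay_healthy],
                           [1 - p_stay_degraded, p_stay_degraded]])
        self.means = np.array([0.0, 1.0])   # emission means: healthy, degraded
        self.stds = np.array([0.3, 0.3])
        self.belief = np.array([1.0, 0.0])  # start fully healthy

    def update(self, sensor_value):
        # Forward-filter step: predict with the transition model, then
        # reweight by the (unnormalized) Gaussian emission likelihood.
        predicted = self.T.T @ self.belief
        likelihood = np.exp(-0.5 * ((sensor_value - self.means) / self.stds) ** 2)
        posterior = predicted * likelihood
        self.belief = posterior / posterior.sum()
        return self.belief[1]  # probability of the degraded state


def baseline_policy(sensor_value):
    """Default behavior outside critical regions: keep operating."""
    return "continue"


def specialized_policy(sensor_value):
    """Placeholder for the DRL agent (assumed pretrained via behavioral
    cloning); here just a threshold rule so the sketch runs end to end."""
    return "schedule_maintenance" if sensor_value > 1.2 else "reduce_load"


def hierarchical_step(filt, sensor_value, risk_threshold=0.7):
    """Route the decision: specialized agent only when the filter flags risk."""
    p_degraded = filt.update(sensor_value)
    if p_degraded >= risk_threshold:
        return specialized_policy(sensor_value), p_degraded
    return baseline_policy(sensor_value), p_degraded


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    filt = DegradationFilter()
    # Synthetic degradation trajectory: the signal drifts from ~0 toward ~1.5.
    for t, drift in enumerate(np.linspace(0.0, 1.5, 20)):
        obs = drift + rng.normal(scale=0.2)
        action, risk = hierarchical_step(filt, obs)
        print(f"t={t:02d}  obs={obs:+.2f}  P(degraded)={risk:.2f}  action={action}")
```

In the paper the flagged situations would instead trigger the interpretable, behavior-cloned DRL agent on turbofan-engine condition data; the threshold gate above only mirrors the "activate the specialized policy near failure" structure.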
Pages: 28