Improving predictive maintenance: Evaluating the impact of preprocessing and model complexity on the effectiveness of eXplainable Artificial Intelligence methods

Cited by: 0
Authors
Ndao, Mouhamadou Lamine [1 ,2 ]
Youness, Genane [1 ,2 ]
Niang, Ndeye [2 ]
Saporta, Gilbert [2 ]
Affiliations
[1] IDFC, Lab LINEACT CESI, Nanterre, France
[2] Lab CEDRIC MSDMA, Paris, France
Keywords
Predictive maintenance; Data pre-processing; Post-hoc local eXplainable Artificial Intelligence; Evaluation eXplainable Artificial Intelligence metrics; Long Short-Term Memory Neural Network; Useful life prediction
DOI
10.1016/j.engappai.2025.110144
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
Due to their performance in this field, Long Short-Term Memory (LSTM) neural network approaches are often used to predict the remaining useful life (RUL). However, their complexity limits the interpretability of their results, so eXplainable Artificial Intelligence (XAI) methods are used to understand the relationship between the input data and the predicted RUL. Modeling involves making choices, such as preprocessing strategies or model complexity, and understanding how these choices affect the effectiveness of XAI methods is crucial. This paper investigates the impact of two modeling aspects, the preprocessing of multivariate time series and model complexity (specifically the number of hidden layers), on the quality of the explanations provided by three post-hoc local model-agnostic XAI methods (Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Learning to eXplain (L2X)) in the context of RUL prediction. The quality of the XAI methods is evaluated using eleven metrics, categorized under five properties based on the definitions of interpretability and explainability. Experiments on the C-MAPSS dataset for aero-engine prognostics demonstrate that SHAP often provides better explanations when optimized preprocessing parameters are used; however, variations in these preprocessing parameters affect explanation quality. Additionally, the results suggest no significant correlation between the complexity of the LSTM model and explanation quality, although changes in the number of layers notably influence the precision of SHAP's explanations.
Pages: 22
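As a concrete illustration of the pipeline summarized in the abstract, the sketch below trains a small LSTM RUL regressor on synthetic sliding-window sensor data and queries a post-hoc local explainer for a single window. The window length, sensor count, layer size, synthetic data, and the use of shap.GradientExplainer are assumptions made for this example; they are not the preprocessing parameters, architectures, or explainer configurations studied in the paper.

```python
# Minimal sketch (not the authors' code): a post-hoc local explainer applied to an
# LSTM RUL model built on sliding windows of multivariate sensor data.
# Window length, sensor count, layer sizes, and shap.GradientExplainer are
# illustrative assumptions, not choices documented in the paper.
import numpy as np
import shap
from tensorflow import keras

WINDOW, N_SENSORS = 30, 14          # assumed preprocessing parameters

# Toy stand-in for C-MAPSS windows: (samples, time steps, sensors) -> RUL target
X_train = np.random.rand(256, WINDOW, N_SENSORS).astype("float32")
y_train = np.random.rand(256, 1).astype("float32")

# Simple LSTM regressor; the paper varies the number of hidden layers.
model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, N_SENSORS)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=2, batch_size=32, verbose=0)

# Post-hoc local explanation: attributions per (time step, sensor) for one window.
background = X_train[:64]                        # reference sample for the explainer
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(X_train[:1])
print(np.asarray(shap_values).shape)             # attribution per input feature
```

In a comparable setting, perturbation-based methods such as LIME or L2X would typically operate on a flattened (time step x sensor) feature vector or use a dedicated time-series wrapper, which is one reason preprocessing choices can shift explanation quality across methods.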