XNILMBoost: Explainability-informed load disaggregation training enhancement using attribution priors

Cited: 0
Authors
Batic, Djordje [1 ]
Stankovic, Vladimir [1 ]
Stankovic, Lina [1 ]
Affiliation
[1] Univ Strathclyde, Dept Elect & Elect Engn, 204 George St, Glasgow G1 1XW, Scotland
Keywords
Explainable deep learning; Load disaggregation; Non-intrusive load monitoring; Trustworthy artificial intelligence; Neural network; Energy
DOI
10.1016/j.engappai.2024.109766
Chinese Library Classification
TP [Automation technology, computer technology]
Discipline Code
0812
Abstract
In the ongoing energy transition, characterized by increased reliance on distributed renewable sources and smart grid technologies, advanced and trustworthy artificial intelligence (AI) in energy management systems is crucial. Non-intrusive load monitoring (NILM), a method for inferring individual appliance energy consumption from aggregate smart meter data, has gained prominence for enhancing energy efficiency. However, the advanced deep neural network models used in NILM, while effective, raise transparency and trust concerns due to their complexity. This paper introduces a novel explainability-informed NILM training framework, specifically designed for low-frequency NILM. Our approach aligns with principles for trustworthy AI, focusing on human agency and oversight, technical robustness, and transparency, by incorporating explainability directly into the training phase of a NILM model. We propose a novel iterative, explainability-informed NILM training algorithm that uses attribution priors to guide model optimization, and we implement and evaluate the framework across multiple state-of-the-art NILM architectures built on convolutional, recurrent, and dilated causal layers. We introduce a novel Robustness-Trust metric to measure joint improvement in predictive and explainability performance, combining the explainability metrics of faithfulness, robustness, and effective complexity with NILM-specific regression and classification metrics of predictive performance. Results broadly show that robust models achieve better explainability, while explainability-enhanced models can in turn exhibit improved robustness. Together, our results demonstrate significant improvements in the robustness and transparency of NILM systems across appliances, model architectures, measurement scales, building types, and energy usage patterns. This work paves the way for more transparent and trustworthy deployments of AI-driven energy systems.
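
Note on the method: the abstract describes an iterative training loop in which an attribution prior regularizes the standard NILM objective. The paper's exact algorithm is not reproduced in this record; the following is a minimal PyTorch sketch of the general attribution-prior idea, where the toy Seq2PointNILM model, the input-gradient attribution, and the temporal-smoothness prior are illustrative assumptions, not the authors' published implementation.

    # Hypothetical sketch: attribution-prior-regularized NILM training.
    # Model, attribution method, and prior are assumed for illustration.
    import torch
    import torch.nn as nn

    class Seq2PointNILM(nn.Module):
        """Toy convolutional seq2point disaggregator (stand-in architecture)."""
        def __init__(self, window=99):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(16 * window, 1),  # midpoint appliance power estimate
            )

        def forward(self, x):  # x: (batch, 1, window) aggregate power
            return self.net(x)

    def input_gradient_attribution(model, x):
        """Saliency-style attribution: gradient of the output w.r.t. the input."""
        x = x.clone().requires_grad_(True)
        y = model(x).sum()
        (grad,) = torch.autograd.grad(y, x, create_graph=True)
        return grad  # (batch, 1, window), differentiable for the prior term

    def training_step(model, x, target, optimizer, lam=0.1):
        """One explainability-informed step: task loss + attribution prior."""
        optimizer.zero_grad()
        task_loss = nn.functional.mse_loss(model(x), target)
        attr = input_gradient_attribution(model, x)
        # Assumed smoothness prior: penalize jagged attributions over time.
        prior = (attr[..., 1:] - attr[..., :-1]).abs().mean()
        loss = task_loss + lam * prior
        loss.backward()
        optimizer.step()
        return loss.item()

    model = Seq2PointNILM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(8, 1, 99)  # synthetic aggregate windows
    t = torch.randn(8, 1)      # synthetic appliance targets
    print(training_step(model, x, t, opt))

Here the hyperparameter lam trades predictive accuracy against the explainability-motivated prior; per the abstract, the authors apply this kind of explainability-informed optimization iteratively across convolutional, recurrent, and dilated causal architectures.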
Pages: 18