OpenFL-XAI: Federated learning of explainable artificial intelligence models in Python

Cited by: 4
Authors
Daole, Mattia [1 ]
Schiavo, Alessio [1 ,2 ]
Barcena, Jose Luis Corcuera
Ducange, Pietro [1 ]
Marcelloni, Francesco [1 ]
Renda, Alessandro [1 ]
Affiliations
[1] Univ Pisa, Dept Informat Engn, Largo Lucio Lazzarino 1, I-56122 Pisa, Italy
[2] LogObject AG, Ambassador House Thurgauerstr 101 A, CH-8152 Opfikon, Switzerland
Keywords
Federated learning; Explainable AI; Rule-based systems; Linguistic fuzzy models
DOI
10.1016/j.softx.2023.101505
Chinese Library Classification (CLC)
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
Artificial Intelligence (AI) systems play a significant role in manifold decision-making processes in our daily lives, making the trustworthiness of AI increasingly crucial for its widespread acceptance. Among other requirements, privacy and explainability are considered key to enabling trust in AI. Building on these needs, we propose software for Federated Learning (FL) of Rule-Based Systems (RBSs): on the one hand, FL preserves user data privacy during collaborative model training; on the other hand, RBSs are interpretable-by-design models that ensure high transparency in the decision-making process. The proposed software, developed as an extension of the Intel® OpenFL open-source framework, offers a viable solution for developing AI applications that balance accuracy, privacy, and interpretability. © 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
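To illustrate the underlying idea, the following minimal Python sketch shows FedAvg-style aggregation applied to the consequent parameters of a toy rule-based regressor. It is not the OpenFL-XAI or OpenFL API: names such as RuleBasedModel and federated_average are hypothetical, and the local training step is only a placeholder for whatever fitting procedure each data owner runs on its private data.

import numpy as np

class RuleBasedModel:
    """Toy rule-based regressor: one linear consequent vector per rule."""
    def __init__(self, n_rules, n_features, rng):
        # Each row holds the consequent coefficients of one rule.
        self.consequents = rng.normal(size=(n_rules, n_features))

    def local_update(self, X, y, lr=0.01):
        # Placeholder local training step: one gradient pass per rule.
        # Real clients would fit the consequents on their private data.
        for i in range(len(self.consequents)):
            residual = X @ self.consequents[i] - y
            self.consequents[i] -= lr * (X.T @ residual) / len(y)

def federated_average(models, weights):
    """Server-side aggregation: weighted average of the rule consequents
    (FedAvg applied to interpretable rule parameters)."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.stack([m.consequents for m in models]), axes=1)

rng = np.random.default_rng(seed=0)
clients = [RuleBasedModel(n_rules=3, n_features=4, rng=rng) for _ in range(2)]
data = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in clients]

for _ in range(3):                                # federated rounds
    for model, (X, y) in zip(clients, data):
        model.local_update(X, y)                  # training data stays local
    global_rules = federated_average(clients, [len(y) for _, y in data])
    for model in clients:                         # broadcast aggregated rule base
        model.consequents = global_rules.copy()

In the actual software, the analogous client and server roles are played by OpenFL's Collaborator and Aggregator components, with rule-based model parameters exchanged in place of neural network weights; the sketch above only conveys the aggregation principle.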
Pages: 6
Related Papers
50 records in total
  • [31] Editorial: Explainable Artificial Intelligence (XAI) in Systems Neuroscience
    Lombardi, Angela
    Tavares, Joao Manuel R. S.
    Tangaro, Sabina
    FRONTIERS IN SYSTEMS NEUROSCIENCE, 2021, 15
  • [32] eXplainable Artificial Intelligence (XAI) for improving organisational regility
    Shafiabady, Niusha
    Hadjinicolaou, Nick
    Hettikankanamage, Nadeesha
    Mohammadisavadkoohi, Ehsan
    Wu, Robert M. X.
    Vakilian, James
    PLOS ONE, 2024, 19 (04):
  • [33] A Literature Review on Applications of Explainable Artificial Intelligence (XAI)
    Kalasampath, Khushi
    Spoorthi, K. N.
    Sajeev, Sreeparvathy
    Kuppa, Sahil Sarma
    Ajay, Kavya
    Maruthamuthu, Angulakshmi
    IEEE ACCESS, 2025, 13 : 41111 - 41140
  • [34] PhysioEx: a new Python library for explainable sleep staging through deep learning
    Gagliardi, Guido
    Luca Alfeo, Antonio
    Cimino, Mario G. C. A.
    Valenza, Gaetano
    De Vos, Maarten
    PHYSIOLOGICAL MEASUREMENT, 2025, 46 (02)
  • [35] Explainable Artificial Intelligence for Cyber Threat Intelligence (XAI-CTI)
    Samtani, Sagar
    Chen, Hsinchun
    Kantarcioglu, Murat
    Thuraisingham, Bhavani
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2022, 19 (04) : 2149 - 2150
  • [36] Explainable Artificial Intelligence (XAI) in glaucoma assessment: Advancing the frontiers of machine learning algorithms
    Nimmy, Sonia Farhana
    Hussain, Omar K.
    Chakrabortty, Ripon K.
    Saha, Sajib
    KNOWLEDGE-BASED SYSTEMS, 2025, 316
  • [37] Explainable artificial intelligence (XAI) in deep learning-based medical image analysis
    van der Velden, Bas H.M.
    Kuijf, Hugo J.
    Gilhuijs, Kenneth G.A.
    Viergever, Max A.
    Medical Image Analysis, 2022, 79
  • [38] Explainable Artificial Intelligence (XAI) and Machine Learning Technique for Prediction of Properties in Additive Manufacturing
    Abbili, Kiran Kumar
    JOURNAL OF ADVANCED MANUFACTURING SYSTEMS, 2025, 24 (02) : 229 - 240
  • [40] Python-Based Reinforcement Learning on Simulink Models
    Schaefer, Georg
    Schirl, Max
    Rehrl, Jakob
    Huber, Stefan
    Hirlaender, Simon
    COMBINING, MODELLING AND ANALYZING IMPRECISION, RANDOMNESS AND DEPENDENCE, SMPS 2024, 2024, 1458 : 449 - 456