Neural Network Explainable AI Based on Paraconsistent Analysis: An Extension

Cited by: 3
Authors
Marcondes, Francisco S. [1 ]
Duraes, Dalila [1 ]
Santos, Flavio [1 ]
Almeida, Jose Joao [1 ]
Novais, Paulo [1 ]
Affiliations
[1] Univ Minho, ALGORITMI Ctr, P-4710057 Braga, Portugal
Keywords
paraconsistent logic; explainable AI; neural network;
DOI
10.3390/electronics10212660
CLC number
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
This paper explores the use of paraconsistent analysis for assessing neural networks from an explainable AI perspective. It is an early exploration paper aiming to understand whether paraconsistent analysis can be applied to understanding neural networks and whether the subject is worth developing further in future research. The answers to both questions are affirmative. Paraconsistent analysis provides insightful prediction visualisation through a mature formal framework that offers proper support for reasoning. The significant potential envisioned is that paraconsistent analysis will be used to guide neural network development projects, despite the performance issues. This paper provides two explorations. The first was a baseline experiment based on MNIST for establishing the link between paraconsistency and neural networks. The second experiment aimed to detect violence in audio files, verifying whether the paraconsistent framework scales to industry-level problems. The conclusion drawn from this early assessment is that further research on this subject is worthwhile and may eventually result in a significant contribution to the field.
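As a point of reference for the abstract, the sketch below illustrates how paraconsistent annotated logic (PAL2v) is commonly used to assess a classifier's output: from favorable evidence mu and unfavorable evidence lambda (both in [0, 1]) one computes a degree of certainty Gc = mu - lambda and a degree of contradiction Gct = mu + lambda - 1. The mapping from softmax probabilities to evidence values and the function names here are illustrative assumptions, not the authors' method from the paper.

```python
# Hypothetical illustration: paraconsistent annotated logic (PAL2v) degrees
# computed from a classifier's softmax output. The evidence mapping below is
# an assumption for illustration only.
from typing import List, Tuple


def paraconsistent_degrees(mu: float, lam: float) -> Tuple[float, float]:
    """Standard PAL2v degrees from favorable (mu) and unfavorable (lam)
    evidence, both in [0, 1]."""
    certainty = mu - lam            # Gc in [-1, 1]: +1 ~ true, -1 ~ false
    contradiction = mu + lam - 1.0  # Gct in [-1, 1]: +1 inconsistent, -1 indeterminate
    return certainty, contradiction


def analyse_prediction(softmax: List[float], target: int) -> Tuple[float, float]:
    """Illustrative evidence mapping: favorable evidence is the probability of
    the target class; unfavorable evidence is the strongest competing class."""
    mu = softmax[target]
    lam = max(p for i, p in enumerate(softmax) if i != target)
    return paraconsistent_degrees(mu, lam)


if __name__ == "__main__":
    # A confident, consistent prediction vs. an ambiguous one.
    print(analyse_prediction([0.05, 0.90, 0.05], target=1))  # high certainty, near-zero contradiction
    print(analyse_prediction([0.45, 0.48, 0.07], target=1))  # near-zero certainty: ambiguous prediction
```

Under this reading, predictions can be plotted in the certainty/contradiction plane to visualise how decisively and how consistently a network supports each decision, which is the kind of prediction visualisation the abstract refers to.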
Pages: 12
Related papers
50 records in total
  • [21] Utilizing Explainable AI for improving the Performance of Neural Networks
    Sun, Huawei
    Servadei, Lorenzo
    Feng, Hao
    Stephan, Michael
    Santra, Avik
    Wille, Robert
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 1775 - 1782
  • [22] Explainable AI for trustworthy image analysis
    Turley, Jordan E.
    Dunne, Jeffrey A.
    Woods, Zerotti
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V, 2023, 12538
  • [23] Defining Explainable AI for Requirements Analysis
    Sheh, Raymond
    Monteath, Isaac
    KUNSTLICHE INTELLIGENZ, 2018, 32 (04): : 261 - 266
  • [24] Exploring explainable AI: a bibliometric analysis
    Sharma, Chetan
    Sharma, Shamneesh
    Sharma, Komal
    Sethi, Ganesh Kumar
    Chen, Hsin-Yuan
    DISCOVER APPLIED SCIENCES, 2024, 6 (11)
  • [25] FP-CNN: Fuzzy pooling-based convolutional neural network for lung ultrasound image classification with explainable AI
    Hasan, Md Mahmodul
    Hossain, Muhammad Minoar
    Rahman, Mohammad Motiur
    Azad, Akm
    Alyami, Salem A.
    Moni, Mohammad Ali
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 165
  • [26] A Neural-Network Approach for Speech Features Classification based on Paraconsistent Logic.
    Barbon Junior, Sylvio
    Guido, Rodrigo Capobianco
    Vieira, Lucimar Sasso
    2009 11TH IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA (ISM 2009), 2009, : 567 - +
  • [27] New Media Advertising Communication Analysis Model Based on Extension Neural Network
    Zhang, Zhe
    SCIENTIFIC PROGRAMMING, 2021, 2021
  • [28] Paraconsistent Artificial Neural Networks and EEG Analysis
    Abe, Jair Minoro
    Lopes, Helder F. S.
    Nakamatsu, Kazumi
    Akama, Seiki
    KNOWLEDGE-BASED AND INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT III, 2010, 6278 : 164 - +
  • [29] DyFiP: Explainable AI-based Dynamic Filter Pruning of Convolutional Neural Networks
    Sabih, Muhammad
    Hannig, Frank
    Teich, Juergen
    PROCEEDINGS OF THE 2022 2ND EUROPEAN WORKSHOP ON MACHINE LEARNING AND SYSTEMS (EUROMLSYS '22), 2022, : 109 - 115
  • [30] Extraction of Competitive Factors in a Competitor Analysis Using an Explainable Neural Network
    Lee, Younghoon
    NEURAL PROCESSING LETTERS, 2021, 53 (03) : 1979 - 1994