Neural Network Explainable AI Based on Paraconsistent Analysis: An Extension

Cited by: 3
Authors
Marcondes, Francisco S. [1 ]
Duraes, Dalila [1 ]
Santos, Flavio [1 ]
Almeida, Jose Joao [1 ]
Novais, Paulo [1 ]
Affiliation
[1] Univ Minho, ALGORITMI Ctr, P-4710057 Braga, Portugal
Keywords
paraconsistent logic; explainable AI; neural network;
DOI
10.3390/electronics10212660
CLC number
TP [Automation technology; Computer technology]
Subject classification
0812
Abstract
This paper explores the use of paraconsistent analysis for assessing neural networks from an explainable AI perspective. It is an early exploration aiming to understand whether paraconsistent analysis can be applied to understanding neural networks and whether the subject is worth developing further in future research. The answers to these two questions are affirmative. Paraconsistent analysis provides insightful prediction visualisation through a mature formal framework that offers proper support for reasoning. The significant potential envisioned is that paraconsistent analysis can be used to guide neural network development projects, despite its performance issues. The paper presents two explorations. The first is a baseline experiment on MNIST that establishes the link between paraconsistency and neural networks. The second experiment aims to detect violence in audio files, verifying whether the paraconsistent framework scales to industry-level problems. The conclusion drawn from this early assessment is that further research on this subject is worthwhile and may eventually result in a significant contribution to the field.
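As context for the abstract, the following is a minimal sketch of the kind of paraconsistent analysis referred to. In paraconsistent annotated evidential logic, a favourable evidence degree mu and an unfavourable evidence degree lambda yield a certainty degree Gc = mu - lambda and a contradiction degree Gct = mu + lambda - 1, which can be plotted to visualise how decisive a prediction is. The mapping assumed below (mu and lambda taken from the two strongest softmax scores of a classifier) is an illustrative assumption for this sketch and is not taken from the paper.

# Minimal sketch of paraconsistent degrees for a single network prediction.
# Assumption (not the authors' exact method): favourable evidence mu is the
# top softmax score and unfavourable evidence lambda is the runner-up score.
import numpy as np

def paraconsistent_degrees(softmax_scores):
    """Return (certainty Gc, contradiction Gct) for one softmax output."""
    scores = np.sort(np.asarray(softmax_scores, dtype=float))[::-1]
    mu, lam = scores[0], scores[1]     # assumed evidence mapping
    certainty = mu - lam               # Gc  = mu - lambda, in [-1, 1]
    contradiction = mu + lam - 1.0     # Gct = mu + lambda - 1, in [-1, 1]
    return certainty, contradiction

# A confident prediction versus an ambiguous one.
print(paraconsistent_degrees([0.90, 0.05, 0.05]))  # (0.85, -0.05): high certainty
print(paraconsistent_degrees([0.45, 0.40, 0.15]))  # (0.05, -0.15): low certainty

Plotting these two degrees for every prediction in a test set gives the kind of prediction visualisation the abstract describes, separating confident, ambiguous, and information-poor outputs on the paraconsistent lattice.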
Pages: 12
Related papers (50 in total)
  • [1] Neural Network eXplainable AI Based on Paraconsistent Analysis - an Initial Approach
    Marcondes, Francisco S.
    Duraes, Dalila
    Gomes, Marco
    Santos, Flavio
    Almeida, Jose Joao
    Novais, Paulo
    SUSTAINABLE SMART CITIES AND TERRITORIES, 2022, 253 : 139 - 149
  • [2] Using Explainable AI for Neural Network-Based Network Attack Detection
    Zou, Qingtian
    Zhang, Lan
    Sun, Xiaoyan
    Singhal, Anoop
    Liu, Peng
    COMPUTER, 2024, 57 (05) : 78 - 85
  • [3] Comprehensive gene regulatory network analysis based on explainable AI
    Park, Heewon
    Maruhashi, Koji
    Yamaguchi, Rui
    Imoto, Seiya
    Miyano, Satoru
    CANCER SCIENCE, 2022, 113 : 870 - 870
  • [4] Paraconsistent artificial neural network:: An application in cephalometric analysis
    Abe, JM
    Ortega, NRS
    Mário, MC
    Del Santo, M
    KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 2, PROCEEDINGS, 2005, 3682 : 716 - 723
  • [5] Development of Neural Network Model With Explainable AI for Measuring Uranium Enrichment
    Ryu, Jichang
    Park, Chanjun
    Park, Jungsuk
    Cho, Namchan
    Park, Jaehyun
    Cho, Gyuseong
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2021, 68 (11) : 2670 - 2681
  • [6] Bridging the trust gap: Evaluating feature relevance in neural network-based gear wear mechanism analysis with explainable AI
    Herwig, Nico
    Peng, Zhongxiao
    Borghesani, Pietro
    TRIBOLOGY INTERNATIONAL, 2023, 187
  • [7] Explainable AI to Facilitate Understanding of Neural Network-Based Metabolite Profiling Using NMR Spectroscopy
    Johnson, Hayden
    Tipirneni-Sajja, Aaryani
    METABOLITES, 2024, 14 (06)
  • [8] Paraconsistent artificial neural network: Applicability in computer analysis of speech productions
    Abe, Jair Minoro
    Almeida Prado, Joao Carlos
    Nakamatsu, Kazumi
    KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 2, PROCEEDINGS, 2006, 4252 : 844 - 850
  • [9] Explainable Neural Network analysis on Movie Success Prediction
    Kumar, S. Bhavesh
    Pande, Sagar Dhanraj
    EAI ENDORSED TRANSACTIONS ON SCALABLE INFORMATION SYSTEMS, 2024, 11 (04):
  • [10] Hierarchical Attention based Neural Network for Explainable Recommendation
    Cong, Dawei
    Zhao, Yanyan
    Qin, Bing
    Han, Yu
    Zhang, Murray
    Liu, Alden
    Chen, Nat
    ICMR'19: PROCEEDINGS OF THE 2019 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, 2019, : 373 - 381