Improve the interpretability of convolutional neural networks with probability density function

Cited by: 0
Authors
Chen, Yueqi [1 ,2 ]
Pan, Tingting [3 ]
Yang, Jie [1 ,2 ]
Affiliations
[1] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Liaoning, Peoples R China
[2] Key Lab Computat Math & Data Intelligence Liaoning, Dalian 116024, Liaoning, Peoples R China
[3] Dalian Polytech Univ, Dept Basic Courses Teaching, Dalian 116034, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Interpretability; Convolutional neural networks; Probability density function;
DOI
10.1016/j.ins.2024.121796
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Convolutional Neural Networks (CNNs) have achieved extensive success in numerous practical applications, yet their limited interpretability remains a significant barrier to their adoption in certain critical fields. Improving the interpretability of CNNs is therefore a highly compelling research topic. This paper examines, from a statistical perspective, the interpretability of a basic CNN consisting of a convolution-pooling block and a fully connected layer. Assuming that the input variables follow a normal distribution and are mutually independent, the output variables after the convolution and pooling layers also follow a normal distribution, and the probability density function (pdf) of the final output variable belongs to an exponential family. By introducing intermediate variables, the pdf of this output variable can be expressed as a linear combination of three distinct normal distributions, and the likelihood of the predicted class label can be rewritten as a cumulative distribution function (cdf) of the standard normal distribution. The originality of this paper lies in providing a more intuitive perspective for dissecting the operating mechanism of CNNs, analyzing them layer by layer to improve their interpretability. Experimental results on an artificial dataset and on the image datasets CIFAR-10 and ImageNet further validate these conclusions.
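One claim in the abstract — that convolution and pooling map independent normal inputs to normally distributed outputs — can be checked numerically for the linear case. The sketch below is not code from the paper: it uses a hypothetical width-3 kernel with average pooling and NumPy, and compares the empirical variance of the pooled output against the closed-form variance of a linear map of Gaussian inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# The abstract's assumption: input variables are i.i.d. standard normal.
X = rng.standard_normal((100_000, 4))      # 100k samples of a 4-dim input

# Hypothetical 1-D convolution kernel (width 3, 'valid' padding): two
# conv outputs per sample, merged into one by average pooling.
w = np.array([0.5, -1.0, 0.25])
conv = np.stack([X[:, 0:3] @ w, X[:, 1:4] @ w], axis=1)
pooled = conv.mean(axis=1)                 # average pooling over the pair

# Convolution followed by average pooling is one linear map of the
# Gaussian input, so the pooled output is again Gaussian with variance
# (Var + Var + 2*Cov) / 4, where Var = ||w||^2 for each conv output and
# Cov = w0*w1 + w1*w2 comes from the overlapping kernel positions.
var_theory = (2 * (w @ w) + 2 * (w[0] * w[1] + w[1] * w[2])) / 4
print(round(float(pooled.var()), 3), round(float(var_theory), 3))
```

Note that this linearity argument covers average pooling only; max pooling is nonlinear, so the distribution of its output requires a separate analysis.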
Pages: 16
Related Papers (50 total)
  • [1] Interpretability of Neural Networks with Probability Density Functions
    Pan, Tingting
    Pedrycz, Witold
    Cui, Jiahui
    Yang, Jie
    Wu, Wei
    ADVANCED THEORY AND SIMULATIONS, 2022, 5 (03)
  • [2] Interpretability for Neural Networks from the Perspective of Probability Density
    Lu, Lu
    Pan, Tingting
    Zhao, Junhong
    Yang, Jie
    2019 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2019), 2019, : 1502 - 1507
  • [3] Radial Basis Function Networks for Convolutional Neural Networks to Learn Similarity Distance Metric and Improve Interpretability
    Amirian, Mohammadreza
    Schwenker, Friedhelm
    IEEE ACCESS, 2020, 8 : 123087 - 123097
  • [4] Progress in Interpretability Research of Convolutional Neural Networks
    Zhang, Wei
    Cai, Lizhi
    Chen, Mingang
    Wang, Naiqi
    MOBILE COMPUTING, APPLICATIONS, AND SERVICES, MOBICASE 2019, 2019, 290 : 155 - 168
  • [5] Less Is More Important: An Attention Module Guided by Probability Density Function for Convolutional Neural Networks
    Xie, Jingfen
    Zhang, Jian
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3, 2023, : 2947 - 2955
  • [6] Interpretability Analysis of Convolutional Neural Networks for Crack Detection
    Wu, Jie
    He, Yongjin
    Xu, Chengyu
    Jia, Xiaoping
    Huang, Yule
    Chen, Qianru
    Huang, Chuyue
    Eslamlou, Armin Dadras
    Huang, Shiping
    BUILDINGS, 2023, 13 (12)
  • [7] A probability density function generator based on neural networks
    Chen, Chi-Hua
    Song, Fangying
    Hwang, Feng-Jang
    Wu, Ling
    PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2020, 541
  • [8] Estimate of a Probability Density Function through Neural Networks
    Reyneri, Leonardo
    Colla, Valentina
    Vannucci, Marco
    ADVANCES IN COMPUTATIONAL INTELLIGENCE, IWANN 2011, PT I, 2011, 6691 : 57 - 64
  • [9] Semantic Interpretability of Convolutional Neural Networks by Taxonomy Extraction
    Horta, Vitor A. C.
    Sobczyk, Robin
    Stol, Maarten C.
    Mileo, Alessandra
    NEURAL-SYMBOLIC LEARNING AND REASONING 2023, NESY 2023, 2023,
  • [10] Structural Compression of Convolutional Neural Networks with Applications in Interpretability
    Abbasi-Asl, Reza
    Yu, Bin
    FRONTIERS IN BIG DATA, 2021, 4