Improve the interpretability of convolutional neural networks with probability density function

Cited by: 0
Authors
Chen, Yueqi [1 ,2 ]
Pan, Tingting [3 ]
Yang, Jie [1 ,2 ]
Affiliations
[1] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Liaoning, Peoples R China
[2] Key Lab Computat Math & Data Intelligence Liaoning, Dalian 116024, Liaoning, Peoples R China
[3] Dalian Polytech Univ, Dept Basic Courses Teaching, Dalian 116034, Peoples R China
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Interpretability; Convolutional neural networks; Probability density function;
DOI
10.1016/j.ins.2024.121796
CLC classification number
TP [Automation technology; computer technology];
Discipline classification code
0812 ;
Abstract
Convolutional Neural Networks (CNNs) have achieved extensive success in numerous practical applications. Nevertheless, their limited interpretability remains a significant barrier to further progress in certain crucial fields, making the improvement of CNN interpretability an exceptionally compelling topic at present. This paper explores the interpretability of a basic CNN, consisting of a convolution-pooling block and a fully connected layer, from a statistical perspective. Assuming that the input variables follow a normal distribution and are independent of each other, the output variables after the convolution and pooling layers also follow a normal distribution, while the probability density function (pdf) of the final output variable belongs to an exponential family. By introducing intermediate variables, the pdf of this output variable can be expressed as a linear combination of three distinct normal distributions, and the likelihood of the predicted class label can be rewritten as a cumulative distribution function (cdf) of the standard normal distribution. The originality of this paper lies in providing a more innovative and intuitive perspective for dissecting the operational mechanism of CNNs, analyzing them layer by layer to improve their interpretability. Experimental results on both an artificial dataset and the image datasets CIFAR-10 and ImageNet further validate these conclusions.
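The abstract's starting point — that the output of a convolution layer applied to independent normal inputs is itself normal — can be checked numerically. The sketch below is not the paper's method; it is a minimal illustration, with hypothetical weights and distribution parameters, of the underlying closure property: a convolution output at one spatial location is a weighted sum w·x + b of independent N(μ, σ²) inputs, hence normal with mean μ·Σwᵢ + b and variance σ²·Σwᵢ².

```python
import numpy as np

rng = np.random.default_rng(0)

# Independent, identically distributed normal inputs, as assumed in the paper's setting.
n_samples = 200_000
mu, sigma = 1.0, 2.0
x = rng.normal(loc=mu, scale=sigma, size=(n_samples, 5))

# A convolution output at a single location is a weighted sum w.x + b,
# so it is again normal with mean mu * sum(w) + b and variance sigma^2 * sum(w^2).
w = np.array([0.5, -1.0, 0.25, 2.0, -0.75])  # hypothetical kernel weights
b = 0.3                                       # hypothetical bias
y = x @ w + b

mean_theory = mu * w.sum() + b            # = 1.3
var_theory = sigma ** 2 * np.sum(w ** 2)  # = 23.5

print("empirical mean %.3f vs. theory %.3f" % (y.mean(), mean_theory))
print("empirical var  %.3f vs. theory %.3f" % (y.var(), var_theory))
```

The same closure argument extends to average pooling (another linear map), which is why the normality assumption propagates through the convolution-pooling block described in the abstract.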
Pages: 16