These do not Look Like Those: An Interpretable Deep Learning Model for Image Recognition

Cited by: 36
Authors
Singh, Gurmail [1 ]
Yow, Kin-Choong [2 ]
Affiliations
[1] Univ Regina, Dept Comp Sci, Regina, SK S4S 0A2, Canada
[2] Univ Regina, Fac Engn & Appl Sci, Regina, SK S4S 0A2, Canada
Source
IEEE ACCESS | 2021, Vol. 9, Issue 09
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
Covid-19; pneumonia; image recognition; X-ray; prototypical part
DOI
10.1109/ACCESS.2021.3064838
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
An interpretation of the reasoning process behind a prediction made by a deep learning model is always desirable. However, when the predictions of a deep learning model directly impact people's lives, such an interpretation becomes a necessity. In this paper, we introduce a deep learning model: the negative-positive prototypical part network (NP-ProtoPNet). This model attempts to imitate human reasoning for image recognition by comparing the parts of a test image with the corresponding parts of images from known classes. We demonstrate our model on a dataset of chest X-ray images of Covid-19 patients, pneumonia patients and normal people. The accuracy and precision that our model achieves are on par with those of the best-performing non-interpretable deep learning models.
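The part-comparison mechanism described in the abstract follows the ProtoPNet family: learned prototypical parts are compared against every spatial patch of a test image's feature map, and the best match per prototype becomes a similarity score. The sketch below is not the authors' code; the class name PrototypeLayer, the layer sizes, and the log-based activation are illustrative assumptions drawn from the general ProtoPNet design that NP-ProtoPNet builds on.

```python
# Minimal sketch (assumed, not from the paper) of a ProtoPNet-style prototype layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeLayer(nn.Module):
    def __init__(self, num_prototypes=30, channels=128, eps=1e-4):
        super().__init__()
        # One learnable prototypical part per row: shape (P, C, 1, 1).
        self.prototypes = nn.Parameter(torch.rand(num_prototypes, channels, 1, 1))
        self.eps = eps

    def forward(self, feats):  # feats: (B, C, H, W) conv feature map of a test image
        # Squared L2 distance between every patch and every prototype,
        # via the ||x||^2 - 2<x, p> + ||p||^2 expansion.
        x_sq = (feats ** 2).sum(dim=1, keepdim=True)                         # (B, 1, H, W)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)   # (1, P, 1, 1)
        xp = F.conv2d(feats, self.prototypes)                                # (B, P, H, W)
        dists = F.relu(x_sq - 2 * xp + p_sq)
        # Smallest distance over all patches -> one similarity score per prototype.
        min_d = -F.max_pool2d(-dists, kernel_size=dists.shape[2:]).flatten(1)  # (B, P)
        return torch.log((min_d + 1) / (min_d + self.eps))
```

A final fully connected layer would map these prototype similarities to class logits; as the paper's title suggests, NP-ProtoPNet also exploits negative evidence ("these do not look like those"), which in this family of models corresponds to retaining negatively weighted prototype-to-class connections rather than pruning them.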
Pages: 41482 - 41493
Number of pages: 12
Related Papers
50 in total
  • [41] Image Recognition Technology Based on Deep Learning
    Cheng, Fuchao
    Zhang, Hong
    Fan, Wenjie
    Harris, Barry
    WIRELESS PERSONAL COMMUNICATIONS, 2018, 102 (02) : 1917 - 1933
  • [42] Deep Active Transfer Learning for Image Recognition
    Singh, Ankita
    Chakraborty, Shayok
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [43] Deep Residual Learning for Image Recognition: A Survey
    Shafiq, Muhammad
    Gu, Zhaoquan
    APPLIED SCIENCES-BASEL, 2022, 12 (18):
  • [44] An Interpretable Deep Bayesian Model for Facial Micro-Expression Recognition
    Wang, Chenfeng
    Gao, Xiaoguang
    Li, Xinyu
    2023 8TH INTERNATIONAL CONFERENCE ON CONTROL AND ROBOTICS ENGINEERING, ICCRE, 2023, : 91 - 94
  • [45] An interpretable deep learning model to map land subsidence hazard
    Rahmani, Paria
    Gholami, Hamid
    Golzari, Shahram
    ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH, 2024, 31 (11) : 17372 - 17386
  • [46] Interpretable Deep Learning Model for the Detection and Reconstruction of Dysarthric Speech
    Korzekwa, Daniel
    Barra-Chicote, Roberto
    Kostek, Bozena
    Drugman, Thomas
    Lajszczak, Mateusz
    INTERSPEECH 2019, 2019, : 3890 - 3894
  • [47] Bayesian deep learning: A model-based interpretable approach
    Matsubara, Takashi
    IEICE NONLINEAR THEORY AND ITS APPLICATIONS, 2020, 11 (01): : 16 - 35
  • [48] Interpretable Deep Learning Prediction Model for Compressive Strength of Concrete
    Zhang, Wei-Qi
    Wang, Hui-Ming
Dongbei Daxue Xuebao/Journal of Northeastern University, 2024, 45 (05) : 738 - 744
  • [49] AN INTERPRETABLE DEEP LEARNING MODEL TO PREDICT SYMPTOMATIC KNEE OSTEOARTHRITIS
    Zokaeinikoo, M.
    Li, X.
    Yang, M.
    OSTEOARTHRITIS AND CARTILAGE, 2021, 29 : S354 - S354
  • [50] Deep PLS: A Lightweight Deep Learning Model for Interpretable and Efficient Data Analytics
    Kong, Xiangyin
    Ge, Zhiqiang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (11) : 8923 - 8937