ENHANCING INTERPRETABILITY AND FIDELITY IN CONVOLUTIONAL NEURAL NETWORKS THROUGH DOMAIN-INFORMED KNOWLEDGE INTEGRATION

Cited: 0
Authors
Agbangba, Codjo Emile [1 ]
Toha, Rodeo Oswald Y. [2 ]
Bello, Abdou Wahidi [3 ]
Adetola, Jamal [2 ]
Affiliations
[1] Univ Abomey Calavi, Lab Biomath & Estimat Forestieres, Calavi, Benin
[2] Univ Natl Sci Technol Ingn & Math, Ecole Natl Super Genie Math & Modelisat, Abomey, Benin
[3] Univ Abomey Calavi, Fac Sci & Tech, Calavi, Benin
Keywords
intelligent agriculture; image classification; convolutional neural networks (CNN); plant diseases; initialization; heatmaps;
DOI
10.17654/0972361724062
CLC classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Subject classification codes
020208 ; 070103 ; 0714 ;
Abstract
This study addresses the need for robust disease detection methods in vegetable crops by introducing a novel initialization method for convolutional neural networks (CNNs). Rather than creating a new CNN architecture, our approach infuses expert knowledge from phytopathology directly into the model's foundation. This initialization ensures that the CNN possesses a contextual understanding of the intricate disease patterns specific to tomatoes. Additionally, our study redefines the role of heatmaps as a dynamic metric for assessing model fidelity in real time. Unlike traditional post hoc applications, heatmaps are integrated into the model evaluation process, providing insights into decision-making processes and alignment with expert-derived expectations. This dual innovation aims to enhance transparency and fidelity in CNNs, offering a nuanced and effective solution for disease detection in agriculture. The study contributes to advancing artificial intelligence applications in agriculture by providing accurate predictions and a deeper understanding of the underlying decision mechanisms crucial for crop health management.
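The two ideas in the abstract — seeding convolutional filters from expert-derived disease patterns, and scoring heatmaps against expert annotations during evaluation — can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the authors' implementation: the template format, the noise-filled remaining filters, and the IoU-based fidelity score are all hypothetical choices standing in for the paper's method.

```python
import numpy as np

def expert_initialized_filters(templates, num_filters, size=3, noise=0.05, seed=0):
    """Initialize conv kernels from expert-derived pattern templates.

    templates: list of (size, size) arrays encoding, e.g., lesion edges or
    spot textures supplied by a phytopathologist (hypothetical input format).
    Filters beyond the templates are filled with small random noise; every
    kernel is zero-centered and unit-normalized so training starts
    well-conditioned.
    """
    rng = np.random.default_rng(seed)
    filters = []
    for i in range(num_filters):
        if i < len(templates):
            k = np.asarray(templates[i], dtype=float)
        else:
            k = rng.normal(0.0, noise, size=(size, size))
        k = k - k.mean()                # zero-center
        norm = np.linalg.norm(k)
        if norm > 0:
            k = k / norm                # unit-normalize
        filters.append(k)
    return np.stack(filters)

def heatmap_fidelity(heatmap, expert_mask, threshold=0.5):
    """Fidelity score: IoU between the thresholded model heatmap and an
    expert-annotated disease region, usable as an evaluation metric during
    training rather than only as a post hoc visualization."""
    h = heatmap >= threshold
    m = expert_mask.astype(bool)
    union = np.logical_or(h, m).sum()
    if union == 0:
        return 1.0                      # both empty: trivially aligned
    return float(np.logical_and(h, m).sum()) / union
```

A high fidelity score would indicate that the regions driving the model's prediction coincide with those a phytopathologist would flag, which is the alignment the abstract proposes to monitor in real time.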
Pages: 1165-1194
Page count: 30