Stay Focused - Enhancing Model Interpretability Through Guided Feature Training

Cited by: 2
Authors
Jenke, Alexander C. [1 ]
Bodenstedt, Sebastian [1 ]
Wagner, Martin [2 ]
Brandenburg, Johanna M. [2 ]
Stern, Antonia [3 ]
Muendermann, Lars [3 ]
Distler, Marius [4 ]
Weitz, Jurgen [4 ]
Mueller-Stich, Beat P. [2 ]
Speidel, Stefanie [1 ]
Affiliations
[1] Natl Ctr Tumor Dis NCT, Partner Site Dresden, Dept Translat Surg Oncol, Dresden, Germany
[2] Heidelberg Univ, Dept Gen, Visceral & Transplantat Surg, Heidelberg, Germany
[3] KARL STORZ SE Co KG, Tuttlingen, Germany
[4] Tech Univ Dresden, Univ Hosp Carl Gustav Carus, Dept Visceral, Thorac & Vasc Surg,Fac Med, Dresden, Germany
Keywords
Explainable artificial intelligence; Surgical data science; Instrument presence detection; Computer-assisted surgery;
DOI
10.1007/978-3-031-16437-8_12
CLC Classification Number
R445 [Diagnostic Imaging];
Discipline Classification Code
100207;
Abstract
In computer-assisted surgery, artificial intelligence (AI) methods need to be interpretable, as a clinician has to understand a model's decision. To improve the visual interpretability of convolutional neural networks, we propose to indirectly guide the feature development process of the model with augmented training data in which unimportant regions of an image have been blurred. On a public dataset, we show that our proposed training workflow results in better visual interpretability of the model and improves overall model performance. To numerically evaluate heat maps produced by explainable AI methods, we propose a new metric that evaluates focus with regard to a mask of the region of interest. Further, we show that the resulting model is more robust against changes in the background: by concentrating the features on the important areas of the scene, it generalizes better.
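The abstract describes two concrete ingredients: an augmentation that blurs everything outside a region-of-interest (ROI) mask, and a focus metric scoring how much of an explanation heat map falls inside that mask. The sketch below illustrates both in plain Python; the function names, the box-blur stand-in (the abstract does not specify the blur kernel), and the exact metric formula are assumptions for illustration, not the authors' implementation.

```python
def box_blur(img, radius=1):
    """Blur a 2D grayscale image (list of lists of floats) with a box filter."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def blur_background(img, roi_mask, radius=1):
    """Augmentation: keep ROI pixels sharp, replace the rest with a blurred copy."""
    blurred = box_blur(img, radius)
    h, w = len(img), len(img[0])
    return [[img[y][x] if roi_mask[y][x] else blurred[y][x]
             for x in range(w)] for y in range(h)]

def focus_score(heatmap, roi_mask):
    """Fraction of total heat-map mass inside the ROI mask.
    1.0 means the explanation is fully focused on the region of interest."""
    total = sum(v for row in heatmap for v in row)
    inside = sum(v for row, mrow in zip(heatmap, roi_mask)
                 for v, m in zip(row, mrow) if m)
    return inside / total if total else 0.0
```

A heat map concentrated entirely inside the mask scores 1.0, while a uniform heat map scores the area fraction of the mask, which is the intuition behind evaluating "focus" numerically.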
Pages: 121-129
Number of pages: 9
Related Papers
50 items in total
  • [11] A fuzzy clustering algorithm enhancing local model interpretability
    J. L. Díez
    J. L. Navarro
    A. Sala
    Soft Computing, 2007, 11 : 973 - 983
  • [12] Harmonizing Feature Attributions Across Deep Learning Architectures: Enhancing Interpretability and Consistency
    Kadir, Md Abdul
    Addluri, GowthamKrishna
    Sonntag, Daniel
    ADVANCES IN ARTIFICIAL INTELLIGENCE, KI 2023, 2023, 14236 : 90 - 97
  • [13] Focused Dialogues in Training Contexts: A Model for Enhancing Reflection in Therapist's Professional Practice
    Laitila A.
    Oranen M.
    Contemporary Family Therapy, 2013, 35 (3) : 599 - 612
  • [14] Spectral Zones-Based SHAP/LIME: Enhancing Interpretability in Spectral Deep Learning Models Through Grouped Feature Analysis
    Contreras, Jhonatan
    Winterfeld, Andreea
    Popp, Juergen
    Bocklitz, Thomas
    ANALYTICAL CHEMISTRY, 2024, 96 (39) : 15588 - 15597
  • [15] Enhancing citrus surface defects detection: A priori feature guided semantic segmentation model
    Xu, Xufeng
    Xu, Tao
    Wei, Zichao
    Li, Zetong
    Wang, Yafei
    Rao, Xiuqin
    ARTIFICIAL INTELLIGENCE IN AGRICULTURE, 2025, 15 (01): : 67 - 78
  • [16] Model Interpretability through the Lens of Computational Complexity
    Barcelo, Pablo
    Monet, Mikael
    Perez, Jorge
    Subercaseaux, Bernardo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [17] Enhancing predictive models for sarcopenia: Suggestions for improved interpretability, feature inclusion, and stratified analyses
    Wei, Ruigang
    GERIATRICS & GERONTOLOGY INTERNATIONAL, 2024, 24 (08) : 818 - 818
  • [18] Enhancing Quality Through Training
    Haigney, Susan
    Pharmaceutical Technology, 2024, 48 (12) : 10 - 15
  • [19] Discriminative feature transformation by guided discriminative training
    Hsiao, R
    Mak, B
    2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL I, PROCEEDINGS: SPEECH PROCESSING, 2004, : 897 - 900
  • [20] Enhancing attention through training
    Posner, Michael I.
    Rothbart, Mary K.
    Tang, Yi-Yuan
    CURRENT OPINION IN BEHAVIORAL SCIENCES, 2015, 4 : 1 - 5