Stay Focused - Enhancing Model Interpretability Through Guided Feature Training

Cited by: 2
Authors
Jenke, Alexander C. [1 ]
Bodenstedt, Sebastian [1 ]
Wagner, Martin [2 ]
Brandenburg, Johanna M. [2 ]
Stern, Antonia [3 ]
Muendermann, Lars [3 ]
Distler, Marius [4 ]
Weitz, Jurgen [4 ]
Mueller-Stich, Beat P. [2 ]
Speidel, Stefanie [1 ]
Institutions
[1] Natl Ctr Tumor Dis NCT, Partner Site Dresden, Dept Translat Surg Oncol, Dresden, Germany
[2] Heidelberg Univ, Dept Gen, Visceral & Transplantat Surg, Heidelberg, Germany
[3] KARL STORZ SE & Co. KG, Tuttlingen, Germany
[4] Tech Univ Dresden, Univ Hosp Carl Gustav Carus, Dept Visceral, Thorac & Vasc Surg, Fac Med, Dresden, Germany
Keywords
Explainable artificial intelligence; Surgical data science; Instrument presence detection; Computer-assisted surgery;
DOI
10.1007/978-3-031-16437-8_12
Chinese Library Classification
R445 [Diagnostic Imaging]
Subject Classification Code
100207
Abstract
In computer-assisted surgery, artificial intelligence (AI) methods need to be interpretable, as a clinician has to understand a model's decision. To improve the visual interpretability of convolutional neural networks, we propose to indirectly guide the model's feature development with augmented training data in which unimportant regions of an image have been blurred. On a public dataset, we show that the proposed training workflow yields better visual interpretability and improves overall model performance. To numerically evaluate the heat maps produced by explainable AI methods, we propose a new metric that measures how strongly they focus on a mask of the region of interest. Further, we show that the resulting model is more robust to changes in the background, since its features are concentrated on the important areas of the scene, which improves generalization.
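The record does not include code, so the following is only a minimal sketch of the two ideas described in the abstract, assuming NumPy/SciPy, an H x W x C image array, and a binary region-of-interest mask. The function names, the blur strength sigma, and the "share of heat-map mass inside the mask" formulation of the focus metric are illustrative assumptions, not the authors' exact implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def blur_outside_roi(image, roi_mask, sigma=15.0):
    # Augmentation step (sketch): keep the region of interest sharp and
    # replace everything outside it with a Gaussian-blurred version.
    # image:    H x W x C float array
    # roi_mask: H x W boolean array, True inside the important region
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma) for c in range(image.shape[-1])],
        axis=-1,
    )
    keep = roi_mask[..., None].astype(image.dtype)
    return keep * image + (1.0 - keep) * blurred

def focus_score(heat_map, roi_mask, eps=1e-8):
    # Focus metric (assumed formulation): fraction of the non-negative
    # heat-map mass that falls inside the region-of-interest mask; a value
    # near 1.0 means the explanation concentrates on the important region.
    h = np.clip(heat_map, 0.0, None)
    return float(h[roi_mask].sum() / (h.sum() + eps))

Under these assumptions, training images would be passed through blur_outside_roi before being fed to the CNN, and heat maps from an explainable-AI method such as Grad-CAM would be scored with focus_score against the instrument mask; higher scores indicate explanations that stay on the region of interest.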
Pages: 121-129
Number of pages: 9
Related Papers
50 records in total
  • [31] An advanced LAN model based on optimized feature algorithm: Towards hypertension interpretability
    Agham, Nishigandha Dnyaneshwar
    Chaskar, Uttam M.
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2021, 68 (68)
  • [32] OCIE: Augmenting model interpretability via Deconfounded Explanation-Guided Learning
    Dong, Liang
    Chen, Leiyang
    Zheng, Chengliang
    Fu, Zhongwang
    Zukaib, Umer
    Cui, Xiaohui
    Shen, Zhidong
    KNOWLEDGE-BASED SYSTEMS, 2024, 302
  • [33] Enhancing interpretability in film shot analysis through continuous shot integration and saliency maps
    Lu, Fengtian
    Li, Yuzhi
    Tian, Feng
    SIGNAL IMAGE AND VIDEO PROCESSING, 2025, 19 (04)
  • [34] Decoding pulsatile patterns of cerebrospinal fluid dynamics through enhancing interpretability in machine learning
    Keles, Ayse
    Ozisik, Pinar Akdemir
    Algin, Oktay
    Celebi, Fatih Vehbi
    Bendechache, Malika
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [35] On the Interpretability of Regularisation for Neural Networks Through Model Gradient Similarity
    Szolnoky, Vincent
    Andersson, Viktor
    Kulcsar, Balazs
    Jornsten, Rebecka
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [36] Enhancing training efficiency and effectiveness through the use of dyad training
    Shea, CH
    Wulf, G
    Whitacre, C
    JOURNAL OF MOTOR BEHAVIOR, 1999, 31 (02) : 119 - 125
  • [37] Enhancing Sustainability in Finance: Throughput Model focused decisions
    Rodgers, Waymond
    Soderbom, Arne
    Reid, Graeme
    IFKAD 2014: 9TH INTERNATIONAL FORUM ON KNOWLEDGE ASSET DYNAMICS: KNOWLEDGE AND MANAGEMENT MODELS FOR SUSTAINABLE GROWTH, 2014, : 2540 - 2545
  • [38] Enhancing cross-modality person re-identification through attention-guided asymmetric feature learning
    Song, Xuehua
    Zhou, Junxing
    Jin, Hua
    Yuan, Xin
    Wang, Changda
    MULTIMEDIA SYSTEMS, 2025, 31 (02)
  • [39] AFS-BM: enhancing model performance through adaptive feature selection with binary masking
    Turali, Mehmet Y.
    Lorasdagi, Mehmet E.
    Kozat, Suleyman S.
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (11) : 7571 - 7582
  • [40] Enhancing Feature Extraction Technique Through Spatial Deep Learning Model for Facial Emotion Detection
    Khan N.
    Singh A.V.
    Agrawal R.
    Annals of Emerging Technologies in Computing, 2023, 7 (02) : 9 - 22