Interpreting Deep Machine Learning Models: An Easy Guide for Oncologists

Cited by: 0
Authors
Amorim, Jose Pereira [1 ,2 ]
Abreu, Pedro Henriques [1 ]
Fernandez, Alberto [3 ]
Reyes, Mauricio [4 ,5 ]
Santos, Joao [2 ,6 ]
Abreu, Miguel Henriques [7 ]
Affiliations
[1] Univ Coimbra, Dept Informat Engn, CISUC, P-3030290 Coimbra, Portugal
[2] Portuguese Inst Oncol Porto, IPO Porto Res Ctr, P-4200072 Porto, Portugal
[3] Univ Granada, DaSCI Andalusian Res Inst, Granada 18071, Spain
[4] Bern Univ Hosp, Data Sci Ctr, Inselspital, CH-3010 Bern, Switzerland
[5] Univ Bern, ARTORG Ctr Biomed Res, CH-3008 Bern, Switzerland
[6] ICBAS Inst Ciencias Biomed Abel Salazar, P-4050313 Porto, Portugal
[7] Portuguese Oncol Inst Porto, Dept Med Oncol, P-4200072 Porto, Portugal
Keywords
Big Data; interpretability; deep learning; decision-support systems; oncology
DOI
Not available
Chinese Library Classification (CLC)
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
Healthcare agents, particularly in the oncology field, currently collect vast amounts of diverse patient data. In this context, some decision-support systems, mostly based on deep learning techniques, have already been approved for clinical purposes. Despite all the efforts to introduce artificial intelligence methods into the workflow of clinicians, their lack of interpretability - that is, understanding how the methods make decisions - still inhibits their dissemination in clinical practice. The aim of this article is to present an easy guide for oncologists explaining how these methods make decisions and illustrating the strategies used to explain them. Theoretical concepts were illustrated with oncological examples, and a literature review of research works was performed in PubMed between January 2014 and September 2020, using "deep learning techniques," "interpretability," and "oncology" as keywords. Overall, more than 60% of the works are related to breast, skin, or brain cancers, and the majority focused on explaining the importance of tumor characteristics (e.g., dimension, shape) in the predictions. The most used computational methods are multilayer perceptrons and convolutional neural networks. Nevertheless, despite being successfully applied in different cancer scenarios, endowing deep learning techniques with interpretability, while maintaining their performance, continues to be one of the greatest challenges of artificial intelligence.
Pages: 192-207
Page count: 16
Related Papers
50 records total
  • [1] Interpreting Deep Machine Learning Models: An Easy Guide for Oncologists
    Amorim, Jose P.
    Abreu, Pedro H.
    Fernandez, Alberto
    Reyes, Mauricio
    Santos, Joao
    Abreu, Miguel H.
    IEEE REVIEWS IN BIOMEDICAL ENGINEERING, 2023, 16: 192-207
  • [2] Interpreting Deep Learning Models for Multimodal Neuroimaging
    Mueller, K. R.
    Hofmann, S. M.
    2023 11TH INTERNATIONAL WINTER CONFERENCE ON BRAIN-COMPUTER INTERFACE, BCI, 2023
  • [3] Interpreting Deep Learning Models for Knowledge Tracing
    Lu, Yu
    Wang, Deliang
    Chen, Penghe
    Meng, Qinggang
    Yu, Shengquan
    INTERNATIONAL JOURNAL OF ARTIFICIAL INTELLIGENCE IN EDUCATION, 2023, 33 (03): 519-542
  • [4] Interpreting deep learning models for weak lensing
    Matilla, Jose Manuel Zorrilla
    Sharma, Manasi
    Hsu, Daniel
    Haiman, Zoltan
    PHYSICAL REVIEW D, 2020, 102 (12)
  • [5] NeuralVis: Visualizing and Interpreting Deep Learning Models
    Zhang, Xufan
    Yin, Ziyue
    Feng, Yang
    Shi, Qingkai
    Liu, Jia
    Chen, Zhenyu
    34TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE 2019), 2019: 1106-1109
  • [6] Interpreting tree ensemble machine learning models with endoR
    Ruaud, Albane
    Pfister, Niklas
    Ley, Ruth E.
    Youngblut, Nicholas D.
    PLOS COMPUTATIONAL BIOLOGY, 2022, 18 (12)
  • [7] Interpreting mental state decoding with deep learning models
    Thomas, Armin W.
    Re, Christopher
    Poldrack, Russell A.
    TRENDS IN COGNITIVE SCIENCES, 2022, 26 (11): 972-986
  • [8] Understanding and interpreting artificial intelligence, machine learning and deep learning in Emergency Medicine
    Ramlakhan, Shammi
    Saatchi, Reza
    Sabir, Lisa
    Singh, Yardesh
    Hughes, Ruby
    Shobayo, Olamilekan
    Ventour, Dale
    EMERGENCY MEDICINE JOURNAL, 2022, 39 (05): 380-385
  • [9] Interpreting weights of multimodal machine learning models—problems and pitfalls
    Winter, Nils Ralf
    Goltermann, Janik
    Dannlowski, Udo
    Hahn, Tim
    NEUROPSYCHOPHARMACOLOGY, 2021, 46: 1861-1862