Interpreting Deep Machine Learning Models: An Easy Guide for Oncologists

Cited by: 0
Authors
Amorim, Jose Pereira [1 ,2 ]
Abreu, Pedro Henriques [1 ]
Fernandez, Alberto [3 ]
Reyes, Mauricio [4 ,5 ]
Santos, Joao [2 ,6 ]
Abreu, Miguel Henriques [7 ]
Affiliations
[1] Univ Coimbra, Dept Informat Engn, CISUC, P-3030290 Coimbra, Portugal
[2] Portuguese Inst Oncol Porto, IPO Porto Res Ctr, P-4200072 Porto, Portugal
[3] Univ Granada, DaSCI Andalusian Res Inst, Granada 18071, Spain
[4] Bern Univ Hosp, Data Sci Ctr, Inselspital, CH-3010 Bern, Switzerland
[5] Univ Bern, ARTORG Ctr Biomed Res, CH-3008 Bern, Switzerland
[6] ICBAS Inst Ciencias Biomed Abel Salazar, P-4050313 Porto, Portugal
[7] Portuguese Oncol Inst Porto, Dept Med Oncol, P-4200072 Porto, Portugal
Keywords
Big Data; interpretability; deep learning; decision-support systems; oncology;
DOI
Not available
Chinese Library Classification (CLC)
R318 [Biomedical Engineering];
Discipline Classification Code
0831;
Abstract
Healthcare agents, particularly in the oncology field, are currently collecting vast amounts of diverse patient data. In this context, some decision-support systems, mostly based on deep learning techniques, have already been approved for clinical purposes. Despite all the efforts to introduce artificial intelligence methods into the workflow of clinicians, their lack of interpretability (understanding how the methods make decisions) still inhibits their dissemination in clinical practice. The aim of this article is to present an easy guide for oncologists explaining how these methods make decisions and illustrating the strategies available to explain them. Theoretical concepts were illustrated with oncological examples, and a literature review of research works was performed in PubMed from January 2014 to September 2020, using "deep learning techniques," "interpretability," and "oncology" as keywords. Overall, more than 60% of the retrieved works relate to breast, skin, or brain cancers, and the majority focus on explaining the importance of tumor characteristics (e.g., dimension, shape) in the predictions. The most used computational methods are multilayer perceptrons and convolutional neural networks. Nevertheless, despite being successfully applied in different cancer scenarios, endowing deep learning techniques with interpretability while maintaining their performance continues to be one of the greatest challenges of artificial intelligence.
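To make the explanation strategies mentioned in the abstract concrete, the following is a minimal sketch (in PyTorch) of a gradient-based saliency map, one common way of attributing a convolutional network's prediction to input regions such as tumor dimension or shape. This example is not taken from the article; the toy network and random input are hypothetical stand-ins for a trained tumor classifier and a scan patch.

    # Gradient-based saliency: the gradient of the predicted class score
    # with respect to each input pixel indicates how strongly that pixel
    # influences the decision.
    import torch
    import torch.nn as nn

    # Toy CNN standing in for a trained tumor classifier (hypothetical).
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
    )
    model.eval()

    # Stand-in for a 64x64 grayscale scan patch.
    image = torch.randn(1, 1, 64, 64, requires_grad=True)
    scores = model(image)
    scores[0, scores.argmax()].backward()  # gradient of the winning class score

    saliency = image.grad.abs().squeeze()  # per-pixel importance map, 64x64
    print(saliency.shape, float(saliency.max()))

In practice the saliency map would be overlaid on the original image so a clinician can check whether the model attends to clinically meaningful regions; perturbation-based alternatives such as LIME and SHAP (see related papers below) serve the same purpose without requiring gradient access.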
Pages: 192-207
Number of pages: 16
Related Papers
50 records in total
  • [41] An Efficient and Generic Method for Interpreting Deep Learning based Knowledge Tracing Models
    Wang, Deliang
    Lu, Yu
    Zhang, Zhi
    Chen, Penghe
    31ST INTERNATIONAL CONFERENCE ON COMPUTERS IN EDUCATION, ICCE 2023, VOL I, 2023, : 2 - 11
  • [42] A guide to interpreting and assessing the performance of prediction models
    Farooq, Vasim
    Brugaletta, Salvatore
    Vranckx, Pascal
    Serruys, Patrick W.
    EUROINTERVENTION, 2011, 6 (08) : 909+
  • [43] Towards interpreting multi-temporal deep learning models in crop mapping
    Xu, Jinfan
    Yang, Jie
    Xiong, Xingguo
    Li, Haifeng
    Huang, Jingfeng
    Ting, K. C.
    Ying, Yibin
    Lin, Tao
    REMOTE SENSING OF ENVIRONMENT, 2021, 264
  • [44] Explainable AI for Retinoblastoma Diagnosis: Interpreting Deep Learning Models with LIME and SHAP
    Aldughayfiq, Bader
    Ashfaq, Farzeen
    Jhanjhi, N. Z.
    Humayun, Mamoona
    DIAGNOSTICS, 2023, 13 (11)
  • [45] Towards interpreting deep learning models for industry 4.0 with gated mixture of experts
    Chaoub, Alaaeddine
    Cerisara, Christophe
    Voisin, Alexandre
    Iung, Benoit
    2022 30TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2022), 2022, : 1412 - 1416
  • [46] Interpreting Deep Text Quantification Models
    Bang, YunQi
    Khaleel, Mohammed
    Tavanapong, Wallapak
    DATABASE AND EXPERT SYSTEMS APPLICATIONS, DEXA 2023, PT II, 2023, 14147 : 310 - 324
  • [47] Building and Interpreting Deep Similarity Models
    Eberle, Oliver
    Buettner, Jochen
    Kraeutli, Florian
    Mueller, Klaus-Robert
    Valleriani, Matteo
    Montavon, Gregoire
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (03) : 1149 - 1161
  • [48] Quasar: Easy Machine Learning for Biospectroscopy
    Toplak, Marko
    Read, Stuart T.
    Sandt, Christophe
    Borondics, Ferenc
    CELLS, 2021, 10 (09)
  • [49] Interpreting machine learning models to investigate circadian regulation and facilitate exploration of clock function
    Gardiner, Laura-Jayne
    Rusholme-Pilcher, Rachel
    Colmer, Josh
    Rees, Hannah
    Crescente, Juan Manuel
    Carrieri, Anna Paola
    Duncan, Susan
    Pyzer-Knapp, Edward O.
    Krishna, Ritesh
    Hall, Anthony
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (32)
  • [50] Interpreting machine learning models based on SHAP values in predicting suspended sediment concentration
    Lamane, Houda
    Mouhir, Latifa
    Moussadek, Rachid
    Baghdad, Bouamar
    Kisi, Ozgur
    El Bilali, Ali
    INTERNATIONAL JOURNAL OF SEDIMENT RESEARCH, 2025, 40 (01) : 91 - 107