Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization

Cited by: 1
Authors
Rotem, Oded [1 ]
Schwartz, Tamar [2 ]
Maor, Ron [2 ]
Tauber, Yishay [2 ]
Shapiro, Maya Tsarfati [2 ]
Meseguer, Marcos [3 ,4 ]
Gilboa, Daniella [2 ]
Seidman, Daniel S. [2 ,5 ]
Zaritsky, Assaf [1 ]
Affiliations
[1] Ben Gurion Univ Negev, Dept Software & Informat Syst Engn, IL-84105 Beer Sheva, Israel
[2] AIVF Ltd, IL-69271 Tel Aviv, Israel
[3] IVI Fdn Inst Invest Sanit La Fe Valencia, Valencia 46026, Spain
[4] IVIRMA Valencia, Dept Reprod Med, Valencia 46015, Spain
[5] Tel Aviv Univ, Fac Med, IL-69978 Tel Aviv, Israel
Keywords
DIABETIC-RETINOPATHY; LIVE BIRTH; TROPHECTODERM MORPHOLOGY; BLASTOCYST TRANSFER; LEARNING-MODELS; DEEP; PREDICTION; PREGNANCY; VALIDATION; ALGORITHM;
DOI
10.1038/s41467-024-51136-9
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [Natural Sciences, General];
Subject Classification Codes
07; 0710; 09;
Abstract
The success of deep learning in identifying complex patterns exceeding human intuition comes at the cost of interpretability. Non-linear entanglement of image features makes deep learning a "black box" lacking human-meaningful explanations for the model's decisions. We present DISCOVER, a generative model designed to discover the underlying visual properties driving image-based classification models. DISCOVER learns disentangled latent representations, where each latent feature encodes a unique classification-driving visual property. This design enables "human-in-the-loop" interpretation by generating disentangled, exaggerated counterfactual explanations. We apply DISCOVER to interpret the classification of in vitro fertilization embryo morphology quality. We quantitatively and systematically confirm the interpretation of known embryo properties, discover properties that lack previous explicit measurements, and quantitatively determine and empirically verify the classification decisions for specific embryo instances. We show that DISCOVER provides human-interpretable understanding of "black box" classification models, proposes hypotheses to decipher underlying biomedical mechanisms, and provides transparency for individual classification predictions.

Identifying complex patterns through deep learning often comes at the cost of interpretability. Focusing on the interpretation of the classification of in vitro fertilization embryos, the authors present DISCOVER, an approach that enables visual interpretability of image-based classification models.
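The abstract describes the core mechanism only verbally: each disentangled latent dimension encodes one classification-driving visual property, and amplifying a single dimension before decoding yields an exaggerated counterfactual image whose change in classifier score indicates what that property contributes to the decision. The sketch below illustrates this latent-traversal idea under toy assumptions; the Encoder, Decoder, and Classifier modules, their sizes, the amplification step, and the random placeholder image are illustrative stand-ins, not the authors' DISCOVER implementation.

```python
# Minimal sketch (not the authors' code) of latent-traversal counterfactual
# explanations: encode an image into a disentangled latent vector, amplify one
# latent dimension at a time, decode, and record how a frozen classifier's
# score changes. All sizes and modules below are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 16          # assumed size of the disentangled latent space
IMG_SIZE = 64            # assumed (grayscale) embryo image resolution

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(IMG_SIZE * IMG_SIZE, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_SIZE * IMG_SIZE), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 1, IMG_SIZE, IMG_SIZE)

class Classifier(nn.Module):
    """Stand-in for the pretrained 'black box' embryo-quality classifier."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(IMG_SIZE * IMG_SIZE, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def latent_traversal_effects(image, encoder, decoder, classifier, step=2.0):
    """Amplify each latent dimension in turn and record the change in the
    classifier score, i.e. how strongly each disentangled feature drives the
    classification of this particular image."""
    with torch.no_grad():
        z = encoder(image)
        base_score = classifier(decoder(z)).item()
        effects = []
        for k in range(LATENT_DIM):
            z_k = z.clone()
            z_k[:, k] += step                  # exaggerate feature k
            counterfactual = decoder(z_k)      # generated explanation image
            effects.append(classifier(counterfactual).item() - base_score)
    return base_score, effects

if __name__ == "__main__":
    torch.manual_seed(0)
    enc, dec, clf = Encoder(), Decoder(), Classifier()   # untrained toy models
    img = torch.rand(1, 1, IMG_SIZE, IMG_SIZE)           # placeholder image
    base, effects = latent_traversal_effects(img, enc, dec, clf)
    top = max(range(LATENT_DIM), key=lambda k: abs(effects[k]))
    print(f"base score {base:.3f}; most influential latent dim: {top}")
```

In the paper's setting, ranking latent dimensions by the magnitude of such score changes is what supports instance-level interpretation; the toy ranking printed here stands in for that idea only.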
Pages: 19
Related Papers
50 records in total
  • [1] Robustness of Image-Based Malware Classification Models trained with Generative Adversarial Networks
    Reilly, Ciaran
    O'Shaughnessy, Stephen
    Thorpe, Christina
    PROCEEDINGS OF THE 2023 EUROPEAN INTERDISCIPLINARY CYBERSECURITY CONFERENCE, EICC 2023, 2023, : 92 - 99
  • [2] Generative adversarial networks and image-based malware classification
    Nguyen, Huy
    Di Troia, Fabio
    Ishigaki, Genya
    Stamp, Mark
    JOURNAL OF COMPUTER VIROLOGY AND HACKING TECHNIQUES, 2023, 19 (04) : 579 - 595
  • [4] Generative models for grid-based and image-based pathfinding
    Kirilenko, Daniil
    Andreychuk, Anton
    Panov, Aleksandr I.
    Yakovlev, Konstantin
    ARTIFICIAL INTELLIGENCE, 2025, 338
  • [5] A visual interpretability method to unbox 'black-box' deep learning image-based classification of embryo properties
    Rotem, O.
    Gilboa, D.
    Seidman, D.
    Zaritsky, A.
    Maor, R.
    Meseguer, M.
    Shapiro, M.
    Schwartz, T.
    HUMAN REPRODUCTION, 2024, 39 : I262 - I262
  • [6] TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models
    Chatterjee, Soumick
    Das, Arnab
    Mandal, Chirag
    Mukhopadhyay, Budhaditya
    Vipinraj, Manish
    Shukla, Aniruddh
    Rao, Rajatha Nagaraja
    Sarasaen, Chompunuch
    Speck, Oliver
    Nuernberger, Andreas
    APPLIED SCIENCES-BASEL, 2022, 12 (04):
  • [7] Visual Insights from the Latent Space of Generative Models for Molecular Design
    Cavallaro, Salvatore
    Vellido, Alfredo
    Konig, Caroline
    ADVANCES IN SELF-ORGANIZING MAPS, LEARNING VECTOR QUANTIZATION, CLUSTERING AND DATA VISUALIZATION: DEDICATED TO THE MEMORY OF TEUVO KOHONEN, WSOM+ 2022, 2022, 533 : 108 - 117
  • [8] An Image-Based Approach to Visual Feature Space Analysis
    Schreck, Tobias
    Schneidewind, Joern
    Keim, Daniel A.
    WSCG 2008, COMMUNICATION PAPERS, 2008, : 223 - +
  • [9] An Image-based Visual Localization Approach to Urban Space
    Liao, Xuan
    Li, Ming
    Chen, Ruizhi
    Guo, Bingxuan
    Wang, Xiqi
    PROCEEDINGS OF 5TH IEEE CONFERENCE ON UBIQUITOUS POSITIONING, INDOOR NAVIGATION AND LOCATION-BASED SERVICES (UPINLBS), 2018, : 282 - 286
  • [10] Uncalibrated Image-based Visual Servoing Based on Joint Space and Image moment
    Wu, Dongjie
    Zhong, Xungao
    Zhang, Xiaoli
    Peng, Xiafu
    Zou, Chaosheng
    2018 37TH CHINESE CONTROL CONFERENCE (CCC), 2018, : 5391 - 5397