Out-of-distribution generalization for segmentation of lymph node metastasis in breast cancer

Cited by: 0
Authors
Varnava, Yiannis [1]
Jakate, Kiran [9]
Garnett, Richard [1]
Androutsos, Dimitrios [1]
Tyrrell, Pascal N. [2,3,4]
Khademi, April [1,2,4,5,6,7,8]
Affiliations
[1] Toronto Metropolitan Univ, Dept Elect Comp & Biomed Engn, Toronto, ON, Canada
[2] Univ Toronto, Dept Med Imaging, Toronto, ON, Canada
[3] Univ Toronto, Dept Stat Sci, Toronto, ON, Canada
[4] Univ Toronto, Inst Med Sci, Toronto, ON, Canada
[5] St Michaels Hosp, Keenan Res Ctr Biomed Sci, Unity Hlth Toronto, Toronto, ON, Canada
[6] St Michaels Hosp, Inst Biomed Engn Sci Tech iBEST, Partnership St, Toronto, ON, Canada
[7] Toronto Metropolitan Univ, Toronto, ON, Canada
[8] Vector Inst Artificial Intelligence, Toronto, ON, Canada
[9] Unity Hlth Toronto, Toronto, ON, Canada
Source
SCIENTIFIC REPORTS | 2025, Vol. 15, Issue 01
Keywords
Histopathology; Lymph node; Breast cancer; Deep learning; Segmentation; Generalization; NORMALIZATION; PATHOLOGISTS; EQUIVALENCE; SEPARATION; TESTS;
DOI
10.1038/s41598-024-80495-y
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Code
07; 0710; 09;
Abstract
Pathology provides the definitive diagnosis, and Artificial Intelligence (AI) tools are poised to improve pathologists' accuracy, inter-rater agreement, and turn-around time (TAT), leading to improved quality of care. A high-value clinical application is the grading of Lymph Node Metastasis (LNM), which is used for breast cancer staging and guides treatment decisions. A challenge to implementing AI tools widely for LNM classification is domain shift, where Out-of-Distribution (OOD) data has a different distribution than the In-Distribution (ID) data used to train the model, resulting in a drop in performance on OOD data. This work proposes a novel clustering and sampling method to automatically curate training datasets in an unsupervised manner, with the aim of improving model generalization. To evaluate the generalization performance of the proposed models, we applied a novel use of the Two One-Sided Tests (TOST) method, which examines whether performance on ID and OOD data is equivalent, serving as a proxy for generalization. We provide the first evidence for computing equivalence margins that are data-dependent, which reduces subjectivity. The proposed framework shows that ensembles constructed from models that generalized across both tumor and normal patches enhance performance, achieving an F1 score of 0.81 for LNM classification on unseen ID and OOD samples. Interactive viewing of slide-level segmentations can be accessed on PathcoreFlow (TM) through https://web.pathcore.com/folder/18555?s=QTJVHJuhrfe5. Segmentation models are available at https://github.com/IAMLAB-Ryerson/OOD-Generalization-LNM.
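The TOST procedure mentioned in the abstract can be sketched as follows. This is a minimal illustration of a standard two-sample TOST on performance scores, not the authors' implementation: the function name `tost_equivalence` and the fixed `margin` argument are assumptions here, whereas the paper derives data-dependent equivalence margins.

```python
import numpy as np
from scipy import stats

def tost_equivalence(id_scores, ood_scores, margin):
    """Two One-Sided Tests (TOST) for equivalence of mean performance.

    ID and OOD performance are declared equivalent when the mean
    difference lies inside (-margin, +margin), i.e. when both one-sided
    t-tests reject their null. Returns the TOST p-value (the larger of
    the two one-sided p-values); equivalence at level alpha when p < alpha.
    """
    id_scores = np.asarray(id_scores, dtype=float)
    ood_scores = np.asarray(ood_scores, dtype=float)
    n1, n2 = len(id_scores), len(ood_scores)
    diff = id_scores.mean() - ood_scores.mean()

    # Pooled standard error (equal-variance two-sample t statistic)
    sp2 = ((n1 - 1) * id_scores.var(ddof=1)
           + (n2 - 1) * ood_scores.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2

    # Test 1: H0: diff <= -margin  vs  H1: diff > -margin
    p_lower = stats.t.sf((diff + margin) / se, df)
    # Test 2: H0: diff >= +margin  vs  H1: diff < +margin
    p_upper = stats.t.cdf((diff - margin) / se, df)

    # Equivalence requires BOTH tests to be significant
    return max(p_lower, p_upper)
```

With a generous margin, near-identical ID and OOD F1 samples yield a small p-value (equivalence), while a very tight margin yields a large one (equivalence cannot be claimed); the choice of margin is exactly the subjective step that the paper's data-dependent margins aim to remove.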
Pages: 16
Related Papers
50 total
  • [11] Out-of-distribution generalization for learning quantum dynamics
    Caro, Matthias C.
    Huang, Hsin-Yuan
    Ezzell, Nicholas
    Gibbs, Joe
    Sornborger, Andrew T.
    Cincio, Lukasz
    Coles, Patrick J.
    Holmes, Zoe
    NATURE COMMUNICATIONS, 2023, 14 (01)
  • [12] On the Adversarial Robustness of Out-of-distribution Generalization Models
    Zou, Xin
    Liu, Weiwei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [13] On the Out-of-distribution Generalization of Probabilistic Image Modelling
    Zhang, Mingtian
    Zhang, Andi
    McDonagh, Steven
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [14] Assaying Out-Of-Distribution Generalization in Transfer Learning
    Wenzel, Florian
    Dittadi, Andrea
    Gehler, Peter
    Simon-Gabriel, Carl-Johann
    Horn, Max
    Zietlow, Dominik
    Kernert, David
    Russell, Chris
    Brox, Thomas
    Schiele, Bernt
    Scholkopf, Bernhard
    Locatello, Francesco
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [15] Out-of-distribution Generalization and Its Applications for Multimedia
    Wang, Xin
    Cui, Peng
    Zhu, Wenwu
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 5681 - 5682
  • [16] Out-of-Distribution Generalization With Causal Feature Separation
    Wang, Haotian
    Kuang, Kun
    Lan, Long
    Wang, Zige
    Huang, Wanrong
    Wu, Fei
    Yang, Wenjing
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (04) : 1758 - 1772
  • [17] A Stable Vision Transformer for Out-of-Distribution Generalization
    Yu, Haoran
    Liu, Baodi
    Wang, Yingjie
    Zhang, Kai
    Tao, Dapeng
    Liu, Weifeng
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT VIII, 2024, 14432 : 328 - 339
  • [18] Counterfactual Active Learning for Out-of-Distribution Generalization
    Deng, Xun
    Wang, Wenjie
    Feng, Fuli
    Zhang, Hanwang
    He, Xiangnan
    Liao, Yong
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 11362 - 11377
  • [19] Diverse Weight Averaging for Out-of-Distribution Generalization
    Rame, Alexandre
    Kirchmeyer, Matthieu
    Rahier, Thibaud
    Rakotomamonjy, Alain
    Gallinari, Patrick
    Cord, Matthieu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [20] Out-of-distribution Generalization with Causal Invariant Transformations
    Wang, Ruoyu
    Yi, Mingyang
    Chen, Zhitang
    Zhu, Shengyu
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 375 - 385