Neighborhood attention transformer multiple instance learning for whole slide image classification

Citations: 0
Authors
Aftab, Rukhma [1 ]
Yan, Qiang [1 ,2 ]
Zhao, Juanjuan [1 ]
Yong, Gao [3 ]
Huajie, Yue [4 ]
Urrehman, Zia [1 ]
Khalid, Faizi Mohammad [1 ]
Affiliations
[1] Taiyuan Univ Technol, Coll Comp Sci & Technol, Coll Data Sci, Taiyuan, Shanxi, Peoples R China
[2] North Univ China, Sch Software, Taiyuan, Shanxi, Peoples R China
[3] Sinopharm Tongmei Gen Hosp, Dept Resp & Crit Care Med, Datong, Shanxi, Peoples R China
[4] Shanxi Med Univ, Hosp 1, Taiyuan, Shanxi, Peoples R China
Source
FRONTIERS IN ONCOLOGY | 2024, Vol. 14
Funding
National Natural Science Foundation of China;
Keywords
attention transformer; whole slide images; multiple instance learning; lung cancer; weakly supervised learning;
DOI
10.3389/fonc.2024.1389396
CLC Number
R73 [Oncology]
Subject Classification Code
100214
Abstract
Introduction: Pathologists rely on whole slide images (WSIs) to diagnose cancer by identifying tumor cells and subtypes. Deep learning models, particularly weakly supervised ones, classify WSIs from image tiles but can produce false positives and false negatives because tumors are heterogeneous: both cancerous and healthy cells proliferate in patterns that extend beyond individual tiles, so tile-level errors propagate into inaccurate tumor-level classifications.
Methods: To address this limitation, we introduce NATMIL (Neighborhood Attention Transformer Multiple Instance Learning), which uses the Neighborhood Attention Transformer to capture contextual dependencies among WSI tiles. By integrating broader tissue context into multiple instance learning, NATMIL reduces the errors associated with analyzing tiles in isolation.
Results: We quantitatively compared NATMIL against other weakly supervised algorithms on subtyping non-small cell lung cancer (NSCLC) and lymph node (LN) tumors. NATMIL achieved superior accuracy: 89.6% on the Camelyon dataset and 88.1% on the TCGA-LUSC dataset, outperforming existing methods.
Discussion: These findings show that NATMIL significantly improves tumor classification accuracy by incorporating contextual dependencies among tiles, highlighting NATMIL's potential as a robust tool for precise, WSI-based cancer diagnosis in pathology.
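The Methods paragraph combines two ideas: neighborhood attention, where each tile attends only to tiles within a local window rather than to the whole slide, and multiple instance learning, where tile embeddings are pooled into a single slide-level prediction. The sketch below is a minimal 1-D NumPy illustration of both steps, not the paper's implementation (NATMIL operates on a 2-D tile grid, and the function names and toy scoring vector here are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def neighborhood_attention(x, radius=1):
    """Each tile attends only to tiles within `radius` positions of it,
    a 1-D analogue of a neighborhood-attention window over WSI tiles."""
    n, d = x.shape
    out = np.zeros_like(x)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        neigh = x[lo:hi]                    # (k, d) local context window
        scores = neigh @ x[i] / np.sqrt(d)  # scaled dot-product scores
        out[i] = softmax(scores) @ neigh    # weighted local aggregate
    return out

def attention_pool(x):
    """MIL attention pooling: one score per tile, softmax-normalized,
    yielding a single bag (slide-level) embedding."""
    w = np.ones(x.shape[1]) / np.sqrt(x.shape[1])  # toy scoring vector
    a = softmax(x @ w)                             # per-tile weights
    return a @ x

rng = np.random.default_rng(0)
tiles = rng.normal(size=(6, 8))         # 6 tile embeddings, dim 8
ctx = neighborhood_attention(tiles, radius=1)
bag = attention_pool(ctx)               # slide-level representation
print(bag.shape)                        # → (8,)
```

Setting `radius` large enough to cover the whole sequence recovers ordinary full self-attention; the point of the windowed form is that cost scales with window size rather than slide size, which is what makes slide-wide contextual modeling of gigapixel WSIs tractable.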
Pages: 10
Related papers
50 records total
  • [41] DGR-MIL: Exploring Diverse Global Representation in Multiple Instance Learning for Whole Slide Image Classification
    Zhu, Wenhui
    Chen, Xiwen
    Qiu, Peijie
    Sotiras, Aristeidis
    Razi, Abolfazl
    Wang, Yalin
    COMPUTER VISION-ECCV 2024, PT XXXVIII, 2025, 15096 : 333 - 351
  • [42] Targeting tumor heterogeneity: multiplex-detection-based multiple instance learning for whole slide image classification
    Wang, Zhikang
    Bi, Yue
    Pan, Tong
    Wang, Xiaoyu
    Bain, Chris
    Bassed, Richard
    Imoto, Seiya
    Yao, Jianhua
    Daly, Roger J.
    Song, Jiangning
    BIOINFORMATICS, 2023, 39 (03)
  • [43] E2-MIL: An explainable and evidential multiple instance learning framework for whole slide image classification
    Shi, Jiangbo
    Li, Chen
    Gong, Tieliang
    Fu, Huazhu
    MEDICAL IMAGE ANALYSIS, 2024, 97
  • [44] CoD-MIL: Chain-of-Diagnosis Prompting Multiple Instance Learning for Whole Slide Image Classification
    Shi, Jiangbo
    Li, Chen
    Gong, Tieliang
    Wang, Chunbao
    Fu, Huazhu
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2025, 44 (03) : 1218 - 1229
  • [45] Pseudo-Bag Mixup Augmentation for Multiple Instance Learning-Based Whole Slide Image Classification
    Liu, Pei
    Ji, Luping
    Zhang, Xinyu
    Ye, Feng
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43 (05) : 1841 - 1852
  • [46] Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Bag-Level Classifier is a Good Instance-Level Teacher
    Wang, Hongyi
    Luo, Luyang
    Wang, Fang
    Tong, Ruofeng
    Chen, Yen-Wei
    Hu, Hongjie
    Lin, Lanfen
    Chen, Hao
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43 (11) : 3964 - 3976
  • [47] MuRCL: Multi-Instance Reinforcement Contrastive Learning for Whole Slide Image Classification
    Zhu, Zhonghang
    Yu, Lequan
    Wu, Wei
    Yu, Rongshan
    Zhang, Defu
    Wang, Liansheng
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2023, 42 (05) : 1337 - 1348
  • [48] Multi-scale representation attention based deep multiple instance learning for gigapixel whole slide image analysis
    Xiang, Hangchen
    Shen, Junyi
    Yan, Qingguo
    Xu, Meilian
    Shi, Xiaoshuang
    Zhu, Xiaofeng
    MEDICAL IMAGE ANALYSIS, 2023, 89
  • [49] Dual-stream Multiple Instance Learning Network for Whole Slide Image Classification with Self-supervised Contrastive Learning
    Li, Bin
    Li, Yin
    Eliceiri, Kevin W.
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 14313 - 14323
  • [50] A Graph-Transformer for Whole Slide Image Classification
    Zheng, Yi
    Gindra, Rushin H.
    Green, Emily J.
    Burks, Eric J.
    Betke, Margrit
    Beane, Jennifer E.
    Kolachalama, Vijaya B.
    IEEE Transactions on Medical Imaging, 2022, 41 (11) : 3003 - 3015