Neighborhood attention transformer multiple instance learning for whole slide image classification

Times Cited: 0
Authors
Aftab, Rukhma [1 ]
Yan, Qiang [1 ,2 ]
Zhao, Juanjuan [1 ]
Yong, Gao [3 ]
Huajie, Yue [4 ]
Urrehman, Zia [1 ]
Khalid, Faizi Mohammad [1 ]
Affiliations
[1] Taiyuan Univ Technol, Coll Comp Sci & Technol, Coll Data Sci, Taiyuan, Shanxi, Peoples R China
[2] North Univ China, Sch Software, Taiyuan, Shanxi, Peoples R China
[3] Sinopharm Tongmei Gen Hosp, Dept Resp & Crit Care Med, Datong, Shanxi, Peoples R China
[4] Shanxi Med Univ, Hosp 1, Taiyuan, Shanxi, Peoples R China
Source
FRONTIERS IN ONCOLOGY | 2024, Vol. 14
Funding
National Natural Science Foundation of China;
Keywords
attention transformer; whole slide images; multiple instance learning; lung cancer; weakly supervised learning;
DOI
10.3389/fonc.2024.1389396
CLC Classification
R73 [Oncology];
Discipline Code
100214;
Abstract
Introduction: Pathologists rely on whole slide images (WSIs) to diagnose cancer by identifying tumor cells and subtypes. Deep learning models, particularly weakly supervised ones, classify WSIs from image tiles, but tumor heterogeneity can lead them to produce false positives and false negatives. Both cancerous and healthy cells proliferate in patterns that extend beyond individual tiles, so errors at the tile level result in inaccurate tumor-level classifications.
Methods: To address this limitation, we introduce NATMIL (Neighborhood Attention Transformer Multiple Instance Learning), which uses the Neighborhood Attention Transformer to capture contextual dependencies among WSI tiles. NATMIL enhances multiple instance learning by integrating broader tissue context into the model, improving tumor classification accuracy and reducing the errors associated with analyzing tiles in isolation.
Results: We conducted a quantitative analysis comparing NATMIL against other weakly supervised algorithms. When applied to subtyping non-small cell lung cancer (NSCLC) and lymph node (LN) tumors, NATMIL demonstrated superior accuracy, reaching 89.6% on the Camelyon dataset and 88.1% on the TCGA-LUSC dataset and outperforming existing methods. These results underscore NATMIL's potential as a robust tool for improving the precision of cancer diagnosis from WSIs.
Discussion: Our findings show that NATMIL significantly improves tumor classification accuracy by reducing errors associated with isolated tile analysis. Integrating contextual dependencies enhances the precision of WSI-based cancer diagnosis, highlighting NATMIL's potential as a robust tool in pathology.
Pages: 10
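To make the Methods description in the abstract concrete, the sketch below illustrates the general idea in PyTorch: tile embeddings laid out on the slide grid are refined with local (neighborhood) self-attention so that each tile sees its spatial neighbors, and the context-aware features are then pooled with attention-based multiple instance learning into a single slide-level prediction. This is a minimal sketch, not the authors' implementation; all class names, window sizes, and dimensions are illustrative assumptions, and the published method builds on the Neighborhood Attention Transformer rather than the explicit loops used here.

# Illustrative sketch only (not NATMIL's released code): local window attention
# over a grid of WSI tile embeddings, followed by attention-based MIL pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalTileAttention(nn.Module):
    # Each tile attends only to tiles inside a (2r+1) x (2r+1) spatial window,
    # a simplified, loop-based stand-in for neighborhood attention.
    def __init__(self, dim: int, radius: int = 1):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.radius = radius
        self.scale = dim ** -0.5

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        # grid: (H, W, dim) tile embeddings laid out on the slide grid.
        H, W, D = grid.shape
        q, k, v = self.qkv(grid).chunk(3, dim=-1)
        out = torch.zeros_like(grid)
        r = self.radius
        for i in range(H):
            for j in range(W):
                # Neighborhood of tile (i, j), clipped at the slide borders.
                ks = k[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].reshape(-1, D)
                vs = v[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].reshape(-1, D)
                attn = F.softmax((q[i, j] @ ks.T) * self.scale, dim=-1)
                out[i, j] = attn @ vs
        return self.proj(out)

class NeighborhoodMIL(nn.Module):
    # Context-aware tile features -> attention-based MIL pooling -> slide label.
    def __init__(self, dim: int = 512, n_classes: int = 2, radius: int = 1):
        super().__init__()
        self.context = LocalTileAttention(dim, radius)
        self.score = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        feats = self.context(grid).reshape(-1, grid.shape[-1])  # (H*W, dim)
        weights = F.softmax(self.score(feats), dim=0)           # per-tile importance
        slide_embedding = (weights * feats).sum(dim=0)          # weighted pooling over the bag
        return self.head(slide_embedding)                       # slide-level logits

# Toy usage: a 16 x 16 grid of 512-d tile embeddings from some patch encoder.
logits = NeighborhoodMIL()(torch.randn(16, 16, 512))

In practice the double loop would be replaced by an optimized neighborhood or windowed attention kernel, several such layers would be stacked, and the tile embeddings would come from a frozen patch encoder; the sketch only shows how neighborhood context and MIL pooling fit together.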