MoMA: Momentum contrastive learning with multi-head attention-based knowledge distillation for histopathology image analysis

Cited by: 0
Authors
Vuong, Trinh Thi Le [1]
Kwak, Jin Tae [1]
Affiliation
[1] Korea Univ, Sch Elect Engn, Seoul, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Knowledge distillation; Momentum contrast; Multi-head self-attention; Computational pathology; CANCER;
DOI
10.1016/j.media.2024.103421
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
There is no doubt that advanced artificial intelligence models and high-quality data are the keys to success in developing computational pathology tools. Although the overall volume of pathology data keeps increasing, a lack of quality data is a common issue for any specific task, owing to several factors including privacy and ethical constraints on patient data. In this work, we propose to exploit knowledge distillation, i.e., to utilize an existing model to learn a new, target model, to overcome such issues in computational pathology. Specifically, we employ a student-teacher framework to learn a target model from a pre-trained teacher model without direct access to the source data, and distill relevant knowledge via momentum contrastive learning with a multi-head attention mechanism, which provides consistent and context-aware feature representations. This enables the target model to assimilate informative representations of the teacher model while seamlessly adapting to the unique nuances of the target data. The proposed method is rigorously evaluated across different scenarios in which the teacher model was trained on the same, a relevant, or an irrelevant classification task with respect to the target model. Experimental results demonstrate the accuracy and robustness of our approach in transferring knowledge across domains and tasks, outperforming other related methods. Moreover, the results provide a guideline on the learning strategy for different types of tasks and scenarios in computational pathology.
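The core mechanism the abstract describes — distilling a teacher's representations into a student via momentum contrastive learning — can be sketched in a few lines. The snippet below is a hypothetical, simplified illustration (in NumPy, on plain feature vectors) and is not the paper's implementation: the function names, the EMA momentum `m=0.999`, the temperature `tau=0.07`, and the memory queue of negatives are standard MoCo-style assumptions, and the paper's multi-head attention component is omitted.

```python
import numpy as np

def momentum_update(teacher_params, student_params, m=0.999):
    """EMA update of the momentum (teacher-side) weights from the student.
    Each parameter moves slowly toward the student's current value."""
    return [m * t + (1.0 - m) * s
            for t, s in zip(teacher_params, student_params)]

def l2_normalize(x, axis=1):
    """Project feature vectors onto the unit hypersphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_distill_loss(student_feat, teacher_feat, queue, tau=0.07):
    """InfoNCE-style distillation loss: for each image, the teacher's
    embedding is the positive; embeddings in the memory queue are negatives."""
    q = l2_normalize(student_feat)                 # (B, D) student embeddings
    k = l2_normalize(teacher_feat)                 # (B, D) teacher embeddings
    l_pos = np.sum(q * k, axis=1, keepdims=True)   # (B, 1) positive logits
    l_neg = q @ queue.T                            # (B, K) negative logits
    logits = np.concatenate([l_pos, l_neg], axis=1) / tau
    # cross-entropy with the positive always at column 0 (log-softmax form)
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[:, 0].mean()
```

Minimizing this loss pulls each student embedding toward the teacher's embedding of the same image and away from the queued negatives, which is how the student assimilates the teacher's representation without ever seeing the teacher's source data.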
Pages: 22