CsAGP: Detecting Alzheimer's disease from multimodal images via dual-transformer with cross-attention and graph pooling

Cited by: 12
Authors
Tang, Chaosheng [1 ]
Wei, Mingyang [1 ]
Sun, Junding [1 ]
Wang, Shuihua [1 ,2 ,3 ]
Zhang, Yudong [1 ,2 ,3 ]
Affiliations
[1] Henan Polytech Univ, Sch Comp Sci & Technol, Jiaozuo 454000, Henan, Peoples R China
[2] Univ Leicester, Sch Comp & Math Sci, Leicester LE1 7RH, England
[3] King Abdulaziz Univ, Fac Comp & Informat Technol, Dept Informat Syst, Jeddah 21589, Saudi Arabia
Keywords
Alzheimer's disease; Vision transformer; Multimodal image fusion; Deep learning; FUSION; MODEL;
DOI
10.1016/j.jksuci.2023.101618
CLC number
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Alzheimer's disease (AD) is a devastating neurodegenerative disease that commonly occurs in the elderly. Early detection can protect patients from further damage, which is crucial in treating AD. Over the past few decades, neuroimaging has proven to be a critical diagnostic tool for AD, and fusing features from different neuroimaging modalities can enhance diagnostic performance. Most previous studies of multimodal feature fusion have simply concatenated the high-level features that neural networks extract from the individual neuroimaging modalities. A major problem of these studies is that they overlook the low-level feature interactions between modalities during feature extraction, resulting in suboptimal performance in AD diagnosis. In this paper, we develop a dual-branch vision transformer with cross-attention and graph pooling, namely CsAGP, which enables multi-level feature interactions between the inputs to learn a shared feature representation. Specifically, we first construct a brand-new cross-attention fusion module (CAFM), which processes MRI and PET images through two independent branches of differing computational complexity. The two sets of features are fused solely through the cross-attention mechanism so that each modality enhances the other. After that, a concise Reshape-Pooling-Reshape (RPR) framework based on a graph pooling algorithm is developed for token selection, reducing token redundancy in the proposed model. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database demonstrate that the proposed method obtains 99.04%, 97.43%, 98.57%, and 98.72% accuracy for the classification of AD vs. CN, AD vs. MCI, CN vs. MCI, and AD vs. CN vs. MCI, respectively. © 2023 The Author(s). Published by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
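To make the cross-attention fusion idea in the abstract concrete, the following is a minimal PyTorch sketch of two token streams (an MRI branch and a lighter PET branch) enhancing each other through cross-attention. The module name CrossAttentionFusion, the dimensions, the token counts, and the use of nn.MultiheadAttention are illustrative assumptions made here; this is not the authors' CAFM implementation and it omits the graph-pooling (RPR) token-selection stage.

# Sketch only: cross-attention between two modality token streams,
# in the spirit of the CAFM described in the abstract (assumed design).
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Each branch queries the other branch's tokens so that features
    from the two modalities interact before the final fused representation."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.mri_to_pet = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pet_to_mri = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_mri = nn.LayerNorm(dim)
        self.norm_pet = nn.LayerNorm(dim)

    def forward(self, mri_tokens: torch.Tensor, pet_tokens: torch.Tensor):
        # MRI tokens attend to PET tokens (queries come from the MRI branch).
        mri_enh, _ = self.mri_to_pet(mri_tokens, pet_tokens, pet_tokens)
        # PET tokens attend to MRI tokens (queries come from the PET branch).
        pet_enh, _ = self.pet_to_mri(pet_tokens, mri_tokens, mri_tokens)
        # Residual connections keep each branch's own features alongside
        # the information borrowed from the other modality.
        mri_tokens = self.norm_mri(mri_tokens + mri_enh)
        pet_tokens = self.norm_pet(pet_tokens + pet_enh)
        return mri_tokens, pet_tokens

if __name__ == "__main__":
    fusion = CrossAttentionFusion()
    mri = torch.randn(2, 196, 256)  # (batch, tokens, dim) from the MRI branch
    pet = torch.randn(2, 49, 256)   # fewer tokens: a lighter PET branch
    mri_out, pet_out = fusion(mri, pet)
    print(mri_out.shape, pet_out.shape)  # (2, 196, 256) and (2, 49, 256)

The two branches may carry different numbers of tokens (reflecting the "differing computational complexity" mentioned in the abstract); cross-attention only requires that both streams share the same embedding dimension.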
Pages: 13