Multi-level fusion network for mild cognitive impairment identification using multi-modal neuroimages

Cited: 5
Authors
Xu, Haozhe [1 ,2 ,3 ]
Zhong, Shengzhou [1 ,2 ,3 ]
Zhang, Yu [1 ,2 ,3 ]
Affiliations
[1] Southern Med Univ, Sch Biomed Engn, Guangzhou 510515, Peoples R China
[2] Southern Med Univ, Guangdong Prov Key Lab Med Image Proc, Guangzhou 510515, Peoples R China
[3] Southern Med Univ, Guangdong Prov Engn Lab Med Imaging & Diagnost Tec, Guangzhou 510515, Peoples R China
Source
PHYSICS IN MEDICINE AND BIOLOGY | 2023, Vol. 68, No. 09
Funding
National Natural Science Foundation of China;
Keywords
mild cognitive impairment; multi-modal neuroimages; convolutional neural network; multi-level fusion; DISEASE; MRI; DEMENTIA; CLASSIFICATION; REPRESENTATION; PROGRESSION; PREDICTION; CONVERSION; DIAGNOSIS;
DOI
10.1088/1361-6560/accac8
CLC number
R318 [Biomedical Engineering];
Discipline code
0831;
Abstract
Objective. Mild cognitive impairment (MCI) is a precursor to Alzheimer's disease (AD), an irreversible, progressive neurodegenerative disease, so its early diagnosis and intervention are of great significance. Recently, many deep learning methods have demonstrated the advantages of multi-modal neuroimages in the MCI identification task. However, previous studies often simply concatenate patch-level features for prediction without modeling the dependencies among local features. In addition, many methods focus only on modality-sharable information or modality-specific features and ignore their integration. This work aims to address these issues and construct a model for accurate MCI identification. Approach. In this paper, we propose a multi-level fusion network for MCI identification using multi-modal neuroimages, which consists of a local representation learning stage and a dependency-aware global representation learning stage. Specifically, for each patient, we first extract multiple pairs of patches from the same positions in the multi-modal neuroimages. Then, in the local representation learning stage, multiple dual-channel sub-networks, each consisting of two modality-specific feature extraction branches and three sine-cosine fusion modules, are constructed to learn local features that preserve modality-sharable and modality-specific representations simultaneously. In the dependency-aware global representation learning stage, we further capture long-range dependencies among the local representations and integrate them into global ones for MCI identification. Main results. Experiments on the ADNI-1/ADNI-2 datasets demonstrate the superior performance of the proposed method in MCI identification tasks (accuracy: 0.802, sensitivity: 0.821, specificity: 0.767 in the MCI diagnosis task; accuracy: 0.849, sensitivity: 0.841, specificity: 0.856 in the MCI conversion task) compared with state-of-the-art methods.
The proposed classification model has demonstrated promising potential to predict MCI conversion and identify disease-related regions in the brain. Significance. We propose a multi-level fusion network for MCI identification using multi-modal neuroimages. The results on the ADNI datasets demonstrate its feasibility and superiority.
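The two-stage pipeline described in the abstract (modality-specific local feature extraction, sine-cosine fusion, then dependency-aware global aggregation) can be illustrated with a toy sketch. Note this is a hypothetical NumPy mock-up, not the authors' implementation: the paper's actual sub-networks are CNNs, and the exact form of the sine-cosine fusion module and the dependency model are not specified in the abstract, so the `extract_branch`, `sine_cosine_fusion`, and `self_attention` functions below are illustrative stand-ins.

```python
import numpy as np

def extract_branch(patches, W):
    # Modality-specific branch: one linear map + ReLU as a stand-in for
    # the CNN feature extractor applied to each patch.
    return np.maximum(patches @ W, 0.0)

def sine_cosine_fusion(f_mri, f_pet):
    # Illustrative fusion: combine sine/cosine transforms of both branches
    # so the fused feature mixes shared and complementary components.
    return np.sin(f_mri) * np.cos(f_pet) + np.cos(f_mri) * np.sin(f_pet)

def self_attention(F):
    # Dependency-aware stage: plain dot-product self-attention across the
    # patch-level features, then mean-pool into one global vector.
    scores = F @ F.T / np.sqrt(F.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights @ F).mean(axis=0)

rng = np.random.default_rng(0)
n_patches, d_in, d_feat = 8, 27, 16          # toy sizes, e.g. 3x3x3 patches
W_mri = rng.standard_normal((d_in, d_feat))  # branch weights, modality 1
W_pet = rng.standard_normal((d_in, d_feat))  # branch weights, modality 2

mri = rng.standard_normal((n_patches, d_in))  # patches from modality 1
pet = rng.standard_normal((n_patches, d_in))  # paired patches, modality 2

local = sine_cosine_fusion(extract_branch(mri, W_mri),
                           extract_branch(pet, W_pet))
global_repr = self_attention(local)  # one global vector per patient
print(global_repr.shape)             # (16,)
```

In the paper's setting, `global_repr` would feed a classifier for the MCI diagnosis or conversion label; the point of the sketch is only the data flow from paired same-position patches to a single dependency-aware global representation.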
Pages: 15
Related papers
50 items
  • [41] Multi-level, multi-modal interactions for visual question answering over text in images
    Chen, Jincai
    Zhang, Sheng
    Zeng, Jiangfeng
    Zou, Fuhao
    Li, Yuan-Fang
    Liu, Tao
    Lu, Ping
    World Wide Web, 2022, 25 (04) : 1607 - 1623
  • [42] Identifying Alzheimer's disease and mild cognitive impairment with atlas-based multi-modal metrics
    Long, Zhuqing
    Li, Jie
    Fan, Jianghua
    Li, Bo
    Du, Yukeng
    Qiu, Shuang
    Miao, Jichang
    Chen, Jian
    Yin, Juanwu
    Jing, Bin
    FRONTIERS IN AGING NEUROSCIENCE, 2023, 15
  • [44] Explanation as a Process: User-Centric Construction of Multi-level and Multi-modal Explanations
    Finzel, Bettina
    Tafler, David E.
    Scheele, Stephan
    Schmid, Ute
    ADVANCES IN ARTIFICIAL INTELLIGENCE, KI 2021, 2021, 12873 : 80 - 94
  • [45] Complex Multi-modal Multi-level Influence Networks - Affordable Housing Case Study
    Beautement, Patrick
    Broenner, Christine
    COMPLEX SCIENCES, PT 2, 2009, 5 : 2054 - 2063
  • [47] MRCap: Multi-modal and Multi-level Relationship-based Dense Video Captioning
    Chen, Wei
    Niu, Jianwei
    Liu, Xuefeng
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 2615 - 2620
  • [48] Comorbidity-driven multi-modal subtype analysis in mild cognitive impairment of Alzheimer's disease
    Katabathula, Sreevani
    Davis, Pamela B.
    Xu, Rong
    ALZHEIMERS & DEMENTIA, 2023, 19 (04) : 1428 - 1439
  • [49] Multi-modal Multi-class Parkinson Disease Classification Using CNN and Decision Level Fusion
    Sahu, Sushanta Kumar
    Chowdhury, Ananda S.
    PATTERN RECOGNITION AND MACHINE INTELLIGENCE, PREMI 2023, 2023, 14301 : 737 - 745
  • [50] A robust multi-level sparse classifier with multi-modal feature extraction for face recognition
    Vishwakarma, Virendra P.
    Mishra, Gargi
    INTERNATIONAL JOURNAL OF APPLIED PATTERN RECOGNITION, 2019, 6 (01) : 76 - 102