Multi-level fusion network for mild cognitive impairment identification using multi-modal neuroimages

Cited: 5
Authors
Xu, Haozhe [1,2,3]
Zhong, Shengzhou [1,2,3]
Zhang, Yu [1,2,3]
Affiliations
[1] Southern Med Univ, Sch Biomed Engn, Guangzhou 510515, Peoples R China
[2] Southern Med Univ, Guangdong Prov Key Lab Med Image Proc, Guangzhou 510515, Peoples R China
[3] Southern Med Univ, Guangdong Prov Engn Lab Med Imaging & Diagnost Tec, Guangzhou 510515, Peoples R China
Source
PHYSICS IN MEDICINE AND BIOLOGY | 2023, Vol. 68, No. 09
Funding
National Natural Science Foundation of China;
Keywords
mild cognitive impairment; multi-modal neuroimages; convolutional neural network; multi-level fusion; DISEASE; MRI; DEMENTIA; CLASSIFICATION; REPRESENTATION; PROGRESSION; PREDICTION; CONVERSION; DIAGNOSIS;
DOI
10.1088/1361-6560/accac8
CLC Classification Number
R318 [Biomedical Engineering];
Discipline Classification Code
0831;
Abstract
Objective. Mild cognitive impairment (MCI) is a precursor to Alzheimer's disease (AD), an irreversible, progressive neurodegenerative disease, so its early diagnosis and intervention are of great significance. Recently, many deep learning methods have demonstrated the advantages of multi-modal neuroimages in the MCI identification task. However, previous studies often simply concatenate patch-level features for prediction without modeling the dependencies among local features. In addition, many methods focus only on modality-sharable information or modality-specific features and ignore their combination. This work aims to address these issues and construct a model for accurate MCI identification. Approach. In this paper, we propose a multi-level fusion network for MCI identification using multi-modal neuroimages, which consists of a local representation learning stage and a dependency-aware global representation learning stage. Specifically, for each patient, we first extract multiple pairs of patches from the same positions in the multi-modal neuroimages. Then, in the local representation learning stage, multiple dual-channel sub-networks, each consisting of two modality-specific feature extraction branches and three sine-cosine fusion modules, are constructed to learn local features that preserve modality-sharable and modality-specific representations simultaneously. In the dependency-aware global representation learning stage, we further capture long-range dependencies among the local representations and integrate them into global ones for MCI identification. Main results. Experiments on the ADNI-1/ADNI-2 datasets demonstrate the superior performance of the proposed method in MCI identification tasks (accuracy: 0.802, sensitivity: 0.821, specificity: 0.767 in the MCI diagnosis task; accuracy: 0.849, sensitivity: 0.841, specificity: 0.856 in the MCI conversion task) compared with state-of-the-art methods.
The proposed classification model has demonstrated promising potential to predict MCI conversion and identify disease-related regions in the brain. Significance. We propose a multi-level fusion network for MCI identification using multi-modal neuroimages. The results on the ADNI datasets demonstrate its feasibility and superiority.
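The abstract's first step, extracting pairs of patches from the same positions across co-registered modality volumes (e.g. MRI and PET), can be sketched as below. This is a minimal illustration only: the function name, the patch size, and the center-sampling strategy are assumptions, since the record does not specify the paper's actual implementation details.

```python
import numpy as np

def extract_paired_patches(vol_a, vol_b, centers, size=25):
    """Extract patch pairs at identical voxel positions from two
    co-registered modality volumes (hypothetical helper; patch size
    and center selection are illustrative, not the paper's values)."""
    assert vol_a.shape == vol_b.shape, "volumes must be co-registered"
    half = size // 2
    pairs = []
    for (x, y, z) in centers:
        # Same slice indices applied to both modalities, so each pair
        # covers the same anatomical location.
        sl = (slice(x - half, x + half + 1),
              slice(y - half, y + half + 1),
              slice(z - half, z + half + 1))
        pairs.append((vol_a[sl], vol_b[sl]))
    return pairs

# Usage: one patch pair from the center of two dummy 64^3 volumes.
mri = np.zeros((64, 64, 64))
pet = np.ones((64, 64, 64))
pairs = extract_paired_patches(mri, pet, [(32, 32, 32)], size=25)
```

Each resulting pair would then feed one dual-channel sub-network, whose two branches process the two modalities separately before fusion.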
Pages: 15
Related Papers
50 records
  • [31] MLF3D: Multi-Level Fusion for Multi-Modal 3D Object Detection
    Jiang, Han; Wang, Jianbin; Xiao, Jianru; Zhao, Yanan; Chen, Wanqing; Ren, Yilong; Yu, Haiyang
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024: 1588-1593
  • [32] MMF-Track: Multi-Modal Multi-Level Fusion for 3D Single Object Tracking
    Li, Zhiheng; Cui, Yubo; Lin, Yu; Fang, Zheng
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2024, 9 (01): 1817-1829
  • [33] Self-supervised multi-modal fusion network for multi-modal thyroid ultrasound image diagnosis
    Xiang, Zhuo; Zhuo, Qiuluan; Zhao, Cheng; Deng, Xiaofei; Zhu, Ting; Wang, Tianfu; Jiang, Wei; Lei, Baiying
    COMPUTERS IN BIOLOGY AND MEDICINE, 2022, 150
  • [34] Multi-Level Cross-Modal Interactive-Network-Based Semi-Supervised Multi-Modal Ship Classification
    Song, Xin; Chen, Zhikui; Zhong, Fangming; Gao, Jing; Zhang, Jianning; Li, Peng
    SENSORS, 2024, 24 (22)
  • [35] A multi-modal fusion YoLo network for traffic detection
    Zheng, Xinwang; Zheng, Wenjie; Xu, Chujie
    COMPUTATIONAL INTELLIGENCE, 2024, 40 (02)
  • [36] 3DMGNet: 3D Model Generation Network Based on Multi-Modal Data Constraints and Multi-Level Feature Fusion
    Wang, Ende; Xue, Lei; Li, Yong; Zhang, Zhenxin; Hou, Xukui
    SENSORS, 2020, 20 (17): 1-16
  • [37] A lightweight decision-level fusion model for pig disease identification using multi-modal data
    Li, Haopu; Li, Bugao; Li, Haoming; Chen, Min; Song, Yanbo; Liu, Zhenyu
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2025, 231
  • [38] FUSION OF MULTI-MODAL NEUROIMAGING DATA AND ASSOCIATION WITH COGNITIVE DATA
    LoPresto, Mark D.; Akhonda, M. A. B. S.; Calhoun, Vince D.; Adali, Tülay
    2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW, 2023
  • [39] Sex-Specific Heterogeneity of Mild Cognitive Impairment Identified Based on Multi-Modal Data Analysis
    Katabathula, Sreevani; Davis, Pamela B.; Xu, Rong
    JOURNAL OF ALZHEIMERS DISEASE, 2023, 91 (01): 233-243
  • [40] Multi-level perception fusion dehazing network
    Wu, Xiaohua; Li, Zenglu; Guo, Xiaoyu; Xiang, Songyang; Zhang, Yao
    PLOS ONE, 2023, 18 (10)