Functional MRI Representation Learning via Self-supervised Transformer for Automated Brain Disorder Analysis

Cited: 0
Authors
Wang, Qianqian [1 ]
Qiao, Lishan [1 ]
Liu, Mingxia [2 ]
Affiliations
[1] Liaocheng Univ, Sch Math Sci, Liaocheng 252000, Shandong, Peoples R China
[2] Univ North Carolina Chapel Hill, Dept Radiol & BRIC, Chapel Hill, NC 27599 USA
Funding
National Natural Science Foundation of China;
Keywords
Major depressive disorder; fMRI; Transformer; CLASSIFICATION; DEPRESSION;
DOI
10.1007/978-3-031-21014-3_1
CLC Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Major depressive disorder (MDD) is a prevalent mental health disorder whose neuropathophysiology remains unclear. Resting-state functional magnetic resonance imaging (rs-fMRI) has been used to capture abnormal or dysfunctional functional connectivity for automated MDD detection. The functional connectivity network (FCN) derived from each subject's rs-fMRI data can be modeled as a graph consisting of nodes and edges. Graph neural networks (GNNs) play an important role in learning representations of such graph-structured data for brain disorder analysis by gradually updating and aggregating node features. However, a single GNN layer captures only the local graph structure around each node, while stacking multiple GNN layers usually leads to the over-smoothing problem. To this end, we propose a transformer-based functional MRI representation learning (TRL) framework that encodes the global spatial information of FCNs for MDD diagnosis. Experimental results on 282 MDD patients and 251 healthy control (HC) subjects demonstrate that our method outperforms several competing methods in MDD identification based on rs-fMRI data. Moreover, based on the learned fully connected graphs, we can detect discriminative functional connectivities in MDD vs. HC classification, providing potential fMRI biomarkers for MDD analysis.
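As a rough illustration of the two steps the abstract describes — building a fully connected FCN from ROI time series, then letting every node attend to every other node so that global (rather than only local) structure is captured — the following sketch uses random data and a single self-attention head. All dimensions, weight matrices, and the single-head design are illustrative assumptions, not the authors' TRL implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rois, n_timepoints = 116, 200          # e.g., ROI count of a typical brain atlas
bold = rng.standard_normal((n_rois, n_timepoints))  # ROI-averaged BOLD signals

# Step 1: functional connectivity network (FCN) as Pearson correlation
# between ROI time series -- a fully connected weighted graph.
fcn = np.corrcoef(bold)                  # shape (n_rois, n_rois)

# Step 2: one self-attention head over ROI nodes; each row of the FCN
# serves as that node's feature vector. Unlike a single GNN layer, every
# node aggregates information from ALL other nodes in one step.
d = n_rois
Wq, Wk, Wv = (rng.standard_normal((d, d)) * d ** -0.5 for _ in range(3))
Q, K, V = fcn @ Wq, fcn @ Wk, fcn @ Wv
scores = Q @ K.T / np.sqrt(d)            # pairwise node-to-node attention logits
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)  # row-wise softmax
node_embed = attn @ V                    # globally contextualized node embeddings
```

A classification head (e.g., pooling the node embeddings and applying a linear layer) would sit on top for MDD vs. HC prediction; that part is omitted here.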
Pages: 1-10 (10 pages)
Related Papers (50 total)
  • [1] Self-Supervised Time Series Representation Learning via Cross Reconstruction Transformer
    Zhang, Wenrui
    Yang, Ling
    Geng, Shijia
    Hong, Shenda
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 16129 - 16138
  • [3] Hierarchically Self-supervised Transformer for Human Skeleton Representation Learning
    Chen, Yuxiao
    Zhao, Long
    Yuan, Jianbo
    Tian, Yu
    Xia, Zhaoyang
    Geng, Shijie
    Han, Ligong
    Metaxas, Dimitris N.
    COMPUTER VISION, ECCV 2022, PT XXVI, 2022, 13686 : 185 - 202
  • [4] TERA: Self-Supervised Learning of Transformer Encoder Representation for Speech
    Liu, Andy T.
    Li, Shang-Wen
    Lee, Hung-yi
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2021, 29 : 2351 - 2366
  • [5] Self-supervised graph representation learning via bootstrapping
    Che, Feihu
    Yang, Guohua
    Zhang, Dawei
    Tao, Jianhua
    Liu, Tong
    NEUROCOMPUTING, 2021, 456 : 88 - 96
  • [6] Self-Supervised Contrastive Learning for Automated Segmentation of Brain Tumor MRI Images in Schizophrenia
    Meng, Lingmiao
    Zhao, Liwei
    Yi, Xin
    Yu, Qingming
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)
  • [7] Self-supervised Learning with Adaptive Graph Structure and Function Representation for Cross-Dataset Brain Disorder Diagnosis
    Chen, Dongdong
    Yao, Linlin
    Liu, Mengjun
    Shen, Zhenrong
    Hu, Yuqi
    Song, Zhiyun
    Wang, Qian
    Zhang, Lichi
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT XI, 2024, 15011 : 612 - 622
  • [8] Self-supervised graph contrastive learning with diffusion augmentation for functional MRI analysis and brain disorder detection
    Wang, Xiaochuan
    Fang, Yuqi
    Wang, Qianqian
    Yap, Pew-Thian
    Zhu, Hongtu
    Liu, Mingxia
    MEDICAL IMAGE ANALYSIS, 2025, 101
  • [9] Multiple prior representation learning for self-supervised monocular depth estimation via hybrid transformer
    Sun, Guodong
    Liu, Junjie
    Liu, Mingxuan
    Liu, Moyun
    Zhang, Yang
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 135
  • [10] Dropout Regularization for Self-Supervised Learning of Transformer Encoder Speech Representation
    Luo, Jian
    Wang, Jianzong
    Cheng, Ning
    Xiao, Jing
    INTERSPEECH 2021, 2021, : 1169 - 1173