GraphSleepFormer: a multi-modal graph neural network for sleep staging in OSA patients

Cited by: 0
Authors:
Wang, Chen [1 ]
Jiang, Xiuquan [1 ]
Lv, Chengyan [1 ]
Meng, Qi [1 ]
Zhao, Pengcheng [1 ]
Yan, Di [1 ]
Feng, Chao [1 ]
Xu, Fangzhou [1 ]
Lu, Shanshan [2 ,3 ]
Jung, Tzyy-Ping [4 ,5 ]
Leng, Jiancai [1 ]
Affiliations:
[1] Qilu Univ Technol, Shandong Acad Sci, Int Sch Optoelect Engn, 3501 Univ Rd, Jinan, Shandong, Peoples R China
[2] Shandong First Med Univ, Affiliated Hosp 1, Dept Neurol, Jinan, Peoples R China
[3] Shandong Prov Qianfoshan Hosp, Shandong Inst Neuroimmunol, Shandong Key Lab Rheumat Dis & Translat Med, Jinan, Peoples R China
[4] Univ Calif San Diego, Inst Neural Computat, San Diego, CA 92093 USA
[5] Univ Calif San Diego, Inst Engn Med, San Diego, CA 92093 USA
Funding:
National Natural Science Foundation of China
Keywords:
polysomnography; sleep stage classification; graph convolutional network; obstructive sleep apnea; graphormer; K-FOLD; APNEA; CLASSIFICATION; MODEL; RNN;
DOI: 10.1088/1741-2552/adb996
Chinese Library Classification (CLC): R318 [Biomedical Engineering]
Subject classification code: 0831
Abstract:
Objective. Obstructive sleep apnea (OSA) is a prevalent sleep disorder. Accurate sleep staging is a prerequisite for studying sleep-related disorders and evaluating sleep quality. We introduce a novel GraphSleepFormer (GSF) network designed to effectively capture global dependencies and node characteristics in graph-structured data. Approach. The network incorporates centrality encoding and spatial encoding into its architecture. It adaptively learns adjacency matrices for spatial encoding between channels located on the head, thereby encoding graph-structure information to enhance the model's representation of spatial relationships. Centrality encoding integrates the degree matrix into node features, assigning varying degrees of attention to different channels. Ablation experiments demonstrate the effectiveness of these encoding methods. The Shapley Additive Explanations (SHAP) method was employed to evaluate the contribution of each channel to sleep staging, highlighting the necessity of using multimodal data. Main results. We trained our model on overnight polysomnography data collected from 28 OSA patients in a clinical setting and achieved an overall accuracy of 80.10%. GSF achieved performance comparable to state-of-the-art methods on two subsets of the ISRUC database. Significance. The GSF accurately identifies sleep stages, providing a critical basis for diagnosing and treating OSA and thereby contributing to advances in sleep medicine.
Pages: 16
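To make the abstract's description of the two encodings more concrete, below is a minimal, illustrative PyTorch sketch of a Graphormer-style attention layer that (i) adds a learnable degree embedding to each channel's features (centrality encoding) and (ii) adds a learnable channel-pair bias to the attention logits (spatial encoding via an adaptively learned adjacency). All names, shapes, and hyperparameters (GraphAttentionWithEncodings, n_channels, d_model, max_degree) are assumptions for illustration only and do not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn


class GraphAttentionWithEncodings(nn.Module):
    """Illustrative Graphormer-style block with centrality and spatial encodings."""

    def __init__(self, n_channels: int, d_model: int, n_heads: int = 4, max_degree: int = 16):
        super().__init__()
        # Centrality encoding: a learnable embedding per node degree, added to the
        # channel features so that better-connected channels receive more attention.
        self.degree_embed = nn.Embedding(max_degree + 1, d_model)
        # Spatial encoding: a learnable channel-pair bias acting as an adaptively
        # learned adjacency; it is added to the attention logits as a float mask.
        self.spatial_bias = nn.Parameter(torch.zeros(n_channels, n_channels))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor, degree: torch.Tensor) -> torch.Tensor:
        # x:      (batch, n_channels, d_model) per-channel features for one epoch
        # degree: (batch, n_channels) integer node degrees of the learned channel graph
        degree = degree.clamp(max=self.degree_embed.num_embeddings - 1)
        x = x + self.degree_embed(degree)                             # centrality encoding
        out, _ = self.attn(x, x, x, attn_mask=self.spatial_bias)      # additive spatial bias
        return out


# Toy usage: 8 epochs, 6 PSG channels (e.g. EEG/EOG/EMG), 64-dim features per channel.
layer = GraphAttentionWithEncodings(n_channels=6, d_model=64)
feats = torch.randn(8, 6, 64)
deg = torch.randint(0, 6, (8, 6))
print(layer(feats, deg).shape)  # torch.Size([8, 6, 64])
```

In this sketch the degree embedding plays the role the abstract assigns to centrality encoding (injecting the degree matrix into node features), while the float attention mask stands in for the adaptively learned adjacency used for spatial encoding; the published model may realize both differently.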