GraphSleepFormer: a multi-modal graph neural network for sleep staging in OSA patients

Cited by: 0
Authors
Wang, Chen [1]
Jiang, Xiuquan [1]
Lv, Chengyan [1]
Meng, Qi [1]
Zhao, Pengcheng [1]
Yan, Di [1]
Feng, Chao [1]
Xu, Fangzhou [1]
Lu, Shanshan [2,3]
Jung, Tzyy-Ping [4,5]
Leng, Jiancai [1]
Affiliations
[1] Qilu Univ Technol, Shandong Acad Sci, Int Sch Optoelect Engn, 3501 Univ Rd, Jinan, Shandong, Peoples R China
[2] Shandong First Med Univ, Affiliated Hosp 1, Dept Neurol, Jinan, Peoples R China
[3] Shandong Prov Qianfoshan Hosp, Shandong Inst Neuroimmunol, Shandong Key Lab Rheumat Dis & Translat Med, Jinan, Peoples R China
[4] Univ Calif San Diego, Inst Neural Computat, San Diego, CA 92093 USA
[5] Univ Calif San Diego, Inst Engn Med, San Diego, CA 92093 USA
Funding
National Natural Science Foundation of China;
Keywords
polysomnography; sleep stage classification; graph convolutional network; obstructive sleep apnea; graphormer; K-FOLD; APNEA; CLASSIFICATION; MODEL; RNN;
DOI
10.1088/1741-2552/adb996
Chinese Library Classification (CLC)
R318 [Biomedical Engineering];
Discipline code
0831;
Abstract
Objective. Obstructive sleep apnea (OSA) is a prevalent sleep disorder. Accurate sleep staging is a prerequisite for studying sleep-related disorders and evaluating sleep quality. We introduce a novel GraphSleepFormer (GSF) network designed to effectively capture global dependencies and node characteristics in graph-structured data. Approach. The network incorporates centrality encoding and spatial encoding into its architecture. It adaptively learns adjacency matrices for spatial encoding between channels placed on the head, encoding graph-structure information to strengthen the model's representation of spatial relationships. Centrality encoding integrates the degree matrix into node features, assigning varying degrees of attention to different channels. Ablation experiments demonstrate the effectiveness of these encoding methods. The Shapley Additive Explanations (SHAP) method was employed to evaluate the contribution of each channel to sleep staging, highlighting the necessity of using multimodal data. Main results. We trained our model on overnight polysomnography data collected from 28 OSA patients in a clinical setting and achieved an overall accuracy of 80.10%. GSF achieved performance comparable to state-of-the-art methods on two subsets of the ISRUC database. Significance. GSF accurately identifies sleep stages, providing a critical basis for diagnosing and treating OSA and thereby contributing to advancements in sleep medicine.
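The two encodings named in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-based illustration of Graphormer-style centrality and spatial encoding, not the authors' implementation: the channel count, feature width, initialization, and the way the adaptive adjacency enters the attention logits are all hypothetical choices made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, d_model = 6, 8           # e.g. 6 PSG channels (EEG/EOG/EMG/ECG)

# Adaptive adjacency: a learnable matrix, made positive and symmetric,
# standing in for spatial encoding between head-mounted channels.
W = rng.normal(size=(n_channels, n_channels))
S = np.exp(W + W.T)                  # positive, symmetric edge weights
degree = S.sum(axis=1)               # weighted degree of each channel
A = S / degree[:, None]              # row-normalized adjacency (rows sum to 1)

# Centrality encoding: fold each node's degree into its features via a
# (here, randomly initialized) degree-embedding direction.
x = rng.normal(size=(n_channels, d_model))       # per-channel features
deg_embed = rng.normal(size=(1, d_model))        # degree embedding
x_centrality = x + degree[:, None] * deg_embed   # degree-aware node features

# Spatial encoding as an attention bias: adding A to the attention logits
# lets strongly connected channels attend to each other more.
scores = x_centrality @ x_centrality.T / np.sqrt(d_model) + A
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)          # row-wise softmax
out = attn @ x_centrality                        # one attention step

print(out.shape)                     # (6, 8): one updated vector per channel
```

In this sketch the adjacency plays two roles, matching the abstract's description: its degrees feed the centrality encoding, and the matrix itself biases attention as the spatial encoding.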
Pages: 16