Unravelling sleep patterns: Supervised contrastive learning with self-attention for sleep stage classification

Cited by: 0
Authors
Kumar, Chandra Bhushan [1 ]
Mondal, Arnab Kumar [2 ]
Bhatia, Manvir [3 ]
Panigrahi, Bijaya Ketan [4 ]
Gandhi, Tapan Kumar [4 ]
Affiliations
[1] Indian Inst Technol, Bharti Sch Telecommun & Management, New Delhi, India
[2] Indian Inst Technol, Sch Informat Technol, New Delhi, India
[3] Neurol & Sleep Ctr, New Delhi, India
[4] Indian Inst Technol, Dept Elect Engn, New Delhi, India
Keywords
Automatic sleep stage classification; Supervised contrastive learning; Feature representation learning; Deep learning for sleep stage classification; Self-attention mechanism
DOI
10.1016/j.asoc.2024.112298
CLC Classification
TP18 [Theory of artificial intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Sleep scoring, i.e., identifying sleep stages from polysomnography (PSG) signals, is the crucial first step in diagnosing sleep disorders. This study uses supervised contrastive learning with a self-attention mechanism to classify sleep stages. We propose a deep learning framework for automatic sleep stage classification that involves two training phases: (1) a feature representation learning phase, in which the feature representation network (encoder) learns to extract features from electroencephalogram (EEG) signals, and (2) a classification network training phase, in which the pre-trained encoder (from phase I) together with a classifier head is fine-tuned for the classification task. The PSG data show a non-uniform distribution of sleep stages, with the wake (W) stage (around 30% of samples) and the N2 stage (around 58% and 37% of samples in the PhysioNet EDF-Sleep 2013 and 2018 datasets, respectively) being most prevalent, yielding imbalanced datasets. The imbalance is addressed with a weighted softmax cross-entropy loss that assigns higher weights to minority sleep stages; in addition, an oversampling technique, the synthetic minority oversampling technique (SMOTE) (Chawla et al., 2002) [1], is applied to generate synthetic samples for the minority classes. The proposed model is evaluated on the PhysioNet EDF-Sleep 2013 and 2018 datasets using the Fpz-Cz and Pz-Oz EEG channels. It achieves an overall accuracy of 94.1%, a macro F1 score of 92.64, and a Cohen's kappa coefficient of 0.92. Ablation studies demonstrate the importance of triplet loss-based pre-training and of oversampling for enhancing performance. The proposed model requires minimal pre-processing, eliminating the need for extensive signal-processing expertise, and is thus well suited to clinicians diagnosing sleep disorders.
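The abstract describes a two-phase recipe: triplet-loss pre-training of a self-attention encoder on EEG epochs, followed by fine-tuning of the encoder plus a classifier head with a class-weighted softmax cross-entropy loss. The PyTorch sketch below illustrates that recipe under stated assumptions; the encoder architecture, the 30-second epochs at 100 Hz, the class weights, and the optimizer settings are illustrative guesses, not the authors' released implementation.

# Minimal sketch of the two-phase training described in the abstract.
# All module shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """1-D CNN feature extractor followed by a self-attention block."""
    def __init__(self, emb_dim=128, n_heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=50, stride=6), nn.ReLU(),
            nn.Conv1d(64, emb_dim, kernel_size=8, stride=2), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(emb_dim, n_heads, batch_first=True)

    def forward(self, x):                  # x: (batch, 1, samples)
        h = self.conv(x).transpose(1, 2)   # (batch, time, emb_dim)
        h, _ = self.attn(h, h, h)          # self-attention over time steps
        return h.mean(dim=1)               # pooled embedding (batch, emb_dim)

encoder = EEGEncoder()
classifier = nn.Linear(128, 5)             # 5 sleep stages: W, N1, N2, N3, REM

# Phase I: representation learning with a triplet loss.
# Anchor and positive share a sleep stage; the negative comes from another stage.
triplet_loss = nn.TripletMarginLoss(margin=1.0)
opt1 = torch.optim.Adam(encoder.parameters(), lr=1e-3)
anchor, positive, negative = (torch.randn(32, 1, 3000) for _ in range(3))  # 30-s epochs at 100 Hz (assumed)
loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
opt1.zero_grad(); loss.backward(); opt1.step()

# Phase II: fine-tune encoder + classifier head with class-weighted cross-entropy.
# Weights are roughly inverse to class frequency (illustrative values only).
class_weights = torch.tensor([0.5, 2.0, 0.3, 1.0, 1.0])
ce_loss = nn.CrossEntropyLoss(weight=class_weights)
opt2 = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
x, y = torch.randn(32, 1, 3000), torch.randint(0, 5, (32,))
loss = ce_loss(classifier(encoder(x)), y)
opt2.zero_grad(); loss.backward(); opt2.step()

In the paper's pipeline, SMOTE-style oversampling of minority stages would additionally be applied to the training data before phase II; it is omitted above to keep the sketch self-contained.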
Pages: 15
Related Papers (50 in total)
  • [21] Contrastive self-supervised learning for neurodegenerative disorder classification
    Gryshchuk, Vadym
    Singh, Devesh
    Teipel, Stefan
    Dyrba, Martin
    ADNI Study Grp
    AIBL Study Grp
    FTLDNI Study Grp
    FRONTIERS IN NEUROINFORMATICS, 2025, 19
  • [22] Kernel Self-Attention for Weakly-supervised Image Classification using Deep Multiple Instance Learning
    Rymarczyk, Dawid
    Borowa, Adriana
    Tabor, Jacek
    Zielinski, Bartosz
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021), 2021, : 1720 - 1729
  • [23] Metric Learning for Automatic Sleep Stage Classification
    Huy Phan
    Quan Do
    The-Luan Do
    Duc-Lung Vu
    2013 35TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2013, : 5025 - 5028
  • [24] UIESC: An Underwater Image Enhancement Framework via Self-Attention and Contrastive Learning
    Chen, Renzhang
    Cai, Zhanchuan
    Yuan, Jieyu
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (12) : 11701 - 11711
  • [25] Deep Learning Method for Sleep Stage Classification
    Cen, Ling
    Yu, Zhu Liang
    Tang, Yun
    Shi, Wen
    Kluge, Tilmann
    Ser, Wee
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT II, 2017, 10635 : 796 - 802
  • [26] Application of Machine Learning to Sleep Stage Classification
    Smith, Andrew
    Anand, Hardik
    Milosavljevic, Snezana
    Rentschler, Katherine M.
    Pocivavsek, Ana
    Valafar, Homayoun
    2021 INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND COMPUTATIONAL INTELLIGENCE (CSCI 2021), 2021, : 349 - 354
  • [27] Attention as Relation: Learning Supervised Multi-head Self-Attention for Relation Extraction
    Liu, Jie
    Chen, Shaowei
    Wang, Bingquan
    Zhang, Jiaxin
    Li, Na
    Xu, Tong
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 3787 - 3793
  • [28] StAGN: Spatial-Temporal Adaptive Graph Network via Contrastive Learning for Sleep Stage Classification
    Chen, Junyang
    Dai, Yidan
    Chen, Xianhui
    Shen, Yingshan
Luximon, Yan
    Wang, Hailiang
    He, Yuxin
    Ma, Wenjun
    Fan, Xiaomao
    PROCEEDINGS OF THE 2023 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM, 2023, : 199 - 207
  • [29] Supervised Contrastive Learning for Product Classification
    Azizi, Sahel
    Fang, Uno
    Adibi, Sasan
    Li, Jianxin
    ADVANCED DATA MINING AND APPLICATIONS, ADMA 2021, PT II, 2022, 13088 : 341 - 355
  • [30] A Thangka cultural element classification model based on self-supervised contrastive learning and MS Triplet Attention
    Tang, Wenjing
    Xie, Qing
    VISUAL COMPUTER, 2024, 40 (06): : 3919 - 3935