Fine-tuning of pre-processing filters enables scalp-EEG based training of subcutaneous EEG models

Cited: 0
Authors
Lechner, Lukas [1 ]
Helge, Asbjoern Wulff [2 ]
Ahrens, Esben [3 ]
Bachler, Martin [1 ]
Hametner, Bernhard [1 ]
Gritsch, Gerhard [1 ]
Kluge, Tilmann [1 ]
Hartmann, Manfred [1 ]
Affiliations
[1] AIT Austrian Inst Technol, Ctr Hlth & Bioresources, Vienna, Austria
[2] UNEEG Med AS, Epilepsy Sci, Allerod, Denmark
[3] T&W Engn, Data Sci, Allerod, Denmark
Keywords
deep learning; EEG; wearable devices; sleep scoring
DOI
10.1109/BSN58485.2023.10331106
Chinese Library Classification
TP39 [Computer Applications]
Discipline Code
081203; 0835
Abstract
The increasing availability of minimally invasive electroencephalogram (EEG) devices for ultra-long-term recordings has opened new possibilities for advanced EEG analysis, but the large volume of data these devices generate creates a strong need for automated computational analysis. Deep neural networks (DNNs) have been shown to be powerful for this purpose, but the lack of annotated data from these novel devices is a barrier to DNN training. We propose a novel technique based on fine-tuning of linear pre-processing filters, which compensates for variations in electrode positions and amplifier characteristics and enables models for subcutaneous EEG to be trained on widely available scalp EEG data. The effectiveness of the method is demonstrated on a state-of-the-art EEG-based sleep scoring model, where we show that the performance achieved on the training database can be retained on subcutaneous EEG by fine-tuning on data from only three subjects.
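The sketch below illustrates the general idea described in the abstract, not the authors' exact implementation: a small bank of learnable linear FIR pre-processing filters is prepended to a pretrained sleep-scoring network, the backbone is frozen, and only the filter coefficients are fine-tuned on a few subjects' subcutaneous recordings. All class, function, and parameter names (LearnableFIRFilter, build_adapted_model, n_taps, etc.) are illustrative assumptions.

```python
# Hedged sketch in PyTorch; not the paper's code.
import torch
import torch.nn as nn


class LearnableFIRFilter(nn.Module):
    """Per-channel linear FIR filter, implemented as a depthwise 1-D convolution."""

    def __init__(self, n_channels: int, n_taps: int = 65):
        super().__init__()
        self.conv = nn.Conv1d(
            n_channels, n_channels, kernel_size=n_taps,
            padding=n_taps // 2, groups=n_channels, bias=False,
        )
        # Initialise each filter as a delta impulse so that, before fine-tuning,
        # the layer passes the scalp-EEG pre-processing through unchanged.
        with torch.no_grad():
            self.conv.weight.zero_()
            self.conv.weight[:, :, n_taps // 2] = 1.0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples)
        return self.conv(x)


def build_adapted_model(backbone: nn.Module, n_channels: int) -> nn.Module:
    """Freeze the pretrained backbone and prepend trainable pre-processing filters."""
    for p in backbone.parameters():
        p.requires_grad = False
    return nn.Sequential(LearnableFIRFilter(n_channels), backbone)


# Usage sketch: `pretrained_sleep_net` stands for a sleep-scoring DNN trained on
# scalp EEG; fine-tuning data would come from a few subcutaneous-EEG subjects.
# model = build_adapted_model(pretrained_sleep_net, n_channels=2)
# optimizer = torch.optim.Adam(
#     [p for p in model.parameters() if p.requires_grad], lr=1e-3)
# A standard cross-entropy training loop then updates only the filter taps.
```

Because only the linear filter coefficients are trained, the adaptation has very few free parameters, which is consistent with the paper's claim that data from only three subjects suffices; the specific filter length and initialisation above are assumptions.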
Pages: 4