Deep learning based on self-supervised pre-training: Application on sandstone content prediction

Times cited: 0
Authors
Wang, Chong Ming [1 ]
Wang, Xing Jian [2 ]
Chen, Yang [1 ]
Wen, Xue Mei [1 ]
Zhang, Yong Heng [1 ]
Li, Qing Wu [1 ]
Affiliations
[1] Chengdu Univ Technol, Coll Geophys, Chengdu, Peoples R China
[2] Chengdu Univ Technol, State Key Lab Oil & Gas Reservoir Geol & Exploitat, Chengdu, Sichuan, Peoples R China
Keywords
RNN (recurrent neural network); self-supervised learning; pre-training; seismic signal; sandstone content
DOI
10.3389/feart.2022.1081998
CLC classification number
P [Astronomy; Earth Sciences]
Discipline classification code
07
Abstract
Deep learning has been widely adopted across many fields in recent years and has shown considerable promise, making it a natural route toward intelligent, automatic interpretation of seismic data. However, conventional supervised deep learning trains only on labeled data and therefore leaves large volumes of unlabeled data unused. Self-supervised learning, already widespread in Natural Language Processing (NLP) and computer vision, is an effective way to learn from unlabeled data. We therefore design a pretext task, inspired by Masked Autoencoders (MAE), to pre-train on unlabeled seismic data in a self-supervised manner, and then fine-tune the pre-trained model on the downstream task of sandstone content prediction. Experiments show that the pretext task lets the model extract useful information from unlabeled data, and that the pre-trained model performs better on the downstream task.
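The abstract describes the approach only at a high level. As a rough illustration of the kind of MAE-style pretext task it outlines, the sketch below masks random patches of a 1-D seismic trace and trains an RNN (GRU) encoder to reconstruct the masked patches. This is a minimal sketch under assumptions: the module names, patch length, mask ratio, and the GRU choice are all illustrative, not the authors' implementation.

# Minimal MAE-style pretext task on 1-D seismic traces (illustrative sketch).
import torch
import torch.nn as nn

class MaskedTracePretrainer(nn.Module):
    # Masks random patches of a trace and reconstructs them with a GRU
    # encoder, an MAE-inspired pretext task for unlabeled seismic data.
    def __init__(self, patch_len=16, hidden=128, mask_ratio=0.5):
        super().__init__()
        self.patch_len = patch_len
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(patch_len, hidden)      # patch -> token
        self.mask_token = nn.Parameter(torch.zeros(hidden))
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, patch_len)    # token -> patch

    def forward(self, traces):
        # traces: (batch, length), with length a multiple of patch_len
        b, n = traces.shape
        patches = traces.view(b, n // self.patch_len, self.patch_len)
        tokens = self.embed(patches)                   # (b, T, hidden)
        masked = torch.rand(b, tokens.size(1), device=traces.device) < self.mask_ratio
        tokens = torch.where(masked.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        encoded, _ = self.encoder(tokens)
        recon = self.decoder(encoded)                  # (b, T, patch_len)
        # As in MAE, the loss is computed only on the masked patches.
        return ((recon - patches) ** 2)[masked].mean()

# Pre-training loop; random tensors stand in for unlabeled field traces.
model = MaskedTracePretrainer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
unlabeled = torch.randn(32, 256)                       # 32 traces, 256 samples each
for step in range(100):
    loss = model(unlabeled)
    opt.zero_grad()
    loss.backward()
    opt.step()

For the fine-tuning stage the abstract mentions, the pre-trained encoder weights would be retained and a small regression head attached to predict sandstone content from the labeled subset.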
Pages: 7