Deep learning based on self-supervised pre-training: Application on sandstone content prediction

Cited by: 0
Authors
Wang, Chong Ming [1 ]
Wang, Xing Jian [2 ]
Chen, Yang [1 ]
Wen, Xue Mei [1 ]
Zhang, Yong Heng [1 ]
Li, Qing Wu [1 ]
Affiliations
[1] Chengdu Univ Technol, Coll Geophys, Chengdu, Peoples R China
[2] Chengdu Univ Technol, State Key Lab Oil & Gas Reservoir Geol & Exploitat, Chengdu, Sichuan, Peoples R China
Keywords
RNN (recurrent neural network); self-supervised; pre-train; seismic signal; sandstone content
DOI
10.3389/feart.2022.1081998
Chinese Library Classification
P [Astronomy, Earth Sciences];
Discipline Code
07;
Abstract
Deep learning has been widely used across many fields in recent years and has shown considerable promise; it is therefore a natural route toward intelligent, automatic interpretation of seismic data. However, traditional supervised deep learning trains only on labeled data and thus leaves large amounts of unlabeled data unused. Self-supervised learning, widely used in natural language processing (NLP) and computer vision, is an effective way to learn from unlabeled data. We therefore design a pretext task, modeled on Masked Autoencoders (MAE), to pre-train on unlabeled seismic data in a self-supervised manner; after pre-training, the model is fine-tuned on the downstream task. Experiments show that the pretext task enables the model to extract useful information from unlabeled data, and that the pre-trained model performs better on downstream tasks.
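The MAE-style pretext task the abstract describes (corrupt an unlabeled trace by masking, train a model to reconstruct the hidden samples, then keep the encoder for fine-tuning) can be sketched roughly as follows. This is a minimal illustrative stand-in, not the paper's method: the paper uses an RNN on real seismic data, while the sketch below uses a tiny linear encoder/decoder trained with plain NumPy SGD on synthetic traces; all names here (`make_trace`, `mask_trace`, `pretext_step`) and the hyperparameters are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 64, 16                       # trace length, latent size (illustrative)

def make_trace():
    """Synthetic stand-in for a seismic trace: a sum of random sinusoids."""
    t = np.linspace(0.0, 1.0, N)
    freqs = rng.uniform(2.0, 8.0, size=3)
    amps = rng.normal(size=3)
    return sum(a * np.sin(2.0 * np.pi * f * t) for a, f in zip(amps, freqs))

def mask_trace(trace, mask_ratio=0.5):
    """MAE-style corruption: hide a random fraction of the samples."""
    hidden = rng.random(N) < mask_ratio
    x = trace.copy()
    x[hidden] = 0.0
    return x, hidden

# Tiny linear encoder/decoder (the paper uses an RNN; this is only a sketch).
W_enc = rng.normal(0.0, 0.1, (N, D))
W_dec = rng.normal(0.0, 0.1, (D, N))

def pretext_step(trace, lr=0.1):
    """One SGD step on the masked-reconstruction loss; returns the loss."""
    global W_enc, W_dec
    x, hidden = mask_trace(trace)
    z = x @ W_enc                   # encode the visible samples
    x_hat = z @ W_dec               # reconstruct the full trace
    err = (x_hat - trace) * hidden  # loss counts only the hidden samples
    loss = float(np.mean(err ** 2))
    g_out = 2.0 * err / N           # dL/dx_hat
    g_z = W_dec @ g_out             # backprop to the latent code
    W_dec -= lr * np.outer(z, g_out)
    W_enc -= lr * np.outer(x, g_z)
    return loss

# Self-supervised pre-training on a small pool of unlabeled traces.
traces = [make_trace() for _ in range(4)]
losses = [pretext_step(traces[i % len(traces)]) for i in range(600)]
```

After pre-training, the encoder weights (`W_enc` here) would be retained and fine-tuned on the labeled sandstone-content prediction task, which is the downstream step the abstract refers to.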
Pages: 7