An Improved Transformer Framework for Well-Overflow Early Detection via Self-Supervised Learning

Cited by: 3
Authors
Yi, Wan [1 ]
Liu, Wei [2 ]
Fu, Jiasheng [2 ]
He, Lili [1 ]
Han, Xiaosong [1 ]
Affiliations
[1] Jilin Univ, Coll Comp Sci & Technol, Key Lab Symbol Computat & Knowledge Engn, Natl Educ Minist, Changchun 130012, Peoples R China
[2] CNPC Engn Technol R&D Co Ltd, Natl Engn Res Ctr Oil & Gas Drilling & Complet Tec, Beijing 102206, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
oil drilling; overflow; time series prediction; abnormal detection; self-supervised learning; transformer; DRILLING OVERFLOW; PREDICTION; MODEL;
DOI
10.3390/en15238799
CLC Classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Subject Classification Code
0807; 0820;
Abstract
Oil drilling has always been considered a vital part of resource exploitation, and overflow is its most common and troublesome threat, as it may lead to a blowout, a catastrophic accident. Therefore, to prevent further damage, it is necessary to detect overflow as early as possible. However, the imbalanced distribution and scarcity of labeled data make it difficult to design a suitable solution. To address this issue, this paper proposes an improved Transformer framework based on self-supervised learning, which can accurately detect overflow 20 min in advance even when labeled data are limited and severely imbalanced. The framework includes a self-supervised pre-training scheme that focuses on long-term temporal dependence, offers performance benefits over fully supervised learning on downstream tasks, and makes unlabeled data useful during training. In addition, to better extract temporal features and adapt to the multi-task training process, a Transformer-based auto-encoder with a temporal convolution layer is proposed. In the experiments, 20 min of data were used to detect overflow in the following 20 min. The results show that the proposed framework reaches 98.23% accuracy and a 0.84 F1 score, substantially outperforming the other methods. Ablation experiments compare several modifications of the framework and different pre-training tasks to demonstrate the advantage of the proposed methods. Finally, the influence of important hyperparameters on efficiency and accuracy is discussed.
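The abstract describes the architecture only at a high level. As a rough, non-authoritative sketch, the PyTorch snippet below illustrates one plausible way a temporal-convolution embedding could feed a Transformer encoder with both a reconstruction head (for self-supervised pre-training on unlabeled windows) and a classification head (for overflow detection). The class name TemporalConvTransformerAE, all layer sizes, the 15% masking ratio, and the masked-reconstruction objective are assumptions for illustration, not the authors' exact design.

```python
# Hypothetical sketch of a Transformer auto-encoder with a temporal
# convolution embedding, loosely following the abstract's description.
# Layer sizes, masking ratio, and the pre-training objective are assumptions.
import torch
import torch.nn as nn

class TemporalConvTransformerAE(nn.Module):
    def __init__(self, n_channels=8, d_model=64, n_heads=4, n_layers=3):
        super().__init__()
        # Temporal convolution embeds each time step from a local window
        # of sensor readings (positional encoding omitted for brevity).
        self.embed = nn.Conv1d(n_channels, d_model, kernel_size=5, padding=2)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Reconstruction head for the self-supervised pre-training task.
        self.recon_head = nn.Linear(d_model, n_channels)
        # Classification head for the downstream overflow-detection task.
        self.cls_head = nn.Linear(d_model, 2)

    def forward(self, x):
        # x: (batch, time, channels); Conv1d expects (batch, channels, time).
        z = self.embed(x.transpose(1, 2)).transpose(1, 2)
        h = self.encoder(z)                    # (batch, time, d_model)
        recon = self.recon_head(h)             # per-step reconstruction
        logits = self.cls_head(h.mean(dim=1))  # pooled overflow logits
        return recon, logits

# Self-supervised pre-training: mask random time steps, reconstruct them.
model = TemporalConvTransformerAE()
x = torch.randn(16, 200, 8)            # 16 windows, 200 steps, 8 sensors
mask = torch.rand(16, 200, 1) < 0.15   # hide ~15% of time steps
recon, _ = model(x * (~mask))          # zero out masked steps at the input
pretrain_loss = ((recon - x) ** 2 * mask).mean()  # MSE on masked steps only
pretrain_loss.backward()
```

After pre-training on unlabeled windows, the same encoder weights would be fine-tuned through the classification head on the limited labeled data, which is how self-supervision typically mitigates label scarcity in this setting.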
Pages: 12
Related Papers
50 records in total
  • [1] Ensemble Learning with Feature Fusion for Well-Overflow Detection
    Cui, Ziliang
    Liu, Li
    Xiong, Yinzhou
    Liu, Yinguo
    Su, Yu
    Man, Zhimin
    Wang, Ye
    NEURAL COMPUTING FOR ADVANCED APPLICATIONS, NCAA 2024, PT III, 2025, 2183 : 75 - 89
  • [2] A Well-Overflow Prediction Algorithm Based on Semi-Supervised Learning
    Liu, Wei
    Fu, Jiasheng
    Liang, Yanchun
    Cao, Mengchen
    Han, Xiaosong
    ENERGIES, 2022, 15 (12)
  • [3] An Improved Self-Supervised Framework for Feature Point Detection
    Wu, Yunhui
    Li, Jun
    ELEVENTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2019), 2019, 11179
  • [4] Pavement anomaly detection based on transformer and self-supervised learning
    Lin, Zijie
    Wang, Hui
    Li, Shenglin
    AUTOMATION IN CONSTRUCTION, 2022, 143
  • [5] Self-Supervised Acoustic Word Embedding Learning via Correspondence Transformer Encoder
    Lin, Jingru
    Yue, Xianghu
    Ao, Junyi
    Li, Haizhou
    INTERSPEECH 2023, 2023, : 2988 - 2992
  • [6] TransDSSL: Transformer Based Depth Estimation via Self-Supervised Learning
    Han, Daechan
    Shin, Jeongmin
    Kim, Namil
    Hwang, Soonmin
    Choi, Yukyung
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04): : 10969 - 10976
  • [7] Self-Supervised Pretraining via Multimodality Images With Transformer for Change Detection
    Zhang, Yuxiang
    Zhao, Yang
    Dong, Yanni
    Du, Bo
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [8] A NOVEL CONTRASTIVE LEARNING FRAMEWORK FOR SELF-SUPERVISED ANOMALY DETECTION
    Li, Jingze
    Lian, Zhichao
    Li, Min
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 3366 - 3370
  • [9] Self-Supervised Graph Transformer for Deepfake Detection
    Khormali, Aminollah
    Yuan, Jiann-Shiun
    IEEE ACCESS, 2024, 12 : 58114 - 58127
  • [10] Improving Streaming Transformer Based ASR Under a Framework of Self-supervised Learning
    Cao, Songjun
    Kang, Yueteng
    Fu, Yanzhe
    Xu, Xiaoshuo
    Sun, Sining
    Zhang, Yike
    Ma, Long
    INTERSPEECH 2021, 2021, : 706 - 710