Self-supervised Health Representation Decomposition based on contrastive learning

Cited by: 12
|
Authors
Wang, Yilin [1 ]
Shen, Lei [2 ]
Zhang, Yuxuan [1 ]
Li, Yuanxiang [1 ,3 ]
Zhang, Ruixin [2 ]
Yang, Yongshen [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Aeronaut & Astronaut, Shanghai, Peoples R China
[2] Tencent, YouTu Lab, Shanghai, Peoples R China
[3] Shanghai Jiao Tong Univ, Sch Aeronaut & Astronaut, Shanghai 200240, Peoples R China
Keywords
Prognostics and Health Management; Self-supervised learning; Representation learning; Remaining Useful Life Prediction; Fault Diagnosis; USEFUL LIFE PREDICTION; METHODOLOGY;
DOI
10.1016/j.ress.2023.109455
Chinese Library Classification (CLC)
T [Industrial Technology];
Subject Classification Code
08;
Abstract
Accurately predicting the Remaining Useful Life (RUL) of equipment and performing fault diagnosis (FD) in Prognostics and Health Management (PHM) applications require effective feature engineering. However, the large amount of time-series data now available in industry is often unlabeled and contaminated by variable working conditions and noise, making it challenging for traditional feature engineering methods to extract meaningful system-state representations from raw data. To address this issue, this paper presents a Self-supervised Health Representation Decomposition Learning (SHRDL) framework based on contrastive learning. To extract effective representations from raw data under variable working conditions and noise, SHRDL incorporates an Attention-based Decomposition Network (ADN) as its encoder. During the contrastive learning process, we incorporate cycle information as a priori knowledge and define a new loss function, the Cycle Information Modified Contrastive loss (CIMCL), which helps the model focus more on the contrast between hard samples. We evaluated SHRDL on three popular PHM datasets (the N-CMAPSS engine dataset and the NASA and CALCE battery datasets) and found that it significantly improved RUL prediction and FD performance. Experimental results demonstrate that SHRDL can learn health representations from unlabeled data under variable working conditions and is robust to noise interference.
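The abstract describes CIMCL only at a high level (a contrastive loss that uses cycle information to emphasize hard samples) and does not give its formula. Purely as an illustrative sketch, the following PyTorch snippet shows one way a cycle-weighted, InfoNCE-style contrastive loss could look; the function name, the exponential weighting scheme, and the temperature value are assumptions, not the published CIMCL.

import torch
import torch.nn.functional as F

def cycle_weighted_contrastive_loss(z_anchor, z_positive, cycles, temperature=0.1):
    # Illustrative assumption, not the published CIMCL.
    # z_anchor, z_positive: (N, D) embeddings of two views of the same sensor windows.
    # cycles: (N,) operating-cycle index of each window.
    z_a = F.normalize(z_anchor, dim=1)
    z_p = F.normalize(z_positive, dim=1)

    # Cosine-similarity logits; for row i, the positive is column i.
    logits = z_a @ z_p.t() / temperature
    labels = torch.arange(z_a.size(0), device=z_a.device)

    # Assumed hard-sample weighting: negatives from nearby cycles are harder to
    # separate, so they keep weights near 1, while distant cycles are down-weighted.
    cycle_gap = (cycles.unsqueeze(0) - cycles.unsqueeze(1)).abs().float()
    weights = torch.exp(-cycle_gap / (cycle_gap.mean() + 1e-8))
    weights.fill_diagonal_(1.0)  # leave positive pairs unscaled

    return F.cross_entropy(logits + torch.log(weights + 1e-8), labels)

For example, cycle_weighted_contrastive_loss(torch.randn(32, 128), torch.randn(32, 128), torch.randint(0, 300, (32,))) returns a scalar loss for a batch of 32 windows with 128-dimensional embeddings.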
Pages: 12
Related Papers
50 records in total
  • [41] Self-supervised Consensus Representation Learning for Attributed Graph
    Liu, Changshu
    Wen, Liangjian
    Kang, Zhao
    Luo, Guangchun
    Tian, Ling
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2654 - 2662
  • [42] TRIBYOL: TRIPLET BYOL FOR SELF-SUPERVISED REPRESENTATION LEARNING
    Li, Guang
    Togo, Ren
    Ogawa, Takahiro
    Haseyama, Miki
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3458 - 3462
  • [43] Self-Supervised Fair Representation Learning without Demographics
    Chai, Junyi
    Wang, Xiaoqian
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [44] Understanding Representation Learnability of Nonlinear Self-Supervised Learning
    Yang, Ruofeng
    Li, Xiangyuan
    Jiang, Bo
    Li, Shuai
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023, : 10807 - 10815
  • [45] Randomly shuffled convolution for self-supervised representation learning
    Oh, Youngjin
    Jeon, Minkyu
    Ko, Dohwan
    Kim, Hyunwoo J.
    INFORMATION SCIENCES, 2023, 623 : 206 - 219
  • [46] Self-supervised representation learning for SAR change detection
    Davis, Eric K.
    Houglund, Ian
    Franz, Douglas
    Allen, Michael
    ALGORITHMS FOR SYNTHETIC APERTURE RADAR IMAGERY XXX, 2023, 12520
  • [47] Heuristic Attention Representation Learning for Self-Supervised Pretraining
    Van Nhiem Tran
    Liu, Shen-Hsuan
    Li, Yung-Hui
    Wang, Jia-Ching
    SENSORS, 2022, 22 (14)
  • [48] Self-Supervised Learning With Segmental Masking for Speech Representation
    Yue, Xianghu
    Lin, Jingru
    Gutierrez, Fabian Ritter
    Li, Haizhou
IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2022, 16 (06) : 1367 - 1379
  • [49] Self-supervised representation learning for surgical activity recognition
    Paysan, Daniel
    Haug, Luis
    Bajka, Michael
    Oelhafen, Markus
    Buhmann, Joachim M.
    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, 2021, 16 (11) : 2037 - 2044
  • [50] AtmoDist: Self-supervised representation learning for atmospheric dynamics
    Hoffmann, Sebastian
    Lessig, Christian
    ENVIRONMENTAL DATA SCIENCE, 2023, 2