Self-supervised Health Representation Decomposition based on contrast learning

Cited: 12
Authors
Wang, Yilin [1 ]
Shen, Lei [2 ]
Zhang, Yuxuan [1 ]
Li, Yuanxiang [1 ,3 ]
Zhang, Ruixin [2 ]
Yang, Yongshen [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Aeronaut & Astronaut, Shanghai, Peoples R China
[2] Tencent, YouTu Lab, Shanghai, Peoples R China
[3] Shanghai Jiao Tong Univ, Sch Aeronaut & Astronaut, Shanghai 200240, Peoples R China
Keywords
Prognostics and Health Management; Self-supervised learning; Representation learning; Remaining Useful Life Prediction; Fault Diagnosis; USEFUL LIFE PREDICTION; METHODOLOGY;
DOI
10.1016/j.ress.2023.109455
CLC Number
T [Industrial Technology];
Discipline Code
08;
Abstract
Accurately predicting the Remaining Useful Life (RUL) of equipment and diagnosing faults (FD) in Prognostics and Health Management (PHM) applications requires effective feature engineering. However, the large amount of time-series data now available in industry is often unlabeled and contaminated by variable working conditions and noise, making it challenging for traditional feature engineering methods to extract meaningful system-state representations from raw data. To address this issue, this paper presents a Self-supervised Health Representation Decomposition Learning (SHRDL) framework based on contrastive learning. To extract effective representations from raw data with variable working conditions and noise, SHRDL incorporates an Attention-based Decomposition Network (ADN) as its encoder. During contrastive learning, we incorporate cycle information as a prior and define a new loss function, the Cycle Information Modified Contrastive loss (CIMCL), which helps the model focus more on the contrast between hard samples. We evaluated SHRDL on three popular PHM datasets (the N-CMAPSS engine dataset and the NASA and CALCE battery datasets) and found that it significantly improved RUL prediction and FD performance. Experimental results demonstrate that SHRDL can learn health representations from unlabeled data under variable working conditions and is robust to noise interference.
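The abstract describes a contrastive objective in which cycle information serves as a prior that re-weights the loss toward hard sample pairs. The paper's exact CIMCL formulation is not reproduced here; the following is only a minimal sketch of a generic InfoNCE-style contrastive loss with an optional per-pair weight matrix standing in for the cycle-derived emphasis (the `weights` argument and its construction are assumptions for illustration).

```python
import numpy as np

def weighted_contrastive_loss(z_anchor, z_positive, weights=None, temperature=0.1):
    """InfoNCE-style contrastive loss with optional per-pair weights.

    z_anchor, z_positive: (N, D) embedding arrays; row i of each forms a
    positive pair, and all other rows act as negatives.
    weights: optional (N, N) array up-weighting selected (hard) pairs --
    a stand-in for CIMCL's cycle-information prior (hypothetical here).
    """
    # L2-normalize so the dot product equals cosine similarity
    za = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
    zp = z_positive / np.linalg.norm(z_positive, axis=1, keepdims=True)

    sim = za @ zp.T / temperature            # (N, N) similarity logits
    if weights is not None:
        sim = sim * weights                  # emphasize hard pairs

    # cross-entropy against the diagonal (matching-pair) targets
    sim = sim - sim.max(axis=1, keepdims=True)               # stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

With well-separated embeddings (e.g. orthogonal unit vectors) the loss approaches zero, while overlapping anchors and negatives drive it up, which is the behavior any hard-sample re-weighting scheme would exploit.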
Pages: 12
Related Papers
50 items total
  • [1] Dense Semantic Contrast for Self-Supervised Visual Representation Learning
    Li, Xiaoni
    Zhou, Yu
    Zhang, Yifei
    Zhang, Aoting
    Wang, Wei
    Jiang, Ning
    Wu, Haiying
    Wang, Weiping
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 1368 - 1376
  • [2] Generative Subgraph Contrast for Self-Supervised Graph Representation Learning
    Han, Yuehui
    Hui, Le
    Jiang, Haobo
    Qian, Jianjun
    Xie, Jin
    COMPUTER VISION - ECCV 2022, PT XXX, 2022, 13690 : 91 - 107
  • [3] Dense lead contrast for self-supervised representation learning of multilead electrocardiograms
    Liu, Wenhan
    Li, Zhoutong
    Zhang, Huaicheng
    Chang, Sheng
    Wang, Hao
    He, Jin
    Huang, Qijun
    INFORMATION SCIENCES, 2023, 634 : 189 - 205
  • [4] Whitening for Self-Supervised Representation Learning
    Ermolov, Aleksandr
    Siarohin, Aliaksandr
    Sangineto, Enver
    Sebe, Nicu
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [5] Self-Supervised Representation Learning for CAD
    Jones, Benjamin T.
    Hu, Michael
    Kodnongbua, Milin
    Kim, Vladimir G.
    Schulz, Adriana
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 21327 - 21336
  • [6] Generation-based Multi-view Contrast for Self-supervised Graph Representation Learning
    Han, Yuehui
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (05)
  • [7] Contrast-Reconstruction Representation Learning for Self-Supervised Skeleton-Based Action Recognition
    Wang, Peng
    Wen, Jun
    Si, Chenyang
    Qian, Yuntao
    Wang, Liang
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 6224 - 6238
  • [8] CLEAR: Cluster-Enhanced Contrast for Self-Supervised Graph Representation Learning
    Luo, Xiao
    Ju, Wei
    Qu, Meng
    Gu, Yiyang
    Chen, Chong
    Deng, Minghua
    Hua, Xian-Sheng
    Zhang, Ming
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (01) : 899 - 912
  • [9] Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning
    Jiao, Yizhu
    Xiong, Yun
    Zhang, Jiawei
    Zhang, Yao
    Zhang, Tianqi
    Zhu, Yangyong
    20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020), 2020, : 222 - 231
  • [10] Self-Supervised Dynamic Graph Representation Learning via Temporal Subgraph Contrast
    Chen, Ke-Jia
    Liu, Linsong
    Jiang, Linpu
    Chen, Jingqiang
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (01)