Self-supervised efficient sample weighting for multi-exit networks

Cited by: 0
Authors
Liu, Kai [1]
Moon, Seungbin [1 ]
Affiliations
[1] Sejong Univ, Dept Comp Engn, Seoul 05006, South Korea
Keywords
Sample weighting; Early-exit networks; Self-supervised; Efficient training
DOI
10.1016/j.knosys.2023.111003
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Dynamic sample weighting is an effective approach for improving the inference accuracy and efficiency of the individual classifiers in multi-exit networks. By carrying the early-exit behavior from testing over into training, such a method allows input samples to contribute differently at each sub-network, reducing the gap between training and inference. Recent methods mainly learn a weighting mechanism jointly with the multi-exit model through reinforcement-learning or meta-learning frameworks. However, these approaches usually depend on additional data, which increases computational cost and limits their general applicability. To address these two problems, we propose a Self-supervised Efficient Sample Weighting (SESW) method that predicts weights for input samples based on their losses at each exit. We pose the problem as a self-supervised multi-class classification task: the SESW module is trained to predict whether the log-likelihood of each sample's loss at the current training stage belongs to the most confident classifier. Our experiments demonstrate that the proposed method performs comparably to state-of-the-art methods in prediction accuracy and dynamic inference efficiency while significantly reducing training time. © 2023 Elsevier B.V. All rights reserved.
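
For concreteness, the core idea from the abstract can be sketched in a few lines of PyTorch. This is a minimal, hypothetical illustration only: a small module takes a sample's per-exit losses, is trained self-supervised to predict the most confident exit (approximated here by the lowest-loss exit), and its predicted distribution is reused as per-exit sample weights. The SESWModule name, the MLP architecture, the lowest-loss proxy target, and the softmax weighting are all assumptions for illustration, not the paper's exact formulation.

    # Hypothetical sketch of the SESW idea described above. All names, the
    # MLP architecture, the lowest-loss target, and the softmax weighting
    # are assumptions, not the published method.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SESWModule(nn.Module):
        """Maps a sample's per-exit losses to a distribution over exits."""
        def __init__(self, num_exits: int, hidden: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(num_exits, hidden),
                nn.ReLU(),
                nn.Linear(hidden, num_exits),
            )

        def forward(self, per_exit_losses: torch.Tensor) -> torch.Tensor:
            # per_exit_losses: (batch, num_exits)
            return self.net(per_exit_losses)

    def sesw_step(per_exit_losses, sesw):
        """One self-supervised step: the target class for each sample is
        the exit with the lowest loss, used here as a stand-in for the
        'most confident classifier'."""
        losses = per_exit_losses.detach()   # no backbone gradients here
        logits = sesw(losses)
        targets = losses.argmin(dim=1)      # self-supervised class labels
        sesw_loss = F.cross_entropy(logits, targets)
        # Reuse the predicted exit distribution as per-exit sample weights.
        weights = torch.softmax(logits, dim=1).detach()
        weighted_loss = (weights * per_exit_losses).sum(dim=1).mean()
        return weighted_loss, sesw_loss

    # Usage: per-exit losses for a batch of 8 samples in a 4-exit network.
    per_exit_losses = torch.rand(8, 4, requires_grad=True)
    sesw = SESWModule(num_exits=4)
    backbone_loss, sesw_loss = sesw_step(per_exit_losses, sesw)
    (backbone_loss + sesw_loss).backward()

Because the supervisory signal comes from the model's own per-exit losses rather than extra labeled data, a scheme of this shape avoids the additional-data requirement the abstract attributes to reinforcement-learning and meta-learning approaches.
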
Pages: 9
Related Papers (50 total)
  • [31] Self-Supervised Contrastive Learning In Spiking Neural Networks
    Bahariasl, Yeganeh
    Kheradpisheh, Saeed Reza
    PROCEEDINGS OF THE 13TH IRANIAN/3RD INTERNATIONAL MACHINE VISION AND IMAGE PROCESSING CONFERENCE, MVIP, 2024, : 181 - 185
  • [32] Self-supervised role learning for graph neural networks
    Sankar, Aravind
    Wang, Junting
    Krishnan, Adit
    Sundaram, Hari
    KNOWLEDGE AND INFORMATION SYSTEMS, 2022, 64 (08) : 2091 - 2121
  • [34] Reconfigurable Optical Datacom Networks by Self-supervised Learning
    Liu, Che-Yu
    Chen, Xiaoliang
    Proietti, Roberto
    Li, Zhaohui
    Ben Yoo, S. J.
    PROCEEDINGS OF THE 2021 ACM SIGCOMM 2021 WORKSHOP ON OPTICAL SYSTEMS (OPTSYS '21), 2021, : 23 - 27
  • [35] Self-supervised graph transformer networks for social recommendation
    Li, Qinyao
    Yang, Qimeng
    Tian, Shengwei
    Yu, Long
    COMPUTERS & ELECTRICAL ENGINEERING, 2025, 123
  • [37] Multi-student Collaborative Self-supervised Distillation
    Yang, Yinan
    Chen, Li
    Wu, Shaohui
    Sun, Zhuang
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, ICIC 2023, PT II, 2023, 14087 : 199 - 210
  • [38] Multi-task Self-Supervised Visual Learning
    Doersch, Carl
    Zisserman, Andrew
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 2070 - 2079
  • [39] Self-supervised learning for multi-view stereo
    Ito, S.
    Kaneko, N.
    Sumi, K.
    Seimitsu Kogaku Kaishi/Journal of the Japan Society for Precision Engineering, 2020, 86 (12) : 1042 - 1050
  • [40] Multi-behavior Self-supervised Learning for Recommendation
    Xu, Jingcao
    Wang, Chaokun
    Wu, Cheng
    Song, Yang
    Zheng, Kai
    Wang, Xiaowei
    Wang, Changping
    Zhou, Guorui
    Gai, Kun
    PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 496 - 505