Tradeoff of generalization error in unsupervised learning

Cited: 0
Authors
Kim, Gilhan [1 ]
Lee, Hojun [1 ]
Jo, Junghyo [2 ]
Baek, Yongjoo [1 ]
Affiliations
[1] Seoul Natl Univ, Ctr Theoret Phys, Dept Phys & Astron, Seoul 08826, South Korea
[2] Seoul Natl Univ, Dept Phys Educ & Ctr Theoret Phys, Seoul 08826, South Korea
Funding
National Research Foundation, Singapore;
Keywords
machine learning; classical phase transitions; stochastic processes; MODEL;
DOI
10.1088/1742-5468/ace42c
Chinese Library Classification
O3 [Mechanics];
Discipline classification codes
08; 0801;
Abstract
Finding the optimal model complexity that minimizes the generalization error (GE) is a key issue of machine learning. In conventional supervised learning, this task typically involves the bias-variance tradeoff: lowering the bias by making the model more complex entails an increase in the variance. Meanwhile, little is known about whether the same tradeoff exists for unsupervised learning. In this study, we propose that unsupervised learning generally exhibits a two-component tradeoff of the GE, namely the model error (ME) and the data error (DE): using a more complex model reduces the ME at the cost of the DE, with the DE playing a more significant role for a smaller training dataset. This is corroborated by training a restricted Boltzmann machine to generate the configurations of the two-dimensional Ising model at a given temperature and of the totally asymmetric simple exclusion process with given entry and exit rates. Our results also indicate that the optimal model tends to be more complex when the data to be learned are more complex.
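The abstract's setup, training a restricted Boltzmann machine (RBM) on spin configurations, can be illustrated with a minimal sketch. The paper's actual data come from Monte Carlo sampling of the 2D Ising model and the TASEP; here random binary patterns stand in for them, and one step of contrastive divergence (CD-1) is used for training. All sizes and hyperparameters below are illustrative assumptions, not values from the paper; the number of hidden units is the knob that sets the model complexity discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary data standing in for Ising/TASEP configurations.
n_visible, n_hidden, n_samples = 16, 4, 200
data = rng.choice([0.0, 1.0], size=(n_samples, n_visible))

# RBM parameters: weights plus visible and hidden biases.
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b = np.zeros(n_visible)
c = np.zeros(n_hidden)

lr = 0.05
for epoch in range(50):
    # Positive phase: hidden activation probabilities given the data.
    ph = sigmoid(data @ W + c)
    # Negative phase: one Gibbs step (CD-1) from sampled hidden units.
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigmoid(h @ W.T + b)
    v = (rng.random(pv.shape) < pv).astype(float)
    ph_neg = sigmoid(v @ W + c)
    # Update parameters by the difference of data and model correlations.
    W += lr * (data.T @ ph - v.T @ ph_neg) / n_samples
    b += lr * (data - v).mean(axis=0)
    c += lr * (ph - ph_neg).mean(axis=0)
```

In the paper's framing, sweeping `n_hidden` while measuring the generalization error of the trained generator is what exposes the model-error/data-error tradeoff.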
Pages: 15
Related papers
50 items in total
  • [41] Model-Induced Generalization Error Bound for Information-Theoretic Representation Learning in Source-Data-Free Unsupervised Domain Adaptation
    Yang, Baoyao
    Yeh, Hao-Wei
    Harada, Tatsuya
    Yuen, Pong C.
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 419 - 432
  • [42] Towards Unsupervised Domain Generalization
    Zhang, Xingxuan
    Zhou, Linjun
    Xu, Renzhe
    Cui, Peng
    Shen, Zheyan
    Liu, Haoxin
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 4900 - 4910
  • [43] On the Generalization Ability of Unsupervised Pretraining
    Deng, Yuyang
    Hong, Junyuan
    Zhou, Jiayu
    Mahdavi, Mehrdad
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [44] On Challenges in Unsupervised Domain Generalization
    Narayanan, Vaasudev
    Deshmukh, Aniket Anand
    Dogan, Urun
    Balasubramanian, Vineeth N.
    NEURIPS 2021 WORKSHOP ON PRE-REGISTRATION IN MACHINE LEARNING, VOL 181, 2021, 181 : 42 - 58
  • [45] Compositional Generalization in Unsupervised Compositional Representation Learning: A Study on Disentanglement and Emergent Language
    Xu, Zhenlin
    Niethammer, Marc
    Raffel, Colin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [46] Generalization of feedback error learning (FEL) to strictly proper MIMO systems
    AlAli, Basel
    Hirata, Kentaro
    Sugimoto, Kenji
    2006 AMERICAN CONTROL CONFERENCE, VOLS 1-12, 2006, 1-12 : 356 - +
  • [47] Active learning using localized generalization error of candidate sample as criterion
    Chan, PPK
    Ng, WWY
    Yeung, DS
    INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS, VOL 1-4, PROCEEDINGS, 2005, : 3604 - 3609
  • [48] Information-Theoretic Bounds on the Moments of the Generalization Error of Learning Algorithms
    Aminian, Gholamali
    Toni, Laura
    Rodrigues, Miguel R. D.
    2021 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2021, : 682 - 687
  • [49] Fast generalization error bound of deep learning from a kernel perspective
    Suzuki, Taiji
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 84, 2018, 84
  • [50] Certifying the True Error: Machine Learning in Coq with Verified Generalization Guarantees
    Bagnall, Alexander
    Stewart, Gordon
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 2662 - 2669