Neural Networks with Marginalized Corrupted Hidden Layer

Citations: 1
Authors
Li, Yanjun [1 ]
Xin, Xin [1 ]
Guo, Ping [1 ,2 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
[2] Beijing Normal Univ, Image Proc & Pattern Recognit Lab, Beijing 100875, Peoples R China
Keywords
Neural network; Overfitting; Classification; Representations
DOI
10.1007/978-3-319-26555-1_57
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Overfitting is an important problem in neural network (NN) training. When the number of samples in the training set is limited, explicitly extending the training set with artificially generated samples is an effective solution; however, this approach incurs high computational costs. In this paper we propose a new learning scheme that trains single-hidden-layer feedforward neural networks (SLFNs) with an implicitly extended training set. The training set is extended by corrupting the hidden-layer outputs of the training samples with noise drawn from an exponential-family distribution. As the number of corruptions approaches infinity, the explicitly generated samples in the objective function can be expressed as an expectation. Our method, called marginalized corrupted hidden layer (MCHL), trains SLFNs by minimizing the expected value of the loss function under the corrupting distribution; in this way MCHL is effectively trained on infinitely many samples. Experimental results on multiple data sets show that MCHL can be trained efficiently and generalizes better to test data.
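To illustrate the marginalization idea from the abstract, the sketch below uses one specific instance that admits a closed form: Gaussian corruption of the hidden-layer outputs combined with a squared loss. Under these assumptions the expected loss over infinitely many corrupted copies reduces to ridge regression on the hidden-layer activations, so no corrupted samples are ever generated explicitly. The random input weights (ELM-style), the tanh activation, and the noise variance are illustrative assumptions, not details taken from the paper, which considers exponential-family corrupting distributions more generally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N samples, d features, +/-1 targets (assumed setup).
N, d, n_hidden = 200, 5, 50
X = rng.normal(size=(N, d))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=N))

# SLFN hidden layer with fixed random input weights (an assumption;
# only the output weights are solved for here).
W_in = rng.normal(size=(d, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b)  # hidden-layer outputs, shape (N, n_hidden)

# Marginalized Gaussian corruption with squared loss:
#   E_eps ||y - (H + eps) w||^2 = ||y - H w||^2 + N * sigma^2 * ||w||^2
# for eps rows drawn i.i.d. from N(0, sigma^2 I), so minimizing the
# expectation is exactly ridge regression on H.
sigma2 = 0.1
w = np.linalg.solve(H.T @ H + N * sigma2 * np.eye(n_hidden), H.T @ y)

train_acc = np.mean(np.sign(H @ w) == y)
```

The key point is that the regularizer `N * sigma2 * ||w||^2` is the marginalized effect of the infinite corrupted training set; other exponential-family noise models yield different (not always closed-form) penalty terms.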
Pages: 506-514
Page count: 9