Comparing error minimized extreme learning machines and support vector sequential feed-forward neural networks

Times cited: 16
Authors
Romero, Enrique [1 ]
Alquezar, Rene [2 ]
Affiliations
[1] Univ Politecn Cataluna, Dept Llenguatges & Sistemes Informat, Barcelona, Spain
[2] Univ Politecn Cataluna, CSIC, Inst Robot & Informat Ind, Barcelona, Spain
Keywords
Error minimized extreme learning machines; Support vector sequential feed-forward neural networks; Sequential approximations; Algorithm; Ensemble
DOI
10.1016/j.neunet.2011.08.005
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Recently, error minimized extreme learning machines (EM-ELMs) have been proposed as a simple and efficient approach to build single-hidden-layer feed-forward networks (SLFNs) sequentially. They add random hidden nodes one by one (or group by group) and update the output weights incrementally to minimize the sum-of-squares error on the training set. Other very similar methods that also construct SLFNs sequentially had been reported earlier, the main difference being that their hidden-layer weights are a subset of the data instead of being random. These approaches are referred to as support vector sequential feed-forward neural networks (SV-SFNNs), and they are a particular case of the sequential approximation with optimal coefficients and interacting frequencies (SAOCIF) method. In this paper, it is first shown that EM-ELMs can also be cast as a particular case of SAOCIF. In particular, EM-ELMs can easily be extended to test a number of random candidates at each step and select the best of them, as SAOCIF does. Moreover, it is shown that the cost of computing the optimal output-layer weights in the originally proposed EM-ELMs can be reduced if that computation is replaced by the one included in SAOCIF. Second, we present the results of an experimental study on 10 benchmark classification and 10 benchmark regression data sets, comparing EM-ELMs and SV-SFNNs under the same conditions for both models. Although the two models have the same (efficient) computational cost, a statistically significant improvement in the generalization performance of SV-SFNNs over EM-ELMs was found in 12 out of the 20 benchmark problems. (C) 2011 Elsevier Ltd. All rights reserved.
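To make the construction scheme described in the abstract concrete, a minimal sketch of a sequential SLFN builder in the spirit of EM-ELM and SV-SFNN is given below. It is illustrative only and is not the algorithm from the paper: the function names are invented, the output weights are recomputed from scratch with numpy.linalg.lstsq at every step (the actual EM-ELM updates them incrementally, which is the source of its efficiency), and the candidate-testing loop mimics SAOCIF's selection of the best of several candidates. The use_data_centers flag switches between EM-ELM-like random hidden weights and SV-SFNN-like hidden weights drawn from the training samples.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def build_slfn(X, y, max_nodes=20, n_candidates=5, use_data_centers=False, seed=None):
    """Grow a single-hidden-layer feed-forward network node by node.

    At each step, n_candidates hidden nodes are tried (random input weights
    for the EM-ELM-like variant, training samples as weights for the
    SV-SFNN-like variant) and the one giving the lowest training
    sum-of-squares error is kept.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W, b = [], []                       # parameters of the accepted hidden nodes
    best_beta, best_err = None, np.inf

    for _ in range(max_nodes):
        cand_best = None                # (err, w, bias, beta) of the best candidate
        for _ in range(n_candidates):
            if use_data_centers:        # SV-SFNN-like: hidden weights taken from the data
                w = X[rng.integers(n)].copy()
            else:                       # EM-ELM-like: random hidden weights
                w = rng.normal(size=d)
            bias = rng.normal()
            # Hidden-layer output matrix with the candidate node appended
            H = sigmoid(X @ np.column_stack(W + [w]) + np.array(b + [bias]))
            # Optimal output-layer weights for this hidden layer (least squares)
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)
            err = np.sum((H @ beta - y) ** 2)
            if cand_best is None or err < cand_best[0]:
                cand_best = (err, w, bias, beta)
        if cand_best[0] >= best_err:    # stop when the best candidate no longer helps
            break
        best_err, w, bias, best_beta = cand_best
        W.append(w)
        b.append(bias)

    return np.column_stack(W), np.array(b), best_beta

# Example usage on a toy regression problem:
# X = np.random.randn(200, 3); y = np.sin(X).sum(axis=1)
# W, b, beta = build_slfn(X, y, max_nodes=15, n_candidates=10)
# y_pred = sigmoid(X @ W + b) @ beta
```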
Pages: 122-129
Number of pages: 8