Comparing error minimized extreme learning machines and support vector sequential feed-forward neural networks

Cited by: 16
Authors
Romero, Enrique [1]
Alquézar, René [2]
Affiliations
[1] Univ Politecn Cataluna, Dept Llenguatges & Sistemes Informat, Barcelona, Spain
[2] Univ Politecn Cataluna, CSIC, Inst Robot & Informat Ind, Barcelona, Spain
Keywords
Error minimized extreme learning machines; Support vector sequential feed-forward neural networks; Sequential approximations; Algorithm; Ensemble
DOI
10.1016/j.neunet.2011.08.005
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recently, error minimized extreme learning machines (EM-ELMs) have been proposed as a simple and efficient approach to build single-hidden-layer feed-forward networks (SLFNs) sequentially. They add random hidden nodes one by one (or group by group) and update the output weights incrementally to minimize the sum-of-squares error on the training set. Other, very similar methods that also construct SLFNs sequentially had been reported earlier, the main difference being that their hidden-layer weights are a subset of the data rather than random. These approaches are referred to as support vector sequential feed-forward neural networks (SV-SFNNs), and they are a particular case of the sequential approximation with optimal coefficients and interacting frequencies (SAOCIF) method. In this paper, it is first shown that EM-ELMs can also be cast as a particular case of SAOCIF. In particular, EM-ELMs can easily be extended to test a number of random candidates at each step and select the best of them, as SAOCIF does. Moreover, it is demonstrated that the computation of the optimal output-layer weights in the originally proposed EM-ELMs can be made cheaper by replacing it with the one used in SAOCIF. Second, we present the results of an experimental study on 10 benchmark classification and 10 benchmark regression data sets, comparing EM-ELMs and SV-SFNNs under the same conditions for the two models. Although both models have the same (efficient) computational cost, a statistically significant improvement in the generalization performance of SV-SFNNs over EM-ELMs was found in 12 of the 20 benchmark problems. (C) 2011 Elsevier Ltd. All rights reserved.
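To make the construction scheme concrete, the following is a minimal Python sketch of sequential SLFN building with best-of-k random candidate selection, in the spirit of EM-ELM cast as a particular case of SAOCIF. The function and variable names, the sigmoid activation, and the full least-squares solve at every step are illustrative assumptions, not the authors' exact formulation; EM-ELM and SAOCIF update the output weights incrementally rather than recomputing them from scratch, which is precisely the efficiency point the paper addresses.

```python
import numpy as np

def build_slfn(X, y, max_nodes=20, n_candidates=10, seed=0):
    """Sequentially grow a single-hidden-layer feed-forward network.

    At each step, n_candidates random hidden nodes are tried and the
    one that most reduces the training sum-of-squares error is kept
    (SAOCIF-style selection; n_candidates=1 mimics plain EM-ELM).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.empty((n, 0))   # hidden-layer output matrix, grown one column at a time
    hidden = []            # accepted (input weights, bias) pairs
    beta = np.zeros(0)     # output-layer weights

    for _ in range(max_nodes):
        best = None
        for _ in range(n_candidates):
            w = rng.standard_normal(d)   # random input weights of the candidate node
            b = rng.standard_normal()    # random bias
            h = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activations on the training set
            H_try = np.column_stack([H, h])
            # Optimal output weights for the enlarged network, via least squares;
            # solved from scratch here for clarity (EM-ELM/SAOCIF update incrementally).
            beta_try, *_ = np.linalg.lstsq(H_try, y, rcond=None)
            sse = float(np.sum((H_try @ beta_try - y) ** 2))
            if best is None or sse < best[0]:
                best = (sse, w, b, h, beta_try)
        sse, w, b, h, beta = best
        H = np.column_stack([H, h])
        hidden.append((w, b))
    return hidden, beta, sse

# Illustrative usage on a toy regression problem
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * X).ravel()
hidden, beta, train_sse = build_slfn(X, y)
print(f"final training SSE with {len(hidden)} hidden nodes: {train_sse:.4f}")
```

An SV-SFNN-style variant would draw each candidate's hidden-layer parameters from the training samples themselves (for instance, centering a kernel-like node on a training point) instead of sampling them at random; that substitution is the only structural difference between the two families compared in the paper.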
Pages: 122-129
Number of pages: 8