Ensembling neural networks: Many could be better than all

Cited by: 1396
Authors
Zhou, ZH [1]
Wu, JX [1]
Tang, W [1]
Affiliation
[1] Nanjing Univ, Natl Lab Novel Software Technol, Nanjing 210093, Peoples R China
Keywords
neural networks; neural network ensemble; machine learning; selective ensemble; boosting; bagging; genetic algorithm; bias-variance decomposition
DOI
10.1016/S0004-3702(02)00190-X
CLC number
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
A neural network ensemble is a learning paradigm in which several neural networks are jointly used to solve a problem. In this paper, the relationship between an ensemble and its component neural networks is analyzed in the context of both regression and classification, which reveals that it may be better to ensemble many instead of all of the neural networks at hand. This result is interesting because, at present, most approaches ensemble all of the available neural networks for prediction. Then, to show that the appropriate neural networks for composing an ensemble can be effectively selected from a set of available neural networks, an approach named GASEN is presented. GASEN first trains a number of neural networks. It then assigns random weights to those networks and employs a genetic algorithm to evolve the weights so that they characterize, to some extent, the fitness of the neural networks for constituting an ensemble. Finally, it selects some of the neural networks, based on the evolved weights, to make up the ensemble. A large empirical study shows that, compared with popular ensemble approaches such as Bagging and Boosting, GASEN can generate neural network ensembles with far smaller sizes but stronger generalization ability. Furthermore, to explain the working mechanism of GASEN, a bias-variance decomposition of the error is provided, which shows that the success of GASEN may lie in its ability to significantly reduce both the bias and the variance. (C) 2002 Elsevier Science B.V. All rights reserved.
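The "many could be better than all" claim can be made concrete for the regression case. The following is a brief reconstruction under the setup the abstract names (a simple-average ensemble and correlation terms C_{ij} between component errors); it is a sketch of the argument, not the paper's full derivation. Writing the error correlation of networks f_i and f_j against the target d as

    C_{ij} = \int p(x)\,\bigl(f_i(x) - d(x)\bigr)\bigl(f_j(x) - d(x)\bigr)\,dx,

the generalization errors of the full N-network ensemble and of the ensemble with the k-th network removed are

    \hat{E} = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} C_{ij},
    \qquad
    \hat{E}' = \frac{1}{(N-1)^2} \sum_{i \neq k} \sum_{j \neq k} C_{ij}.

Since \sum_{i,j} C_{ij} = \sum_{i \neq k} \sum_{j \neq k} C_{ij} + 2 \sum_{i \neq k} C_{ik} + C_{kk}, the condition \hat{E}' \leq \hat{E} rearranges to

    (2N - 1) \sum_{i \neq k} \sum_{j \neq k} C_{ij} \;\leq\; (N-1)^2 \Bigl( 2 \sum_{i \neq k} C_{ik} + C_{kk} \Bigr),

which can hold when the k-th network is individually poor (large C_{kk}) or strongly correlated with the rest (large C_{ik}): excluding such a network lowers the ensemble error, so many can indeed be better than all.

The GASEN procedure described in the abstract (train networks, evolve per-network weights with a genetic algorithm, select by the evolved weights) can likewise be sketched in code. The Python/NumPy example below is a minimal illustration, not the paper's implementation: the fitness function (negative validation MSE of the weighted ensemble), the toy truncation-selection GA, the 1/N selection threshold, and all names and hyperparameters (gasen_select, pop_size, mut_sigma, ...) are assumptions made for the sketch.

import numpy as np

rng = np.random.default_rng(0)

def gasen_select(val_preds, val_targets, pop_size=50, n_gens=100,
                 mut_sigma=0.05, threshold=None):
    """Evolve per-network weights with a toy GA; keep the networks whose
    evolved (normalized) weight exceeds `threshold` (default 1/N).

    val_preds   : (N, M) array, predictions of N networks on M samples.
    val_targets : (M,) array of targets.
    """
    n_nets = val_preds.shape[0]
    if threshold is None:
        threshold = 1.0 / n_nets

    def fitness(w):
        # Higher fitness = lower validation MSE of the weighted ensemble.
        w = w / w.sum()
        return -np.mean((w @ val_preds - val_targets) ** 2)

    # Random initial population of positive weight vectors.
    pop = rng.random((pop_size, n_nets)) + 1e-9
    for _ in range(n_gens):
        scores = np.array([fitness(w) for w in pop])
        # Truncation selection: keep the better half ...
        keep = pop[np.argsort(scores)[-(pop_size // 2):]]
        # ... and refill by uniform crossover plus Gaussian mutation.
        parents = keep[rng.integers(len(keep), size=(pop_size - len(keep), 2))]
        mask = rng.random((len(parents), n_nets)) < 0.5
        children = np.where(mask, parents[:, 0], parents[:, 1])
        children = np.clip(children + rng.normal(0.0, mut_sigma, children.shape),
                           1e-9, None)
        pop = np.vstack([keep, children])

    best = max(pop, key=fitness)
    best = best / best.sum()
    selected = np.flatnonzero(best > threshold)
    return selected, best

# Toy usage: 20 "networks" = noisy, individually biased predictors of sin(x).
x = np.linspace(0.0, 3.0, 200)
y = np.sin(x)
preds = np.stack([y + rng.normal(0.0, 0.3, x.size) + rng.normal(0.0, 0.2)
                  for _ in range(20)])
subset, weights = gasen_select(preds, y)
print(f"selected {subset.size} of 20 networks")
final = preds[subset].mean(axis=0)  # selected networks combined by simple average

Note that in this sketch the evolved weights are used only to decide membership; the final prediction averages the selected networks equally, which matches the selective-ensemble idea the abstract describes.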
Pages: 239-263
Page count: 25
Related papers
50 in total (items [21]-[30] shown)
  • [21] Are many robots better than one?
    Gini, M
    INTELLIGENT AUTONOMOUS SYSTEMS 6, 2000, : 1091 - 1092
  • [22] Classification of 2-dimensional array patterns: Assembling many small neural networks is better than using a large one
    Chen, Liang
    Xue, Wei
    Tokuda, Naoyuki
    NEURAL NETWORKS, 2010, 23 (06) : 770 - 781
  • [23] Localizing discriminative regions for fine-grained visual recognition: One could be better than many
    Fang, Fen
    Liu, Yun
    Xu, Qianli
    NEUROCOMPUTING, 2024, 610
  • [24] Is part better than all
    MILLER, G
    INSTITUTIONAL INVESTOR, 1984, 18 (02): 143 - 145
  • [25] Ensembling Convolutional Neural Networks for Perceptual Image Quality Assessment
    Ahmed, Nisar
    Asif, Hafiz Muhammad Shahzad
    2019 13TH INTERNATIONAL CONFERENCE ON MATHEMATICS, ACTUARIAL SCIENCE, COMPUTER SCIENCE AND STATISTICS (MACS-13), 2019
  • [26] Ensembling Neural Networks for Digital Pathology Images Classification and Segmentation
    Pimkin, Artem
    Makarchuk, Gleb
    Kondratenko, Vladimir
    Pisov, Maxim
    Krivov, Egor
    Belyaev, Mikhail
    IMAGE ANALYSIS AND RECOGNITION (ICIAR 2018), 2018, 10882 : 877 - 886
  • [27] Biomimetic membranes could be better than RO
    [Anonymous]
    FILTRATION & SEPARATION, 2008: 6 - 6
  • [28] What could be better than indulging in TIGG?
    Yamagata, T
    TRENDS IN GLYCOSCIENCE AND GLYCOTECHNOLOGY, 1999, 11 (59) : 159 - 178
  • [29] Two walls could be better than one
    [Anonymous]
    NANOTECHNOLOGY, 2005, 16 (04)
  • [30] PBIL ensemble: Many better than one
    Zhou, Shude
    Sun, Zengqi
    2005 ICSC CONGRESS ON COMPUTATIONAL INTELLIGENCE METHODS AND APPLICATIONS (CIMA 2005), 2005, : 258 - 263