Mixture of experts: a literature survey

Cited by: 237
Authors
Masoudnia, Saeed [1 ]
Ebrahimpour, Reza [2 ]
Affiliations
[1] Univ Tehran, Sch Math Stat & Comp Sci, Tehran, Iran
[2] Shahid Rajaee Teacher Training Univ, Dept Elect & Comp Engn, Brain & Intelligent Syst Res Lab, Tehran, Iran
Keywords
Classifier combining; Mixture of experts; Mixture of implicitly localised experts; Mixture of explicitly localised experts; INDEPENDENT FACE RECOGNITION; NETWORK STRUCTURE; ENSEMBLE METHODS; MACHINE; CLASSIFICATION; CLASSIFIERS; ALGORITHM; MODEL
DOI
10.1007/s10462-012-9338-y
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Mixture of experts (ME) is one of the most popular and interesting combining methods, with great potential to improve performance in machine learning. ME is based on the divide-and-conquer principle, in which the problem space is divided among a few neural network experts supervised by a gating network. Earlier works on ME developed different strategies for dividing the problem space between the experts. To survey and analyse these methods more clearly, we present a categorisation of the ME literature based on this difference. Various ME implementations are classified into two groups, according to the partitioning strategy used and to how and when the gating network is involved in the partitioning and combining procedures. In the first group, the conventional ME and its extensions stochastically partition the problem space into a number of subspaces using a specially designed error function, and the experts become specialised in these subspaces. In the second group, the problem space is explicitly partitioned by a clustering method before the experts' training process starts, and each expert is then assigned to one of these subspaces. Because the first group partitions the problem space implicitly, through a tacit competitive process between the experts, we call it the mixture of implicitly localised experts (MILE); the second group, which uses pre-specified clusters, is called the mixture of explicitly localised experts (MELE). The properties of the two groups are investigated and compared, and a discussion of the advantages and disadvantages of each shows that the two approaches have complementary features. Moreover, the ME method is compared with other popular combining methods, including boosting and negative correlation learning. As the investigated methods have complementary strengths and limitations, previous studies that attempted to combine their features in integrated approaches are reviewed, and some directions for future research are suggested.
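As a concrete illustration of the conventional (MILE-style) architecture described in the abstract, the following Python sketch trains two linear experts together with a softmax gating network by stochastic gradient descent on the classical ME error function E = -log sum_i g_i exp(-0.5 (y - o_i)^2), whose gradient makes the experts compete for each sample so that localisation emerges implicitly. This is a minimal sketch, not the authors' implementation; the toy data, number of experts, and learning rate are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy piecewise-linear regression: two regimes, ideally one per expert.
x = rng.uniform(-1.0, 1.0, size=500)
y = np.where(x < 0.0, -2.0 * x - 1.0, 3.0 * x + 0.5) + 0.05 * rng.standard_normal(500)

K = 2                                # number of experts
w = 0.1 * rng.standard_normal(K)     # expert slopes
b = np.zeros(K)                      # expert intercepts
v = 0.1 * rng.standard_normal(K)     # gating slopes
c = np.zeros(K)                      # gating intercepts
lr = 0.05

for epoch in range(300):
    for xi, yi in zip(x, y):
        o = w * xi + b                           # expert outputs, shape (K,)
        z = v * xi + c                           # gating logits
        g = np.exp(z - z.max()); g /= g.sum()    # softmax gate
        # Posterior responsibility of each expert for this sample,
        # derived from E = -log sum_i g_i exp(-0.5 (y - o_i)^2).
        h = g * np.exp(-0.5 * (yi - o) ** 2)
        h /= h.sum()
        # Gradient steps: each expert moves toward y in proportion to its
        # responsibility h; the gate moves so g tracks h. This competition
        # is what localises the experts implicitly.
        w += lr * h * (yi - o) * xi
        b += lr * h * (yi - o)
        v -= lr * (g - h) * xi
        c -= lr * (g - h)

def predict(xq):
    # Mixture prediction: gate-weighted sum of expert outputs.
    o = np.outer(xq, w) + b
    z = np.outer(xq, v) + c
    g = np.exp(z - z.max(axis=1, keepdims=True))
    g /= g.sum(axis=1, keepdims=True)
    return (g * o).sum(axis=1)

print("training MSE:", np.mean((predict(x) - y) ** 2))

Under the MELE strategy, by contrast, one would first partition the inputs explicitly (e.g. with k-means) before training, fit each expert only on its own cluster, and derive the gate from cluster membership rather than from joint competitive training.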
Pages: 275-293
Number of pages: 19
Related Papers
50 records in total
  • [1] Mixture of experts: a literature survey
    Saeed Masoudnia
    Reza Ebrahimpour
    Artificial Intelligence Review, 2014, 42: 275-293
  • [2] Mixture of vector experts
    Henderson, M
    Shawe-Taylor, J
    Zerovnik, J
    Algorithmic Learning Theory, 2005, 3734: 386-398
  • [3] Hierarchical Routing Mixture of Experts
    Zhao, Wenbo
    Gao, Yang
    Memon, Shahan Ali
    Raj, Bhiksha
    Singh, Rita
    2020 25th International Conference on Pattern Recognition (ICPR), 2021: 7900-7906
  • [4] Latent Mixture of Discriminative Experts
    Ozkan, Derya
    Morency, Louis-Philippe
    IEEE Transactions on Multimedia, 2013, 15 (02): 326-338
  • [5] Mixture of Experts with Genetic Algorithms
    Cleofas, Laura
    Maria Valdovinos, Rosa
    Juarez, C.
    Advances in Computational Intelligence, 2009, 61: 331-338
  • [6] Statistical mechanics of the mixture of experts
    Kang, KJ
    Oh, JH
    Advances in Neural Information Processing Systems 9: Proceedings of the 1996 Conference, 1997, 9: 183-189
  • [7] Mixture of feature specified experts
    Kheradpisheh, Saeed Reza
    Sharifizadeh, Fatemeh
    Nowzari-Dalini, Abbas
    Ganjtabesh, Mohammad
    Ebrahimpour, Reza
    Information Fusion, 2014, 20: 242-251
  • [8] Spatial Mixture-of-Experts
    Dryden, Nikoli
    Hoefler, Torsten
    Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022
  • [9] Twenty Years of Mixture of Experts
    Yuksel, Seniha Esen
    Wilson, Joseph N.
    Gader, Paul D.
    IEEE Transactions on Neural Networks and Learning Systems, 2012, 23 (08): 1177-1193
  • [10] Laplace mixture of linear experts
    Nguyen, Hien D.
    McLachlan, Geoffrey J.
    Computational Statistics & Data Analysis, 2016, 93: 177-191