Output Layer Structure Optimization for Weighted Regularized Extreme Learning Machine Based on Binary Method

Cited: 1
Authors:
Yang, Sibo [1 ]
Wang, Shusheng [1 ]
Sun, Lanyin [2 ]
Luo, Zhongxuan [3 ]
Bao, Yuan [4 ]
Affiliations:
[1] Dalian Maritime Univ, Sch Sci, Dalian 116026, Peoples R China
[2] Xinyang Normal Univ, Sch Math & Stat, Xinyang 464000, Peoples R China
[3] Dalian Univ Technol, Sch Software, Dalian 116620, Peoples R China
[4] Dalian Maritime Univ, Sch Informat Sci & Technol, Dalian 116026, Peoples R China
Source:
SYMMETRY-BASEL | 2023, Vol. 15, Issue 01
Funding:
National Natural Science Foundation of China;
Keywords:
weighted regularized extreme learning machine (WRELM); multi-class classification problems; binary method; output nodes; hidden-output weights; PENROSE GENERALIZED INVERSE; NEURAL-NETWORK; MULTICLASS CLASSIFICATION; REGRESSION; FILTER; ERROR; MODEL;
DOI:
10.3390/sym15010244
Chinese Library Classification:
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Code:
07; 0710; 09;
Abstract:
In this paper, we focus on redesigning the output layer of the weighted regularized extreme learning machine (WRELM). For multi-class classification problems, the conventional output layer setting, known as the "one-hot method", is as follows: let the number of sample classes be r; then the output layer has r nodes, and the ideal output of the s-th class is the s-th unit vector in R^r (1 <= s <= r). In this article, we propose a "binary method" to optimize the output layer structure: let 2^(p-1) < r <= 2^p with p >= 2; then only p output nodes are used, and the ideal outputs are encoded as binary numbers. We apply the binary method to WRELM. In general neural networks, the weights are updated by iterative calculation, which is the most expensive step of training. In the extreme learning machine, by contrast, the output weight matrix is computed by the least squares method; that is, the coefficient matrix of the linear system to be solved is symmetric. WRELM follows the same idea, and the main part of the weight-solving process involves a symmetric matrix. Compared with the one-hot method, the binary method requires fewer output layer nodes, especially when the number of sample categories is large, so some memory space is saved when storing data. In addition, the number of weights connecting the hidden layer and the output layer is also greatly reduced, which directly reduces the computation time for training the network. Numerical experiments show that, compared with the one-hot method, the binary method reduces the number of output nodes and hidden-output weights without damaging the learning precision.
Pages: 15