An evolutionary extreme learning machine algorithm for multi-cube unit single-layer neural networks

Cited: 0
Authors
Christou, Vasileios [1 ]
Tzallas, Alexandros T. [1 ]
Gogos, Christos [1 ]
Tsipouras, Markos G. [2 ]
Tsoumanis, Georgios [1 ]
Giannakeas, Nikolaos [1 ]
Affiliations
[1] Univ Ioannina, Dept Informat & Telecommun, Arta, Greece
[2] Univ Western Macedonia, Dept Elect & Comp Engn, Kozani, Greece
Keywords
Extreme learning machine; Higher-order neuron; Genetic algorithm; Multi-cube neuron; Single-layer neural network; OPTIMIZATION; PREDICTION; REGRESSION; MODELS;
DOI
10.1016/j.asoc.2025.112788
CLC Classification
TP18 [Artificial intelligence theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The low-order unit is the most commonly used neuron architecture; it multiplies each input by a corresponding weight. Such units are restricted to linearly separable problems, and overcoming this limitation requires more advanced units. Higher-order units such as the multi-cube neuron treat the input vector as a set of multi-dimensional hyper-cubes in which each cube site corresponds to a weight. Multi-cube unit (MCU) single-layer neural networks (SLNNs) can be trained with the extreme learning machine (ELM) algorithm, which trains very quickly because it does not use an iterative process like gradient-based methods. The training procedure begins by randomizing the hidden-layer weights and thresholds; the output-layer weights are then calculated analytically with the Moore-Penrose pseudo-inverse. Earlier work with MCU SLNNs showed significantly better generalization performance on classification and regression problems than traditional low-order neuron types. The proposed evolutionary higher-order ELM (EHO-ELM) algorithm uses a modified self-adaptive genetic algorithm (GA) to create an SLNN containing MCUs in its hidden and output layers. EHO-ELM automatically determines the optimal number and structure of cubic sub-units for each MCU of the network, and it automatically tunes the hidden-layer weights and thresholds to increase the constructed network's generalization ability. To the best of our knowledge, it is the first algorithm that optimizes the hidden weight vector and the sub-cube structure of MCUs simultaneously. The paper's experimental section compares EHO-ELM against 14 existing machine-learning methods on 25 datasets: ten gradient-based methods, the support vector machine (SVM), and three ELM-based methods. The experimental results show that the proposed method achieves the best generalization performance; the significance of these results was verified using the Wilcoxon signed-rank test.
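The two-step ELM procedure described in the abstract (randomize the hidden layer, then solve the output weights analytically via the Moore-Penrose pseudo-inverse) can be sketched in a few lines of NumPy. This is a minimal illustration of the standard ELM for a plain SLNN, not of the paper's multi-cube units or the EHO-ELM genetic algorithm; the function names, the tanh activation, and the uniform weight initialization are illustrative assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=None):
    """Standard ELM training sketch for a single-hidden-layer network.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Step 1: randomize hidden-layer weights and thresholds (no iteration).
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Step 2: hidden-layer output matrix H (tanh chosen for illustration).
    H = np.tanh(X @ W + b)
    # Step 3: output weights via the Moore-Penrose pseudo-inverse of H,
    # the least-squares solution beta = H^+ T.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass of the trained network."""
    return np.tanh(X @ W + b) @ beta
```

Because step 3 is a single linear-algebra solve rather than a gradient loop, training cost is dominated by one pseudo-inverse, which is the source of ELM's speed advantage noted in the abstract.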
Pages: 18
Related Papers (50 total)
  • [1] An Incremental Optimal Weight Learning Machine of Single-Layer Neural Networks
    Ke, Hai-Feng
    Lu, Cheng-Bo
    Li, Xiao-Bo
    Zhang, Gao-Yan
    Mei, Ying
    Shen, Xue-Wen
    SCIENTIFIC PROGRAMMING, 2018, 2018
  • [2] Evolutionary learning algorithm for multi-layer morphological neural networks
    He, Chunmei
    Information Technology Journal, 2013, 12 (04) : 852 - 856
  • [3] Broad Learning System: structural extensions on single-layer and multi-layer neural networks
    Liu, Zhulin
    Chen, C. L. Philip
    2017 INTERNATIONAL CONFERENCE ON SECURITY, PATTERN ANALYSIS, AND CYBERNETICS (SPAC), 2017, : 136 - 141
  • [4] An Extreme Learning Machine Based Pretraining Method for Multi-Layer Neural Networks
    Noinongyao, Pavit
    Watchareeruetai, Ukrit
    2018 JOINT 10TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS (SCIS) AND 19TH INTERNATIONAL SYMPOSIUM ON ADVANCED INTELLIGENT SYSTEMS (ISIS), 2018, : 608 - 613
  • [5] A FAST LEARNING ALGORITHM FOR MULTI-LAYER EXTREME LEARNING MACHINE
    Tang, Jiexiong
    Deng, Chenwei
    Huang, Guang-Bin
    Hou, Junhui
    2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2014, : 175 - 178
  • [6] A Hybrid Constructive Algorithm for Single-Layer Feedforward Networks Learning
    Wu, Xing
    Rozycki, Pawel
    Wilamowski, Bogdan M.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2015, 26 (08) : 1659 - 1668
  • [7] AN IMPROVEMENT OF EXTREME LEARNING MACHINE FOR COMPACT SINGLE-HIDDEN-LAYER FEEDFORWARD NEURAL NETWORKS
    Huynh, Hieu Trung
    Won, Yonggwan
    Kim, Jung-Ja
    INTERNATIONAL JOURNAL OF NEURAL SYSTEMS, 2008, 18 (05) : 433 - 441
  • [8] A new Jacobian matrix for optimal learning of single-layer neural networks
    Peng, Jian-Xun
    Li, Kang
    Irwin, George W.
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2008, 19 (01): : 119 - 129
  • [9] Supervised Learning of Single-Layer Spiking Neural Networks for Image Classification
    Ma, Qiang
    Lin, Xianghong
    Wang, Xiangwen
    2018 2ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE APPLICATIONS AND TECHNOLOGIES (AIAAT 2018), 2018, 435
  • [10] An Evolutionary Multi-Layer Extreme Learning Machine for Data Clustering Problems
    Wu, Xian
    Zhou, Tianfang
    Yi, Kaixiang
    Fei, Minrui
    Chen, Yayu
    Ding, JiaLan
    2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, : 1978 - 1983