Selfish herds optimization algorithm with orthogonal design and information update for training multi-layer perceptron neural network

Cited by: 17
Authors:
Zhao, Ruxin [1 ]
Wang, Yongli [1 ]
Hu, Peng [1 ]
Jelodar, Hamed [1 ]
Yuan, Chi [1 ]
Li, YanChao [1 ]
Masood, Isma [1 ]
Rabbani, Mandi [1 ]
Affiliations:
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Jiangsu, Peoples R China
Funding:
National Natural Science Foundation of China
Keywords:
Selfish herd optimization algorithm; Orthogonal design; Multi-layer perceptron (MLP) neural network; Information update; Meta-heuristic optimization algorithm; CLASSIFICATION; RECOGNITION; MODEL;
DOI:
10.1007/s10489-018-1373-1
CLC number:
TP18 [Artificial intelligence theory]
Subject classification codes:
081104; 0812; 0835; 1405
Abstract:
The selfish herd optimization algorithm is a recent meta-heuristic optimization algorithm that simulates the group behavior of herds attacked by predators in nature. Further study has shown that the algorithm fails to reach a good global optimal solution on some problems. To improve its optimization ability, this paper proposes a selfish herd optimization algorithm with orthogonal design and information update (OISHO). The orthogonal design method generates a more competitive candidate solution; if this candidate is better than the current global optimal solution, it replaces it. In addition, the population information of the algorithm is updated at the end of each iteration, which increases population diversity and lets the algorithm expand its search space to find better solutions. To verify its effectiveness, the proposed algorithm is applied to training multi-layer perceptron (MLP) neural networks, a task for which presenting a satisfactory and effective training algorithm remains challenging. Twenty different datasets from the UCI machine learning repository were chosen as training data, and the experimental results are compared with SSA, GG-GSA, GSO, GOA, WOA and SOS. The experiments show that, for training MLP neural networks, the proposed algorithm achieves better optimization accuracy, convergence speed and stability than the compared algorithms.
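The abstract's two improvements can be illustrated with a minimal sketch. Note this is an assumption-laden toy, not the paper's actual method: the true orthogonal design builds candidates from a systematic orthogonal array, which is simplified here to mixing dimensions of the global best with a random herd member, and the "information update" is simplified to re-randomizing the worst individual; the objective `sphere` and all bounds are placeholders.

```python
import numpy as np

def sphere(x):
    """Placeholder objective (minimization); the paper trains MLP weights instead."""
    return float(np.sum(x ** 2))

def orthogonal_candidate(a, b, rng):
    """Simplified stand-in for orthogonal design: per-dimension mix of two solutions."""
    mask = rng.random(a.shape) < 0.5
    return np.where(mask, a, b)

def oisho_step(pop, fitness, best, best_fit, f, rng):
    """One illustrative iteration: elitist replacement of the global best by a
    candidate, then a diversity-preserving population information update."""
    partner = pop[rng.integers(len(pop))]
    cand = orthogonal_candidate(best, partner, rng)
    cand_fit = f(cand)
    if cand_fit < best_fit:                 # replace only if strictly better
        best, best_fit = cand, cand_fit
    # Information update (simplified): re-sample the worst individual.
    worst = int(np.argmax(fitness))
    pop[worst] = rng.uniform(-5.0, 5.0, size=pop.shape[1])
    fitness[worst] = f(pop[worst])
    return pop, fitness, best, best_fit

rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(10, 4))
fitness = np.array([sphere(x) for x in pop])
i = int(np.argmin(fitness))
best, best_fit = pop[i].copy(), fitness[i]
for _ in range(50):
    pop, fitness, best, best_fit = oisho_step(pop, fitness, best, best_fit, sphere, rng)
print(best_fit)
```

Because the global best is replaced only when the candidate is strictly better, `best_fit` is monotone non-increasing over iterations, mirroring the elitist replacement described in the abstract.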
Pages: 2339-2381 (43 pages)