Vortex search optimization algorithm for training of feed-forward neural network

Cited: 0
Authors
Tahir Sağ
Zainab Abdullah Jalil Jalil
Affiliations
[1] Selcuk University,Department of Computer Engineering
Keywords
FNN; Classification; Optimization; Training neural networks; Vortex search
DOI
Not available
Abstract
Training feed-forward neural networks (FNNs) is a challenging nonlinear task in supervised learning. Moreover, derivative-based learning methods are often inadequate for the training phase and incur high computational complexity due to the large number of weight values that must be tuned. In this study, the training of neural networks is treated as an optimization process, and the best values of the weights and biases in the FNN structure are determined by the Vortex Search (VS) algorithm. VS is a recently developed metaheuristic optimization method inspired by the vortex shape of stirred liquids. VS performs the training task by finding the optimal weights and biases, represented as a matrix. In this context, the proposed VS-based learning method for FNNs (VS-FNN) is used to analyze the effectiveness of the VS algorithm in FNN training for the first time in the literature. The proposed method is applied to six datasets: 3-bit XOR, Iris Classification, Wine Recognition, Wisconsin Breast Cancer, Pima Indians Diabetes, and Thyroid Disease. Its performance is analyzed by comparison with training methods based on Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Simulated Annealing (SA), Genetic Algorithm (GA), and Stochastic Gradient Descent (SGD) algorithms. The experimental results show that VS-FNN is generally superior or competitive, suggesting that it can serve as a capable tool for training neural networks.
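To make the abstract's formulation concrete, the sketch below (not the authors' implementation) encodes an FNN's weights and biases as a single flat vector and minimizes the network's mean-squared error with a simplified vortex-search-style loop: candidate solutions are drawn from a Gaussian around the current best, and the sampling radius shrinks each iteration. The geometric radius decay here is an assumed simplification; the original VS algorithm derives the radius from the inverse incomplete gamma function. The 2-bit XOR task and all layer sizes are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def unpack(vec, n_in, n_hid, n_out):
    """Slice a flat parameter vector into FNN weight matrices and bias vectors."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def forward(vec, X, dims):
    """One hidden layer (tanh) and a sigmoid output layer."""
    W1, b1, W2, b2 = unpack(vec, *dims)
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def mse(vec, X, y, dims):
    return float(np.mean((forward(vec, X, dims) - y) ** 2))

def vortex_search(fitness, dim, lo, hi, n_candidates=50, iters=300, seed=0):
    """Simplified VS loop: Gaussian sampling around the best-so-far center
    with a shrinking radius (geometric decay stands in for the original
    inverse-incomplete-gamma schedule)."""
    rng = np.random.default_rng(seed)
    mu = np.full(dim, (lo + hi) / 2.0)   # initial vortex center
    r = (hi - lo) / 2.0                  # initial vortex radius
    best, best_f = mu.copy(), fitness(mu)
    for _ in range(iters):
        cand = np.clip(rng.normal(mu, r, size=(n_candidates, dim)), lo, hi)
        fits = np.array([fitness(c) for c in cand])
        j = int(fits.argmin())
        if fits[j] < best_f:
            best, best_f = cand[j].copy(), fits[j]
        mu = best                        # center moves to the best solution
        r *= 0.97                        # shrink the vortex
    return best, best_f

# Tiny demo: 2-bit XOR with a 2-4-1 network (the paper's smallest task is 3-bit XOR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
dims = (2, 4, 1)
dim = 2 * 4 + 4 + 4 * 1 + 1              # total number of weights and biases
w, err = vortex_search(lambda v: mse(v, X, y, dims), dim, -5.0, 5.0)
print(err)
```

The single flat vector is what lets any population- or sampling-based metaheuristic (ABC, PSO, GA, SA, or VS) train the same network: only the `fitness` callable changes between methods.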
Pages: 1517-1544
Page count: 27