An improved butterfly optimization algorithm for training the feed-forward artificial neural networks

Cited by: 11
Authors
Irmak, Busra [1 ]
Karakoyun, Murat [1 ]
Gulcu, Saban [1 ]
Affiliations
[1] Necmettin Erbakan Univ, Dept Comp Engn, Konya, Turkey
Keywords
Artificial neural networks; Butterfly optimization algorithm; Chaos; Multilayer perceptron; Training artificial neural networks;
DOI
10.1007/s00500-022-07592-w
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The artificial neural network (ANN), an information processing technique developed by modeling the nervous system of the human brain, is one of the most powerful learning methods in use today. One of the factors behind an ANN's success is its training algorithm. In this paper, an improved butterfly optimization algorithm (IBOA), based on the butterfly optimization algorithm, is proposed for training feed-forward artificial neural networks. The IBOA incorporates chaotic maps, which help optimization algorithms explore the search space more dynamically and globally; ten chaotic maps were used in the experiments. The success of the IBOA was tested on 13 benchmark functions that are well known in global optimization and frequently used for testing and analyzing optimization algorithms. The Tent-mapped IBOA outperformed the other algorithms on most of the benchmark functions. Moreover, the IBOA-MLP algorithm was tested on five classification datasets (xor, balloon, iris, breast cancer, and heart) and compared with four algorithms from the literature. According to the statistical performance metrics (sensitivity, specificity, precision, F1-score, and the Friedman test), the IBOA-MLP outperformed the other algorithms and proved successful in training feed-forward artificial neural networks.
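The abstract describes two building blocks that can be sketched briefly: a chaotic map (the Tent map gave the best results in the paper) used in place of uniform random draws to drive the optimizer's moves, and a fitness function that lets a population-based optimizer train an MLP by searching its flattened weight vector. The following Python sketch is illustrative only: the exact Tent-map variant, the 2-2-1 network layout for the xor dataset, and all function names are assumptions, not the paper's implementation.

```python
import numpy as np

def tent_map(x: float) -> float:
    # One iteration of a skewed Tent map on (0, 1); this particular
    # variant (breakpoint at 0.7) is a common choice in chaos-based
    # metaheuristics and is an assumption here, not the paper's formula.
    return x / 0.7 if x < 0.7 else (1.0 - x) / 0.3

def chaotic_sequence(x0: float, n: int) -> np.ndarray:
    # Generate n chaotic values that an optimizer could use instead of
    # uniform random numbers to diversify its search.
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = tent_map(x)
        seq[i] = x
    return seq

def mlp_fitness(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                hidden: int = 2) -> float:
    # Mean-squared error of a small input-hidden-output MLP whose
    # parameters come from the optimizer's position vector.
    # Weight layout (w1, b1, w2, b2) is illustrative.
    n_in = X.shape[1]
    w1 = weights[: n_in * hidden].reshape(n_in, hidden)
    b1 = weights[n_in * hidden : n_in * hidden + hidden]
    off = n_in * hidden + hidden
    w2 = weights[off : off + hidden].reshape(hidden, 1)
    b2 = weights[off + hidden]
    h = np.tanh(X @ w1 + b1)               # hidden activations
    out = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # sigmoid output
    return float(np.mean((out.ravel() - y) ** 2))
```

For the 2-2-1 xor case the position vector has 2*2 + 2 + 2 + 1 = 9 components; the optimizer simply minimizes `mlp_fitness` over that 9-dimensional space, which is how metaheuristics such as the BOA are typically applied to ANN training.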
Pages: 3887-3905 (19 pages)
Related Papers
(50 records)
  • [41] Multiplication units in feed-forward neural networks and its training
    Li, DZ
    Hirasawa, K
    Hu, JL
    Murata, J
    ICONIP'02: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING: COMPUTATIONAL INTELLIGENCE FOR THE E-AGE, 2002, : 75 - 79
  • [42] Constructing and training feed-forward neural networks for pattern classification
    Jiang, XD
    Wah, AHKS
    PATTERN RECOGNITION, 2003, 36 (04) : 853 - 867
  • [43] Evolutionary approach to training feed-forward and recurrent neural networks
    Riley, Jeff
    Ciesielski, Victor B.
    International Conference on Knowledge-Based Intelligent Electronic Systems, Proceedings, KES, 1998, 3 : 596 - 602
  • [44] Training of large-scale feed-forward neural networks
    Seiffert, Udo
    2006 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORK PROCEEDINGS, VOLS 1-10, 2006, : 5324 - 5329
  • [45] Filtering Training Data When Training Feed-Forward Artificial Neural Network
    Moniz, Krishna
    Yuan, Yuyu
    TRUSTWORTHY COMPUTING AND SERVICES, 2014, 426 : 212 - 218
  • [46] Weight Optimization in Artificial Neural Network Training by Improved Monarch Butterfly Algorithm
    Bacanin, Nebojsa
    Bezdan, Timea
    Zivkovic, Miodrag
    Chhabra, Amit
    MOBILE COMPUTING AND SUSTAINABLE INFORMATICS, 2022, 68 : 397 - 409
  • [47] Yet another genetic algorithm for feed-forward neural networks
    Neruda, R
    NINTH IEEE INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 1997, : 375 - 380
  • [48] Dual gradient descent algorithm on two-layered feed-forward artificial neural networks
    Choi, Bumghi
    Lee, Ju-Hong
    Park, Tae-Su
    NEW TRENDS IN APPLIED ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2007, 4570 : 696 - +
  • [49] Single-Iteration Training Algorithm for Multi-Layer Feed-Forward Neural Networks
    J. Barhen
    R. Cogswell
    V. Protopopescu
    Neural Processing Letters, 2000, 11 : 113 - 129
  • [50] Single-iteration training algorithm for multi-layer feed-forward neural networks
    Barhen, J
    Cogswell, R
    Protopopescu, V
    NEURAL PROCESSING LETTERS, 2000, 11 (02) : 113 - 129