Training of Feed-Forward Neural Networks by Using Optimization Algorithms Based on Swarm-Intelligent for Maximum Power Point Tracking

Cited by: 3
Authors
Kaya, Ebubekir [1 ]
Kaya, Ceren Bastemur [2 ]
Bendes, Emre [1 ]
Atasever, Sema [1 ]
Ozturk, Basak [1 ]
Yazlik, Bilgin [1 ]
Affiliations
[1] Nevsehir Haci Bektas Veli Univ, Engn Architecture Fac, Dept Comp Engn, TR-50300 Nevsehir, Turkiye
[2] Nevsehir Haci Bektas Veli Univ, Nevsehir Vocat Sch, Dept Comp Technol, TR-50300 Nevsehir, Turkiye
Keywords
swarm intelligence; feed-forward neural network; maximum power point tracking; metaheuristic algorithm; META-HEURISTIC ALGORITHMS; ARTIFICIAL BEE COLONY; PHOTOVOLTAIC SYSTEM; MPPT ALGORITHM; DESIGN;
DOI
10.3390/biomimetics8050402
CLC number
T [Industrial Technology];
Subject classification number
08;
Abstract
Artificial neural networks are among the most widely used artificial intelligence techniques for maximum power point tracking (MPPT), and their success depends heavily on the training process. Metaheuristic algorithms are used extensively in the literature for neural network training, and swarm-intelligence-based optimization algorithms form an important group of them. In this study, feed-forward neural networks are trained for MPPT using 13 swarm-intelligence-based optimization algorithms: artificial bee colony, butterfly optimization, cuckoo search, chicken swarm optimization, dragonfly algorithm, firefly algorithm, grasshopper optimization algorithm, krill herd algorithm, particle swarm optimization, salp swarm algorithm, selfish herd optimizer, tunicate swarm algorithm, and tuna swarm optimization. Mean squared error is used as the error metric, and the performance of each algorithm is evaluated across different network structures. Based on these results, a success ranking score is computed for each algorithm. The three most successful algorithms in both the training and testing processes are the firefly algorithm, the selfish herd optimizer, and the grasshopper optimization algorithm, with training errors of 4.5 × 10^-4, 1.6 × 10^-3, and 2.3 × 10^-3 and test errors of 4.6 × 10^-4, 1.6 × 10^-3, and 2.4 × 10^-3, respectively. These algorithms reach effective results within a low number of evaluations, and most of the remaining algorithms also achieve acceptable results. This indicates that the evaluated swarm-intelligence algorithms are generally successful at training feed-forward neural networks for maximum power point tracking.
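The approach summarized in the abstract, encoding a feed-forward network's weights as a candidate solution and letting a swarm-intelligence optimizer minimize the mean squared error, can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the 2-5-1 network architecture, the plain global-best particle swarm optimization settings, and the synthetic stand-in data are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative only): training a small feed-forward network's
# weights with particle swarm optimization to minimize mean squared error.
# Network size (2-5-1), PSO settings, and synthetic data are assumed values,
# not the configuration used in the study.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MPPT training data: two inputs (e.g., irradiance,
# temperature) mapped to a duty-cycle-like target output.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (0.6 * X[:, 0] + 0.3 * np.sin(3 * X[:, 1])).reshape(-1, 1)

n_in, n_hid, n_out = 2, 5, 1                          # assumed 2-5-1 architecture
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out    # total weights and biases

def mse(weights):
    """Decode a flat weight vector into the 2-5-1 network and return its MSE."""
    i = 0
    W1 = weights[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = weights[i:i + n_hid]; i += n_hid
    W2 = weights[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = weights[i:i + n_out]
    hidden = np.tanh(X @ W1 + b1)                     # hidden layer, tanh activation
    pred = hidden @ W2 + b2                           # linear output layer
    return float(np.mean((pred - y) ** 2))

# Plain global-best PSO over the flattened weight vector.
n_particles, n_iters = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                             # inertia and acceleration terms
pos = rng.uniform(-1.0, 1.0, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_err)].copy()
gbest_err = pbest_err.min()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    errs = np.array([mse(p) for p in pos])
    improved = errs < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], errs[improved]
    if errs.min() < gbest_err:
        gbest, gbest_err = pos[np.argmin(errs)].copy(), errs.min()

print(f"training MSE after PSO: {gbest_err:.6f}")
```

In the study, each of the 13 swarm-intelligence algorithms plays the role of the optimizer in such a loop, and the resulting training and test MSE values feed the success ranking described above.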
Pages: 23