Experimental Comparison of Stochastic Optimizers in Deep Learning

Cited by: 18
Authors
Okewu, Emmanuel [1 ]
Adewole, Philip [2 ]
Sennaike, Oladipupo [2 ]
Affiliations
[1] Univ Lagos, Ctr Informat Technol & Syst, Lagos, Nigeria
[2] Univ Lagos, Dept Comp Sci, Lagos, Nigeria
Keywords
Deep learning; Deep neural networks; Error function; Neural network parameters; Stochastic optimization; Neural networks
DOI
10.1007/978-3-030-24308-1_55
CLC number
TP301 [Theory and Methods]
Subject classification number
081202
Abstract
The stochastic optimization problem in deep learning involves finding optimal values of the loss function and the neural network parameters using a meta-heuristic search algorithm. Because these values cannot reasonably be obtained with a deterministic optimization technique, an iterative method is needed that randomly samples data segments, initializes the optimization (network) parameters arbitrarily, and repeatedly evaluates the error function until a tolerable error is attained. The canonical stochastic optimization algorithm for training deep neural networks, a non-convex optimization problem, is gradient descent, with extensions such as Stochastic Gradient Descent, Adagrad, Adadelta, RMSProp, and Adam. Each of these stochastic optimizers improves on its predecessors in terms of accuracy, convergence rate, and training time, yet there remains room for further improvement. This paper presents the outcomes of a series of experiments conducted to provide empirical evidence of the progress made so far. We used Python deep learning libraries (TensorFlow and the Keras API) for our experiments. Each algorithm was executed, the results were collated, and a case is made for further research in deep learning to improve the training time and convergence rate of deep neural networks, as well as the accuracy of outcomes. This responds to the growing demand for deep learning in mission-critical and highly sophisticated decision-making processes across industry verticals.
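
This record does not reproduce the paper's experiment scripts. As a rough illustration of the kind of comparison the abstract describes, the sketch below, written against the public tf.keras API, trains one copy of the same small network per optimizer and reports wall-clock training time, final training loss (a crude proxy for convergence rate), and test accuracy. The MNIST dataset, the network architecture, and all hyperparameters here are illustrative assumptions, not the authors' configuration.

import time
import tensorflow as tf

# Assumed setup: MNIST and a small fully connected network stand in
# for the paper's actual dataset and architecture.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_model():
    # A fresh model per optimizer, so every run starts from new
    # random weights and no optimizer benefits from an earlier run.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])

# Every optimizer below refines the basic stochastic update
#   theta <- theta - eta * grad_theta(loss on a random mini-batch),
# differing mainly in how the step size is adapted per parameter.
optimizers = {
    'SGD': tf.keras.optimizers.SGD(),
    'Adagrad': tf.keras.optimizers.Adagrad(),
    'Adadelta': tf.keras.optimizers.Adadelta(),
    'RMSprop': tf.keras.optimizers.RMSprop(),
    'Adam': tf.keras.optimizers.Adam(),
}

for name, opt in optimizers.items():
    model = build_model()
    model.compile(optimizer=opt,
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    start = time.time()
    history = model.fit(x_train, y_train, epochs=5,
                        batch_size=128, verbose=0)
    elapsed = time.time() - start
    _, test_acc = model.evaluate(x_test, y_test, verbose=0)
    print(f'{name:8s} time={elapsed:6.1f}s '
          f'final_loss={history.history["loss"][-1]:.4f} '
          f'test_acc={test_acc:.4f}')

Rebuilding the model inside the loop is deliberate: reusing weights across runs would let the first optimizer bias every later measurement.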
Pages: 704-715
Page count: 12