The Improved Stochastic Fractional Order Gradient Descent Algorithm

Cited by: 2
Authors
Yang, Yang [1 ]
Mo, Lipo [1 ,2 ]
Hu, Yusen [1 ]
Long, Fei [3 ]
Affiliations
[1] Beijing Technol & Business Univ, Sch Math & Stat, Beijing 100048, Peoples R China
[2] Beijing Technol & Business Univ, Sch Future Technol, Beijing 100048, Peoples R China
[3] Guizhou Inst Technol, Sch Artificial Intelligence & Elect Engn, Special Key Lab Artificial Intelligence & Intellig, Guiyang 550003, Peoples R China
Keywords
machine learning; fractional calculus; stochastic gradient descent; convex optimization; online optimization; neural networks
DOI
10.3390/fractalfract7080631
Chinese Library Classification
O1 [Mathematics]
Discipline Classification Code
0701; 070101
Abstract
This paper proposes improved stochastic gradient descent (SGD) algorithms with a fractional-order gradient for the online optimization problem. For three scenarios (standard learning rate, adaptive-gradient learning rate, and momentum learning rate), three new SGD algorithms are designed by combining SGD with a fractional-order gradient, and the corresponding regret functions are shown to converge at a sub-linear rate. The impact of the fractional order on convergence and monotonicity is then discussed, and it is proved that better performance can be obtained by adjusting the order of the fractional gradient. Finally, several practical examples verify the superiority and validity of the proposed algorithms.
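For intuition, the following is a minimal Python sketch of the standard-learning-rate scenario, assuming the commonly used first-term Caputo approximation of the fractional gradient with the previous iterate as the lower terminal; the paper's exact update rules, and its adaptive-gradient and momentum variants, may differ. The function fractional_sgd_step and all parameter values are illustrative, not taken from the paper.

import numpy as np
from math import gamma

def fractional_sgd_step(x, x_prev, grad, lr=0.01, alpha=0.9, eps=1e-8):
    # Caputo-type first-term approximation of the order-alpha gradient,
    # taking the previous iterate x_prev as the lower terminal:
    #   D^alpha f(x) ~= grad(x) * |x - x_prev|^(1 - alpha) / Gamma(2 - alpha)
    # for alpha in (0, 1); eps keeps the power term well-defined when
    # x coincides with x_prev. (Sketch only, not the paper's exact rule.)
    frac_grad = grad * (np.abs(x - x_prev) + eps) ** (1.0 - alpha) / gamma(2.0 - alpha)
    return x - lr * frac_grad

# Toy usage: minimize f(x) = ||x||^2 from noisy (stochastic) gradients.
rng = np.random.default_rng(0)
x_prev = np.zeros(2)
x = np.array([1.0, -2.0])
for _ in range(200):
    grad = 2.0 * x + 0.01 * rng.standard_normal(2)  # noisy gradient of ||x||^2
    x, x_prev = fractional_sgd_step(x, x_prev, grad), x
print(x)  # approaches the minimizer at the origin

Note that the |x - x_prev|^(1 - alpha) factor scales each coordinate's step by the size of its most recent move, which is how the fractional order modulates convergence speed in this family of methods.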
Pages: 16
Related Papers
50 records in total
  • [41] Optimization of Fractional-order Stochastic Resonance Parameters Based On Improved Genetic Algorithm
    Wang, Yangbaihui
    Zheng, Yongjun
    Huang, Ming
    Hu, Xiaofeng
    PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020), 2020: 3250-3255
  • [42] Stochastic Gradient Algorithm Based on an Improved Higher Order Exponentiated Error Cost Function
    Bin Mansoor, Umair
    Asad, Syed Muhammad
    Zerguine, Azzedine
    CONFERENCE RECORD OF THE 2014 FORTY-EIGHTH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2014: 900-903
  • [43] Adaptive gradient descent optimization algorithm with improved differential term
    Ge, Quan-Bo
    Zhang, Jian-Chao
    Yang, Qin-Min
    Li, Hong
    Kongzhi Lilun Yu Yingyong/Control Theory and Applications, 2022, 39(04): 623-632
  • [44] Second-Order Guarantees of Stochastic Gradient Descent in Nonconvex Optimization
    Vlaski, Stefan
    Sayed, Ali H.
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2022, 67(12): 6489-6504
  • [45] Zeroth-Order Stochastic Projected Gradient Descent for Nonconvex Optimization
    Liu, Sijia
    Li, Xingguo
    Chen, Pin-Yu
    Haupt, Jarvis
    Amini, Lisa
    2018 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP 2018), 2018: 1179-1183
  • [46] Adaptive Gradient Estimation Stochastic Parallel Gradient Descent Algorithm for Laser Beam Cleanup
    Ma, Shiqing
    Yang, Ping
    Lai, Boheng
    Su, Chunxuan
    Zhao, Wang
    Yang, Kangjian
    Jin, Ruiyan
    Cheng, Tao
    Xu, Bing
    PHOTONICS, 2021, 8(05)
  • [47] A fractional gradient descent algorithm robust to the initial weights of multilayer perceptron
    Xie, Xuetao
    Pu, Yi-Fei
    Wang, Jian
    NEURAL NETWORKS, 2023, 158: 154-170
  • [48] Unforgeability in Stochastic Gradient Descent
    Baluta, Teodora
    Nikolic, Ivica
    Jain, Racchit
    Aggarwal, Divesh
    Saxena, Prateek
    PROCEEDINGS OF THE 2023 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2023, 2023: 1138-1152
  • [49] Preconditioned Stochastic Gradient Descent
    Li, Xi-Lin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29(05): 1454-1466
  • [50] Stochastic Reweighted Gradient Descent
    El Hanchi, Ayoub
    Stephens, David A.
    Maddison, Chris J.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022