Optimal stochastic gradient descent algorithm for filtering

Cited by: 1
Authors
Turali, M. Yigit [1 ]
Koc, Ali T. [1 ]
Kozat, Suleyman S. [1 ]
Affiliations
[1] Bilkent Univ, Dept Elect & Elect Engn, TR-06800 Ankara, Turkiye
Keywords
Learning rate; Linear filtering; Optimization; Stochastic gradient descent; Prediction
DOI
10.1016/j.dsp.2024.104731
CLC classification
TM [Electrical engineering]; TN [Electronic technology, communication technology];
Discipline codes
0808; 0809;
Abstract
Stochastic Gradient Descent (SGD) is a fundamental optimization technique in machine learning due to its efficiency in handling large-scale data. Unlike typical SGD applications, which rely on stochastic approximations, this work explores the convergence properties of SGD from a deterministic perspective. We address the crucial question of learning rate selection, a common obstacle in optimizing SGD performance, particularly in complex environments. In contrast to traditional methods, which often provide convergence results only in statistical expectation (and under assumptions that are frequently unjustified in practice), our approach introduces universally applicable learning rates. These rates ensure that a model trained with SGD asymptotically matches the performance of the best linear filter, irrespective of the data sequence length and independent of any statistical assumptions about the data. By establishing learning rates that scale as mu = O(1/t), we offer a solution that sidesteps the need for prior knowledge of the data, a prevalent limitation in real-world applications. To this end, we provide a robust framework for applying SGD across varied settings, with convergence guarantees that hold in both deterministic and stochastic scenarios without any underlying assumptions.
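The mu = O(1/t) schedule described in the abstract can be sketched on a simple linear filtering task. This is an illustrative toy example, not the paper's algorithm: the filter length, the schedule constant c, and the synthetic data are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: recover the coefficients of a length-4 linear
# filter from streaming input/output pairs (synthetic data, for
# illustration only).
n, T = 4, 20000
w_true = np.array([0.5, -0.3, 0.2, 0.1])
X = rng.standard_normal((T, n))
y = X @ w_true + 0.01 * rng.standard_normal(T)

# SGD on the instantaneous squared filtering error with a learning
# rate decaying as mu_t = c / t, i.e. the O(1/t) schedule above.
# c is a schedule constant chosen here, not a value from the paper.
w = np.zeros(n)
c = 1.0
for t in range(1, T + 1):
    x_t, y_t = X[t - 1], y[t - 1]
    e_t = y_t - w @ x_t          # prediction error of the current filter
    w += (c / t) * e_t * x_t     # gradient step on 0.5 * e_t**2

print(np.round(w, 2))
```

With this decaying step size the iterate approaches the best linear filter as the sequence grows; a constant step size would instead leave a persistent error floor.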
Pages: 6
Related papers
50 items total
  • [21] FTSGD: An Adaptive Stochastic Gradient Descent Algorithm for Spark MLlib
    Zhang, Hong
    Liu, Zixia
    Huang, Hai
    Wang, Liqiang
    2018 16TH IEEE INT CONF ON DEPENDABLE, AUTONOM AND SECURE COMP, 16TH IEEE INT CONF ON PERVAS INTELLIGENCE AND COMP, 4TH IEEE INT CONF ON BIG DATA INTELLIGENCE AND COMP, 3RD IEEE CYBER SCI AND TECHNOL CONGRESS (DASC/PICOM/DATACOM/CYBERSCITECH), 2018, : 828 - 835
  • [22] Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
    Needell, Deanna
    Srebro, Nathan
    Ward, Rachel
    Mathematical Programming, 2016, 155 : 549 - 573
  • [23] A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces
    Le Lan, Charline
    Greaves, Joshua
    Farebrother, Jesse
    Rowland, Mark
    Pedregosa, Fabian
    Agarwal, Rishabh
    Bellemare, Marc
    arXiv, 2022,
  • [24] Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz algorithm
    Needell, Deanna
    Srebro, Nathan
    Ward, Rachel
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), 2014, 27
  • [25] Performance of stochastic parallel gradient descent algorithm in coherent combination
    Li X.
    He Y.
    1600, Chinese Optical Society (36):
  • [26] A Stochastic Gradient Descent Algorithm Based on Adaptive Differential Privacy
    Deng, Yupeng
    Li, Xiong
    He, Jiabei
    Liu, Yuzhen
    Liang, Wei
    COLLABORATIVE COMPUTING: NETWORKING, APPLICATIONS AND WORKSHARING, COLLABORATECOM 2022, PT II, 2022, 461 : 133 - 152
  • [27] Stochastic parallel gradient descent algorithm for adaptive optics system
    Ma H.
    Zhang P.
    Zhang J.
    Fan C.
    Wang Y.
    Qiangjiguang Yu Lizishu/High Power Laser and Particle Beams, 2010, 22 (06): : 1206 - 1210
  • [28] A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces
    Le Lan, Charline
    Greaves, Joshua
    Farebrother, Jesse
    Rowland, Mark
    Pedregosa, Fabian
    Agarwal, Rishabh
    Bellemare, Marc
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 206, 2023, 206
  • [29] Preconditioned Gradient Descent Algorithm for Inverse Filtering on Spatially Distributed Networks
    Cheng, Cheng
    Emirov, Nazar
    Sun, Qiyu
    IEEE SIGNAL PROCESSING LETTERS, 2020, 27 : 1834 - 1838
  • [30] Efficient Discrete Optimal Transport Algorithm by Accelerated Gradient Descent
    An, Dongsheng
    Lei, Na
    Xu, Xiaoyin
    Gu, Xianfeng
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELVETH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 10119 - 10128