Adaptive Stochastic Gradient Descent (SGD) for erratic datasets

Cited by: 2
Authors
Dagal, Idriss [1 ]
Tanrioven, Kursat [1 ]
Nayir, Ahmet [1 ]
Akin, Burak [2 ]
Affiliations
[1] Istanbul Beykent Univ, Elect Engn, Hadim Koruyolu Caddesi 19, TR-34450 Istanbul, Turkiye
[2] Yildiz Tech Univ, Elect Engn, Davutpasa Caddesi, TR-34220 Istanbul, Turkiye
Keywords
Gradient descent; Stochastic Gradient Descent; Accuracy; Principal Component Analysis; Quasi-Newton method; Neural networks; Algorithm; MLP
DOI
10.1016/j.future.2024.107682
Chinese Library Classification
TP301 [Theory, Methods]
Discipline code
081202
Abstract
Stochastic Gradient Descent (SGD) is a highly efficient optimization algorithm, particularly well suited to large datasets because it updates parameters incrementally. In this study, we apply SGD to a simple linear classifier trained with logistic regression, a widely used method for binary classification tasks. Unlike traditional batch Gradient Descent (GD), which computes each update over the entire dataset, SGD offers better scalability and performance on streaming and large-scale data. Our experiments show that SGD outperforms GD across multiple performance metrics, achieving 45.83 % accuracy versus GD's 41.67 %, and leading in precision (60 % vs. 45.45 %), recall (100 % vs. 60 %), and F1-score (100 % vs. 62 %). Additionally, SGD reaches 99.99 % accuracy when combined with Principal Component Analysis (PCA), slightly surpassing GD's 99.92 %. These results highlight SGD's superior efficiency and flexibility in large-scale data environments, driven by its ability to balance precision and recall effectively. To further enhance SGD's robustness, the proposed method incorporates adaptive learning rates, momentum, and logistic regression, addressing drawbacks of traditional GD. These modifications improve the algorithm's stability, convergence behavior, and applicability to complex, large-scale optimization tasks where standard GD often struggles, making SGD a highly effective solution for challenging data-driven scenarios.
Pages: 13
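
The abstract describes combining per-sample SGD updates with an adaptive (decaying) learning rate and momentum for a logistic-regression classifier. Below is a minimal sketch of that combination; the lr0 / (1 + decay * t) schedule, the exponential-moving-average momentum buffer, and all function names and hyperparameters are illustrative assumptions, not the paper's actual implementation.

import numpy as np

def sigmoid(z):
    # Logistic function used by the linear classifier.
    return 1.0 / (1.0 + np.exp(-z))

def adaptive_sgd_logreg(X, y, epochs=50, lr0=0.1, decay=0.01, beta=0.9, seed=0):
    """Logistic regression trained with per-sample SGD.

    lr0   -- initial learning rate (assumed value)
    decay -- constant for the assumed schedule lr_t = lr0 / (1 + decay * t)
    beta  -- momentum coefficient for an EMA of the gradient
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    vw, vb = np.zeros(d), 0.0          # momentum buffers
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):              # reshuffle every epoch
            lr = lr0 / (1.0 + decay * t)          # adaptive learning rate
            err = sigmoid(X[i] @ w + b) - y[i]    # gradient factor of log-loss
            vw = beta * vw + (1.0 - beta) * err * X[i]
            vb = beta * vb + (1.0 - beta) * err
            w -= lr * vw
            b -= lr * vb
            t += 1
    return w, b

# Usage on synthetic, linearly separable data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w, b = adaptive_sgd_logreg(X, y)
print("training accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == y))

The per-sample update in the inner loop is what distinguishes SGD from batch GD, which would average the gradient over all n samples before every parameter change; the decaying step size and momentum buffer are the stabilizing modifications the abstract refers to.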