Adaptive Stochastic Gradient Descent (SGD) for erratic datasets

Cited by: 3
Authors
Dagal, Idriss [1 ]
Tanrioven, Kursat [1 ]
Nayir, Ahmet [1 ]
Akin, Burak [2 ]
Affiliations
[1] Istanbul Beykent Univ, Elect Engn, Hadim Koruyolu Caddesi 19, TR-34450 Istanbul, Turkiye
[2] Yildiz Tech Univ, Elect Engn, Davutpasa Caddesi, TR-34220 Istanbul, Turkiye
Source
FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE | 2025, Vol. 166
Keywords
Gradient descent; Stochastic Gradient Descent; Accuracy; Principal Component Analysis; QUASI-NEWTON METHOD; NEURAL NETWORKS; ALGORITHM; MLP;
DOI
10.1016/j.future.2024.107682
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Stochastic Gradient Descent (SGD) is a highly efficient optimization algorithm, particularly well suited to large datasets because it updates parameters incrementally. In this study, we apply SGD to a simple linear classifier based on logistic regression, a widely used method for binary classification. Unlike traditional batch Gradient Descent (GD), which processes the entire dataset at once, SGD offers better scalability and performance on streaming and large-scale data. Our experiments show that SGD outperforms GD across multiple performance metrics, achieving 45.83% accuracy compared to GD's 41.67%, and excelling in precision (60% vs. 45.45%), recall (100% vs. 60%), and F1-score (100% vs. 62%). Additionally, SGD reaches 99.99% Principal Component Analysis (PCA) accuracy, slightly surpassing GD's 99.92%. These results highlight SGD's superior efficiency and flexibility in large-scale data environments, driven by its ability to balance precision and recall effectively. To further improve SGD's robustness, the proposed method incorporates adaptive learning rates, momentum, and logistic regression, addressing drawbacks of traditional GD. These modifications improve the algorithm's stability, convergence behavior, and applicability to complex, large-scale optimization tasks where standard GD often struggles, making SGD a highly effective solution for challenging data-driven scenarios.
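To make the update rule concrete, the following is a minimal Python sketch of per-sample logistic-regression training with heavy-ball momentum and an inverse-scaling (decaying) learning rate, in the spirit of the adaptive SGD described in the abstract. The function name, hyper-parameters, and values (eta0, decay, momentum) are illustrative assumptions and do not reproduce the authors' exact algorithm or settings.

```python
# Illustrative sketch (not the paper's released code): binary logistic regression
# trained with SGD using heavy-ball momentum and a simple decaying learning rate.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_logistic_regression(X, y, epochs=50, eta0=0.1, decay=0.01, momentum=0.9, seed=0):
    """Fit weights w and bias b with per-sample (stochastic) updates.

    eta0, decay, and momentum are hypothetical hyper-parameters chosen for this
    sketch; the paper's exact schedule and values are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    vw = np.zeros(n_features)   # momentum buffer for weights
    vb = 0.0                    # momentum buffer for bias
    step = 0
    for _ in range(epochs):
        for i in rng.permutation(n_samples):      # shuffle samples each epoch
            step += 1
            eta = eta0 / (1.0 + decay * step)     # adaptive (decaying) learning rate
            p = sigmoid(X[i] @ w + b)             # predicted probability for one sample
            grad_w = (p - y[i]) * X[i]            # per-sample gradient of the log-loss
            grad_b = p - y[i]
            vw = momentum * vw - eta * grad_w     # heavy-ball momentum update
            vb = momentum * vb - eta * grad_b
            w += vw
            b += vb
    return w, b

# Minimal usage on synthetic data
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    w, b = sgd_logistic_regression(X, y)
    acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
    print(f"training accuracy: {acc:.2f}")
```

In this sketch the decay schedule and momentum coefficient control the stability/convergence trade-off the abstract refers to; a slower decay speeds early progress while a larger momentum term smooths the noisy per-sample gradients.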
Pages: 13