From big data to smart data: a sample gradient descent approach for machine learning

Cited by: 2
Authors
Ganie, Aadil Gani [1 ]
Dadvandipour, Samad [1 ]
Affiliations
[1] Univ Miskolc, H-3515 Miskolc, Hungary
Keywords
Big data; Gradient descent; Machine learning; PCA; Loss function
DOI
10.1186/s40537-023-00839-9
CLC classification number
TP301 [Theory and Methods]
Subject classification code
081202
Abstract
This research paper presents an innovative approach to gradient descent known as "Sample Gradient Descent". The method is a modification of the conventional batch gradient descent algorithm, which is often associated with space and time complexity issues. The proposed approach selects a representative sample of the data, which is then subjected to batch gradient descent. Selecting this sample is a crucial task, as it must accurately represent the entire dataset. To achieve this, the study applies Principal Component Analysis (PCA) to the training data, retaining only those rows and columns that explain 90% of the overall variance. This approach yields a convex loss function whose global minimum can be readily attained. Our results indicate that the proposed method offers faster convergence and reduced computation time compared to the conventional batch gradient descent algorithm. In our experiments, both approaches were run for 30 epochs, with each epoch taking approximately 3.41 s; the "Sample Gradient Descent" approach converged in just 8 epochs, whereas conventional batch gradient descent required 20 epochs. This substantial difference in convergence rate, along with the reduced computation time, highlights the efficiency of the proposed method and underscores its potential utility across domains ranging from machine learning to optimization problems, making the algorithm appealing to practitioners and researchers seeking greater efficiency in gradient descent optimization.
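The Python sketch below is a minimal illustration of the idea described in the abstract, not the authors' implementation: it assumes a linear regression model with a mean squared error loss, uses scikit-learn's PCA with n_components=0.90 to keep only the components that explain 90% of the variance, and then runs plain batch gradient descent on the reduced data. The function name sample_gradient_descent and its parameters (lr, epochs, variance) are hypothetical, and the paper's exact row-selection criterion is not reproduced here.

# Minimal sketch of the "PCA sample + batch gradient descent" idea from the abstract.
# Assumptions: linear regression with MSE loss; PCA keeps 90% of the variance.
import numpy as np
from sklearn.decomposition import PCA

def sample_gradient_descent(X, y, lr=0.01, epochs=30, variance=0.90):
    # Project the training data onto the principal components that together
    # explain `variance` of the total variance (column reduction via PCA).
    X_reduced = PCA(n_components=variance).fit_transform(X)

    # Plain batch gradient descent on the reduced data (MSE loss).
    n, d = X_reduced.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        y_hat = X_reduced @ w + b
        error = y_hat - y
        w -= lr * (X_reduced.T @ error) / n   # gradient of the MSE loss w.r.t. w (up to a constant factor)
        b -= lr * error.mean()                # gradient of the MSE loss w.r.t. b (up to a constant factor)
    return w, b

# Example usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=500)
w, b = sample_gradient_descent(X, y)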
Pages: 13