Runtime Data Layout Scheduling for Machine Learning Dataset

Cited by: 5
Authors
You, Yang [1]
Demmel, James [1]
Affiliation
[1] Univ Calif Berkeley, Div Comp Sci, Berkeley, CA 94720 USA
Source
2017 46TH INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING (ICPP) | 2017
Keywords
parallel auto-tuning; machine learning
DOI
10.1109/ICPP.2017.54
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Machine Learning (ML) approaches are widely used classification/regression methods for data mining applications. However, the time-consuming training process greatly limits their efficiency. We use the examples of SVM (a traditional ML algorithm) and DNN (a state-of-the-art ML algorithm) to illustrate the idea in this paper. For SVM, a major performance bottleneck of current tools is that they use a single, fixed data storage format, even though the data format can have a significant influence on storage and computation complexity, memory bandwidth, and the efficiency of parallel processing. To address this problem, we study the factors influencing the algorithm's performance and conduct auto-tuning to speed up SVM training. DNN training is even slower than SVM training. For example, using an 8-core CPU to train the AlexNet model on the CIFAR-10 dataset takes 8.2 hours. The CIFAR-10 dataset is only 170 MB, which makes distributed processing inefficient. Moreover, due to algorithmic limitations, only a small batch of data can be processed at each iteration. We focus on finding the right algorithmic parameters and using auto-tuning techniques to make the algorithm run faster. For SVM training, our implementation achieves a 1.7-16.3x speedup (6.8x on average) against the non-adaptive case (using the worst data format) for various datasets. For DNN training on the CIFAR-10 dataset, we reduce the time from 8.2 hours to roughly 1 minute. We use the benchmark of dollars per speedup to help users select the right deep learning hardware.
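The abstract's central claim for SVM is that the data storage format should be chosen at runtime per dataset rather than fixed in advance. A minimal sketch of that idea follows; it picks between dense and CSR-style sparse storage based on measured density. All names and the density threshold here are illustrative assumptions, not the authors' actual API or tuning policy.

```python
# Sketch of runtime data-format selection: pick dense vs. CSR-style
# sparse storage based on the dataset's measured density, since the
# format drives storage cost, memory bandwidth, and kernel speed.

def to_csr(rows):
    """Convert a dense row-major matrix to (values, col_indices, row_ptr)."""
    values, cols, row_ptr = [], [], [0]
    for row in rows:
        for j, x in enumerate(row):
            if x != 0.0:
                values.append(x)
                cols.append(j)
        row_ptr.append(len(values))
    return values, cols, row_ptr

def density(rows):
    """Fraction of nonzero entries in the matrix."""
    nnz = sum(1 for row in rows for x in row if x != 0.0)
    return nnz / (len(rows) * len(rows[0]))

def choose_format(rows, threshold=0.5):
    """Auto-tune step: sparse storage pays off below a density threshold.
    The threshold is a placeholder; a real tuner would measure kernels."""
    return "csr" if density(rows) < threshold else "dense"

def dot_dense(row, w):
    """Dense dot product: touches every entry, zeros included."""
    return sum(x * wi for x, wi in zip(row, w))

def dot_csr(values, cols, row_ptr, i, w):
    """Dot product of CSR row i with w: touches only nonzeros."""
    return sum(values[k] * w[cols[k]] for k in range(row_ptr[i], row_ptr[i + 1]))

if __name__ == "__main__":
    X = [[0.0, 2.0, 0.0, 0.0],
         [1.0, 0.0, 0.0, 3.0]]
    w = [1.0, 1.0, 1.0, 1.0]
    fmt = choose_format(X)  # density is 3/8, so sparse storage is chosen
    vals, cols, ptr = to_csr(X)
    # Both formats must compute identical dot products (the SVM kernel core).
    assert dot_csr(vals, cols, ptr, 0, w) == dot_dense(X[0], w)
    print(fmt)
```

The dot products are the inner loop of SVM kernel evaluation; the sketch checks that both formats agree, so the tuner can switch formats without changing results.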
Pages: 452-461
Page count: 10