DLBench: a comprehensive experimental evaluation of deep learning frameworks

Cited by: 34
Authors
Elshawi, Radwa [1 ]
Wahab, Abdul [1 ]
Barnawi, Ahmed [3 ]
Sakr, Sherif [2 ]
Affiliations
[1] Univ Tartu, Tartu, Estonia
[2] Univ Tartu, Inst Comp Sci, Data Syst Grp, Tartu, Estonia
[3] King Abdulaziz Univ, Jeddah, Saudi Arabia
Source
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS | 2021, Vol. 24, No. 3
Keywords
Deep learning; Experimental evaluation; CNN; LSTM; Architectures
DOI
10.1007/s10586-021-03240-4
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification
0812
Abstract
Deep Learning (DL) has achieved remarkable progress over the last decade on various tasks such as image recognition, speech recognition, and natural language processing. In general, three crucial factors have fueled this progress: the increasing availability of large amounts of digitized data, the increasing availability of affordable and powerful parallel computing resources (e.g., GPUs), and the growing number of open-source deep learning frameworks that facilitate and ease the development of deep learning architectures. In practice, the increasing popularity of deep learning frameworks calls for benchmarking studies that can effectively evaluate and characterize the performance of these systems. In this paper, we conduct an extensive experimental evaluation and analysis of six popular deep learning frameworks, namely TensorFlow, MXNet, PyTorch, Theano, Chainer, and Keras, using three types of DL architectures: Convolutional Neural Networks (CNN), Faster Region-based Convolutional Neural Networks (Faster R-CNN), and Long Short-Term Memory (LSTM) networks. Our evaluation compares the frameworks along several dimensions, including accuracy, training time, convergence, and resource-consumption patterns. Our experiments have been conducted in both CPU and GPU environments using different datasets. We report and analyze the performance characteristics of the studied frameworks, and we summarize a set of insights and important lessons learned from conducting our experiments.
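Purely as an illustration of the kind of per-framework measurement the abstract describes (per-epoch training time, training accuracy, and peak GPU memory for a small CNN), the sketch below uses PyTorch and MNIST; the model, hyperparameters, and dataset choice are assumptions made for demonstration and are not the authors' benchmark harness. Analogous scripts would be written for each of the other frameworks under test.

```python
# Illustrative sketch only: a minimal single-framework benchmark loop in the spirit
# of the measurements described in the abstract (training time, accuracy, memory).
import time
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Small LeNet-style CNN; architecture chosen for illustration, not taken from the paper.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(64 * 7 * 7, 10),
).to(device)

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=128, shuffle=True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(3):
    start = time.perf_counter()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        correct += (outputs.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    elapsed = time.perf_counter() - start
    peak_mem_mib = (torch.cuda.max_memory_allocated() / 2**20
                    if device.type == "cuda" else float("nan"))
    print(f"epoch {epoch}: {elapsed:.1f}s, "
          f"train acc {correct / total:.3f}, peak GPU mem {peak_mem_mib:.0f} MiB")
```

Running the same model, optimizer settings, and data pipeline under each framework and recording these per-epoch figures is the basic pattern behind the accuracy, training-time, and resource-consumption comparisons summarized in the abstract.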
Pages: 2017-2038
Page count: 22