From distributed machine to distributed deep learning: a comprehensive survey

Cited by: 8
Authors
Dehghani, Mohammad [1 ]
Yazdanparast, Zahra [2 ]
Affiliations
[1] Univ Tehran, Tehran, Iran
[2] Tarbiat Modares Univ, Tehran, Iran
Keywords
Artificial intelligence; Machine learning; Distributed machine learning; Distributed deep learning; Distributed reinforcement learning; Data-parallelism; Model-parallelism; PARALLEL; SVM; OPTIMIZATION; STRATEGIES
DOI
10.1186/s40537-023-00829-x
Chinese Library Classification
TP301 [Theory, Methods]
Discipline classification code
081202
Abstract
Artificial intelligence has made remarkable progress in handling complex tasks, thanks to advances in hardware acceleration and machine learning algorithms. However, obtaining more accurate results and solving more complex problems requires training algorithms on more data. Processing such huge volumes of data can be time-consuming and computationally demanding. To address these issues, distributed machine learning has been proposed, in which the data and the algorithm are distributed across several machines. Considerable effort has gone into developing distributed machine learning algorithms, and various methods have been proposed. We divide these algorithms into classification and clustering (traditional machine learning), deep learning, and deep reinforcement learning groups. Distributed deep learning has attracted the most attention in recent years, and most studies focus on it, so we concentrate mainly on this category. Based on our investigation of these algorithms, we highlight limitations that should be addressed in future research.
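The core idea the abstract describes, distributing data across machines while training one shared model, can be illustrated with a minimal, framework-free sketch of data parallelism: each "worker" computes a gradient on its own data shard, and the gradients are averaged before a single model update. All names and the synthetic problem below are illustrative assumptions, not taken from the surveyed systems; real deployments use frameworks with actual inter-machine communication.

```python
# Minimal data-parallelism sketch: workers are simulated serially here;
# in practice each shard would live on a separate machine and a parameter
# server (or an all-reduce step) would average the gradients.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data: y = X @ w_true + noise
X = rng.normal(size=(400, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=400)

def local_gradient(w, X_shard, y_shard):
    """Mean-squared-error gradient computed on one worker's shard."""
    residual = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ residual / len(y_shard)

n_workers, lr = 4, 0.1
w = np.zeros(3)
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

for step in range(200):
    # Each worker computes a gradient on its own shard ...
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
    # ... then the averaged gradient drives one shared update.
    w -= lr * np.mean(grads, axis=0)

print(np.round(w, 2))  # converges close to w_true
```

Because gradient averaging over shards equals the full-batch gradient for this loss, the distributed run recovers nearly the same solution as single-machine training, which is the appeal of the data-parallel approach the survey groups these algorithms under.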
Pages: 21
References
107 references in total
[1]   A Hybrid Parallelization Approach for Distributed and Scalable Deep Learning [J].
Akintoye, Samson B. ;
Han, Liangxiu ;
Zhang, Xin ;
Chen, Haoming ;
Zhang, Daoqiang .
IEEE ACCESS, 2022, 10 :77950-77961
[2]   A MapReduce-based distributed SVM algorithm for automatic image annotation [J].
Alham, Nasullah Khalid ;
Li, Maozhen ;
Liu, Yang ;
Hammoud, Suhel .
COMPUTERS & MATHEMATICS WITH APPLICATIONS, 2011, 62 (07) :2801-2811
[3]  
Alloghani M., 2020, Supervised and Unsupervised Learning for Data Science, P3, DOI 10.1007/978-3-030-22475-2_1
[4]  
Alqahtani S., 2019, arXiv preprint, arXiv:1909.02061
[5]  
[Anonymous], 2011, Advances in Neural Information Processing Systems
[6]   Privacy preservation in Distributed Deep Learning: A survey on Distributed Deep Learning, privacy preservation techniques used and interesting research directions [J].
Antwi-Boasiako, Emmanuel ;
Zhou, Shijie ;
Liao, Yongjian ;
Liu, Qihe ;
Wang, Yuyu ;
Owusu-Agyemang, Kwabena .
JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2021, 61
[7]   Demystifying Parallel and Distributed Deep Learning: An In-depth Concurrency Analysis [J].
Ben-Nun, Tal ;
Hoefler, Torsten .
ACM COMPUTING SURVEYS, 2019, 52 (04)
[8]   A new scalable distributed k-means algorithm based on Cloud micro-services for High-performance computing [J].
Benchara, Fatema Zahra ;
Youssfi, Mohamed .
PARALLEL COMPUTING, 2021, 101
[9]  
Cheatham T, 1996, Tools and environments for parallel and distributed systems
[10]  
Chen C. C., 2019, arXiv preprint, arXiv:1809.02839