共 28 条
[1]
ZHU Hongrui, YUAN Guojun, YAO Chengji, Et al., Survey on network of distributed deep learning training, Journal of Computer Research and Development, 58, 1, pp. 98-115, (2021)
[2]
WANG Shuai, LI Dan, Research progress on network performance optimization of distributed machine learning system, Chinese Journal of Computers, 45, 7, pp. 1384-1411, (2022)
[3]
JU Tao, ZHAO Yuyang, LIU Shuai, Et al., A parallel optimization method of deep learning model for image recognition, Journal of Xi'an Jiaotong University, 57, 1, pp. 141-151, (2023)
[4]
LI Mu, ANDERSEN D G, PARK J W, Et al., Scaling distributed machine learning with the parameter server, Proceedings of the 11th USENIX Conference on Operating Systems Design and Implementation, pp. 583-598, (2014)
[5]
ZHU Huming, LI Pei, JIAO Licheng, Et al., Review of parallel deep neural network, Chinese Journal of Computers, 41, 8, pp. 1861-1881, (2018)
[6]
BOTTOU L, CURTIS F E, NOCEDAL J, Et al., Optimization methods for large-scale machine learning, SIAM Review, 60, 2, pp. 223-311, (2018)
[7]
GAO Heran, WU Heng, XU Yuanjia, Et al., Survey on memory swapping mechanism for deep learning training, Journal of Software, 34, 12, pp. 5862-5886, (2023)
[8]
JU Tao, LIU Shuai, WANG Zhiqiang, Et al., Task segmentation and parallel optimization of DNN model [J/OL], Journal of Beijing University of Aeronautics and Astronautics, pp. 1-18
[9]
YAN Guangfeng, LI Tan, WU Kui, Et al., Killing two birds with one stone: quantization achieves privacy in distributed learning, Digital Signal Processing, 146, (2024)
[10]
CHEN Shida, LIU Qiang, HAN Liang, Gradient sparsification compression approach to reducing communication in distributed training, Journal of Zhejiang University(Engineering Science), 55, 2, pp. 386-394, (2021)