A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors

Cited by: 3
Authors
Zhang, Jilin [1,2,3,4,5]
Tu, Hangdi [1,2]
Ren, Yongjian [1,2]
Wan, Jian [1,2,4,5]
Zhou, Li [1,2]
Li, Mingwei [1,2]
Wang, Jue [6]
Yu, Lifeng [7,8]
Zhao, Chang [1,2]
Zhang, Lei [9]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp Sci & Technol, Hangzhou 310018, Zhejiang, Peoples R China
[2] Minist Educ, Key Lab Complex Syst Modeling & Simulat, Hangzhou 310018, Zhejiang, Peoples R China
[3] Zhejiang Univ, Coll Elect Engn, Hangzhou 310058, Zhejiang, Peoples R China
[4] Zhejiang Univ Sci & Technol, Sch Informat & Elect Engn, Hangzhou 310023, Zhejiang, Peoples R China
[5] Zhejiang Prov Engn Ctr Media Data Cloud Proc & An, Hangzhou 310018, Zhejiang, Peoples R China
[6] Chinese Acad Sci, Supercomp Ctr Comp Network Informat Ctr, Beijing 100190, Peoples R China
[7] Hithink RoyalFlush Informat Network Co Ltd, Hangzhou 310023, Zhejiang, Peoples R China
[8] Financial Informat Engn Technol Res Ctr Zhejiang, Hangzhou 310023, Zhejiang, Peoples R China
[9] Beijing Univ Civil Engn & Architecture, Dept Comp Sci, Beijing 100044, Peoples R China
Funding
National High Technology Research and Development Program of China (863 Program); National Natural Science Foundation of China;
Keywords
distributed machine learning; sensors; dynamic synchronous parallel strategy (DSP); parameter server (PS); FRAMEWORK;
DOI
10.3390/s17102172
Chinese Library Classification (CLC) number
O65 [Analytical Chemistry];
Subject classification codes
070302; 081704;
Abstract
To exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach; however, differences in the computing capability of sensors, together with network delays, greatly affect the accuracy and convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances training overhead against communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, preserves the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
Pages: 17
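The record itself contains no code. As a rough illustration of the idea summarized in the abstract, the Python sketch below simulates worker nodes and a parameter server whose staleness bound is tightened or loosened based on monitored worker progress, moving between fully synchronous and stale-synchronous behaviour. Everything here is hypothetical: the class names, the bound-adjustment heuristic, and the toy gradient are not taken from the paper, and the sketch does not implement DFFT or the authors' performance monitoring model.

```python
# Toy parameter-server simulation: workers push gradients and pull weights,
# and the server adapts a staleness bound from the observed clock gap between
# the fastest and slowest worker. Hypothetical sketch only, not the paper's DSP.
import random
import threading
import time


class ParameterServer:
    """Holds the shared model, per-worker iteration clocks, and a staleness bound."""

    def __init__(self, num_workers, dim):
        self.lock = threading.Lock()
        self.weights = [0.0] * dim
        self.clocks = {w: 0 for w in range(num_workers)}
        self.staleness_bound = 3  # hypothetical initial bound

    def _adjust_bound(self):
        # Hypothetical monitoring heuristic: widen the bound when workers drift
        # apart, tighten it when they stay close (closer to synchronous training).
        gap = max(self.clocks.values()) - min(self.clocks.values())
        self.staleness_bound = max(1, min(8, gap + 1))

    def push(self, worker_id, gradient, lr=0.01):
        """Apply one gradient update and advance the worker's clock."""
        with self.lock:
            self.weights = [w - lr * g for w, g in zip(self.weights, gradient)]
            self.clocks[worker_id] += 1
            self._adjust_bound()

    def pull(self, worker_id):
        """Wait until this worker is within the current staleness bound, then return weights."""
        while True:
            with self.lock:
                if self.clocks[worker_id] - min(self.clocks.values()) <= self.staleness_bound:
                    return list(self.weights)
            time.sleep(0.001)  # back off instead of spinning with the lock held


def worker(ps, worker_id, steps):
    for _ in range(steps):
        weights = ps.pull(worker_id)
        # Synthetic gradient; a real worker would compute it from local sensor data.
        gradient = [w + random.uniform(-1.0, 1.0) for w in weights]
        ps.push(worker_id, gradient)
        time.sleep(random.uniform(0.0, 0.005))  # emulate heterogeneous sensor speeds


if __name__ == "__main__":
    ps = ParameterServer(num_workers=4, dim=8)
    threads = [threading.Thread(target=worker, args=(ps, i, 50)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("final weights:", [round(w, 3) for w in ps.weights])
    print("final staleness bound:", ps.staleness_bound)
```

In the paper's own terms, DFFT would govern how far the synchronization point between workers and the PS may drift; the `_adjust_bound` heuristic above merely stands in for that decision.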