Parameter Communication Consistency Model for Large-Scale Security Monitoring Based on Mobile Computing

Cited by: 2
Authors
Yang, Rui [1 ,2 ]
Zhang, Jilin [1 ,2 ,3 ]
Wan, Jian [1 ,2 ,4 ]
Zhou, Li [1 ,2 ]
Shen, Jing [1 ,2 ]
Zhang, Yunchen [1 ,2 ]
Wei, Zhenguo [5 ]
Zhang, Juncong [5 ]
Wang, Jue [6 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp Sci & Technol, Hangzhou 310018, Peoples R China
[2] Hangzhou Dianzi Univ, Minist Educ, Key Lab Complex Syst Modeling & Simulat, Hangzhou 310018, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100190, Peoples R China
[4] Zhejiang Univ Sci & Technol, Sch Informat & Elect Engn, Hangzhou 310023, Peoples R China
[5] Chinese Acad Sci, Comp Network Informat Ctr, Beijing 100190, Peoples R China
[6] Zhejiang Dawning Informat Technol Co Ltd, Hangzhou 310051, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Mobile computing; security monitoring; distributed machine learning; limited synchronous parallel model; parameter server
DOI
10.1109/ACCESS.2019.2956632
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
With the application of mobile computing in the security field, security monitoring big data has begun to emerge, providing strong support for smart city construction and for the expansion of city scale and investment. Mobile computing takes full advantage of the computing power and communication capabilities of various sensing devices and organizes them into a computing cluster. When such clusters are used to train distributed machine learning models, load imbalance and network transmission delay lead to low training efficiency. This paper therefore proposes a parameter communication consistency model for distributed machine learning based on the parameter server architecture, called the limited synchronous parallel model. Exploiting the fault tolerance of machine learning algorithms, the model dynamically limits the size of the parameter server's synchronization barrier, reducing synchronization communication overhead while preserving training accuracy; in this way, it permits bounded asynchronous computation among the worker nodes and exploits the full performance of the cluster. Experiments on dynamic cluster load balancing show that, during distributed model training, the model fully utilizes cluster performance, preserves model accuracy, and improves training speed.
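To make the mechanism concrete, the following minimal Python sketch illustrates the bounded-staleness behavior described in the abstract: each worker advances a logical clock per iteration and is blocked only when it runs more than a fixed number of iterations ahead of the slowest worker. This is an illustration only, not the paper's implementation; the class name LimitedSyncParameterServer, its methods (push, pull, clock), and the staleness parameter are assumptions introduced for this example.

import threading
from collections import defaultdict


class LimitedSyncParameterServer:
    """Hypothetical parameter server enforcing a bounded staleness window."""

    def __init__(self, num_workers, staleness, dim):
        self.num_workers = num_workers
        self.staleness = staleness          # maximum clock gap allowed between workers
        self.clocks = defaultdict(int)      # per-worker logical clock (iteration count)
        self.params = [0.0] * dim           # shared model parameters
        self.cond = threading.Condition()

    def push(self, worker_id, grads, lr=0.1):
        """Apply a worker's gradient update to the shared parameters."""
        with self.cond:
            self.params = [p - lr * g for p, g in zip(self.params, grads)]

    def pull(self, worker_id):
        """Return a (possibly slightly stale) copy of the parameters."""
        with self.cond:
            return list(self.params)

    def clock(self, worker_id):
        """Advance the worker's clock; block while it is too far ahead of the slowest worker."""
        with self.cond:
            self.clocks[worker_id] += 1
            self.cond.notify_all()
            # Limited synchronization barrier: wait until the gap to the slowest
            # worker is within the allowed staleness bound.
            while self.clocks[worker_id] - self._slowest() > self.staleness:
                self.cond.wait()

    def _slowest(self):
        # Workers that have not reported yet count as clock 0.
        return min(self.clocks[w] for w in range(self.num_workers))

In a typical training loop each worker would pull the parameters, compute gradients on its local data, push them, and then call clock. Setting staleness to 0 recovers fully synchronous (BSP-style) training, while larger values trade parameter freshness for less time spent waiting at the synchronization barrier.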
Pages: 171884-171897
Number of pages: 14