Scaling-Up Distributed Processing of Data Streams for Machine Learning

Cited by: 14
Authors
Nokleby, Matthew [1 ]
Raja, Haroon [2 ]
Bajwa, Waheed U. [3 ,4 ]
Affiliations
[1] Target AI, Minneapolis, MN 55402 USA
[2] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
[3] Rutgers State Univ, Dept Elect & Comp Engn, New Brunswick, NJ 08854 USA
[4] Rutgers State Univ, Dept Stat, New Brunswick, NJ 08854 USA
Funding
US National Science Foundation;
Keywords
Machine learning; Training data; Distributed databases; Computational modeling; Data models; Optimization; Stochastic processes; Convex optimization; distributed training; empirical risk minimization (ERM); federated learning; machine learning; minibatching; principal component analysis (PCA); stochastic gradient descent (SGD); stochastic optimization (SO); streaming data; STOCHASTIC OPTIMIZATION; SUBGRADIENT METHODS; VARIANCE-REDUCTION; ONLINE PREDICTION; CONVERGENCE; CONSENSUS; STABILITY; ALGORITHM; APPROXIMATION; CONVEX;
DOI
10.1109/JPROC.2020.3021381
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808 ; 0809 ;
Abstract
Emerging applications of machine learning in areas such as online social networks, remote sensing, Internet-of-Things (IoT) systems, and smart grids involve continuous gathering of, and learning from, streams of data samples. Real-time incorporation of streaming data into the learned machine learning models is essential for improved inference in these applications. Furthermore, these applications often involve data that are either inherently gathered at geographically distributed entities for physical reasons, as in IoT systems and smart grids, or intentionally distributed across multiple computing machines for memory, storage, computational, and/or privacy reasons. Training machine learning models in this distributed, streaming setting requires solving stochastic optimization (SO) problems collaboratively over communication links between the physical entities. When the streaming data rate is high compared with the processing capabilities of individual computing entities and/or the rate of the communication links, a challenging question arises: How can one best leverage the incoming data for distributed training of machine learning models under constraints on computing capabilities and/or communication rate? A large body of research in distributed online optimization has emerged in recent decades to tackle this and related problems. This article reviews recently developed methods for large-scale distributed SO in the compute- and bandwidth-limited regimes, with an emphasis on convergence analyses that explicitly account for the mismatch between computation, communication, and streaming rates and that provide sufficient conditions for order-optimum convergence. In particular, it focuses on methods that solve: 1) distributed stochastic convex problems and 2) distributed principal component analysis, a nonconvex problem whose geometric structure nevertheless permits global convergence. For such methods, the article discusses recent advances in distributed algorithmic design in the face of high-rate streaming data. Furthermore, it reviews the theoretical guarantees underlying these methods, which show that there exist regimes in which systems can learn from distributed processing of streaming data at order-optimal rates, nearly as fast as if all the data were processed at a single, sufficiently powerful machine.
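The abstract describes distributed, streaming stochastic optimization only at a high level. The following is a minimal, illustrative sketch of that setting, not an algorithm taken from the article: it simulates M nodes that each draw a fresh minibatch of b samples from a local stream every round, average their local gradients (standing in for an all-reduce or consensus step over the communication links), and apply a single shared SGD update. The least-squares objective, the 1/sqrt(t) step size, and the assumption of perfect synchronization are choices made here only for concreteness.

import numpy as np

# Illustrative sketch (not from the article): distributed minibatch SGD on a
# streaming least-squares problem. Each round, every node draws b fresh samples
# from its own stream, computes a local gradient, and the nodes average their
# gradients (emulating an all-reduce/consensus step) before one model update.

rng = np.random.default_rng(0)
d, M, b, T = 10, 4, 32, 500          # dimension, nodes, per-node minibatch, rounds
w_true = rng.standard_normal(d)      # unknown model generating the data streams
w = np.zeros(d)                      # shared iterate (perfect synchronization assumed)

def stream_batch(n):
    """Draw n fresh (x, y) samples; stands in for one node's data stream."""
    X = rng.standard_normal((n, d))
    y = X @ w_true + 0.1 * rng.standard_normal(n)
    return X, y

for t in range(1, T + 1):
    local_grads = []
    for _ in range(M):               # each node processes its own local minibatch
        X, y = stream_batch(b)
        local_grads.append(X.T @ (X @ w - y) / b)
    g = np.mean(local_grads, axis=0) # network-wide gradient average (all-reduce)
    w -= (1.0 / np.sqrt(t)) * g      # generic O(1/sqrt(t)) step size for convex SO

print("estimation error:", np.linalg.norm(w - w_true))

In this sketch the network consumes an effective minibatch of M*b samples per round, which is the sense in which distributed processing can approach the performance of a single machine that sees all the data; the gradient-averaging step is where the compute- and communication-rate constraints analyzed in the article enter.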
Pages: 1984-2012
Number of pages: 29