DISTRIBUTED LINEAR REGRESSION BY AVERAGING

Citations: 29
Authors
Dobriban, Edgar [1]
Sheng, Yue [2 ]
Affiliations
[1] Univ Penn, Dept Stat, Philadelphia, PA 19104 USA
[2] Univ Penn, Grad Grp Appl Math & Computat Sci, Dept Math, Philadelphia, PA 19104 USA
Keywords
Linear regression; distributed learning; parallel computation; random matrix theory; high-dimensional; divide-and-conquer; quantile regression; framework
DOI
10.1214/20-AOS1984
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes
020208; 070103; 0714
Abstract
Distributed statistical learning problems arise commonly when dealing with large datasets. In this setup, datasets are partitioned over machines, which compute locally and communicate short messages; communication is often the bottleneck. In this paper, we study one-step and iterative weighted parameter averaging in statistical linear models under data parallelism. We do linear regression on each machine, send the results to a central server, and take a weighted average of the parameters. Optionally, we iterate, sending back the weighted average and doing local ridge regressions centered at it. How does this compare to doing linear regression on the full data? Here, we study the performance loss in estimation error, test error, and confidence interval length in high dimensions, where the number of parameters is comparable to the training data size. We quantify the performance loss of one-step weighted averaging and also give results for iterative averaging. We also find that different problems are affected differently by the distributed framework: estimation error and confidence interval length increase substantially, while prediction error increases much less. We rely on recent results from random matrix theory, and we develop a new calculus of deterministic equivalents as a tool of broader interest.
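As a rough illustration of the scheme described in the abstract, the following is a minimal NumPy sketch of one-step weighted averaging and of the iterative variant with local ridge regressions centered at the current average. This is an assumed implementation, not the authors' code: the uniform default weights, the penalty parameter lam, the iteration count, and all function names are illustrative choices; the paper itself derives the weights and analyzes the high-dimensional behavior.

```python
import numpy as np

def local_ols(X, y):
    """Ordinary least squares on one machine's shard."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def one_step_average(shards, weights=None):
    """One-step distributed estimator: weighted average of local OLS fits.

    `weights` defaults to uniform here; the paper derives the weights.
    """
    betas = [local_ols(X, y) for X, y in shards]
    if weights is None:
        weights = np.full(len(betas), 1.0 / len(betas))
    return sum(w * b for w, b in zip(weights, betas))

def iterative_average(shards, lam=1.0, n_iter=5):
    """Iterative variant: each round, every machine solves a ridge
    regression centered at the current global average,
        argmin_b ||y - X b||^2 + lam * ||b - beta||^2,
    and the server averages the local solutions."""
    p = shards[0][0].shape[1]
    beta = np.zeros(p)
    for _ in range(n_iter):
        local_fits = []
        for X, y in shards:
            # Normal equations of the centered ridge problem
            A = X.T @ X + lam * np.eye(p)
            local_fits.append(np.linalg.solve(A, X.T @ y + lam * beta))
        beta = np.mean(local_fits, axis=0)
    return beta

# Toy usage: split a simulated dataset across k = 4 machines.
rng = np.random.default_rng(0)
n, p, k = 1200, 50, 4
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = X @ beta_true + rng.standard_normal(n)
shards = [(X[i::k], y[i::k]) for i in range(k)]

beta_dist = one_step_average(shards)
beta_full = local_ols(X, y)  # OLS on the pooled data, for comparison
print(np.linalg.norm(beta_dist - beta_true),
      np.linalg.norm(beta_full - beta_true))
```

In this toy setup each shard has n/k = 300 observations and p = 50 parameters, so the local aspect ratio is six times that of the pooled fit; the paper quantifies exactly how much estimation efficiency this regime costs.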
Pages: 918-943 (26 pages)