MinMax Sampling: A Near-optimal Global Summary for Aggregation in the Wide Area

Cited by: 5
Authors
Zhao, Yikai [1 ,2 ,3 ]
Zhang, Yinda [1 ,2 ,3 ]
Li, Yuanpeng [1 ,2 ,3 ]
Zhou, Yi [1 ,2 ,3 ]
Chen, Chunhui [1 ,5 ]
Yang, Tong [1 ,2 ,3 ,4 ]
Cui, Bin [1 ,2 ,3 ]
Affiliations
[1] Peking Univ, Beijing, Peoples R China
[2] Peking Univ, Sch Comp Sci, Beijing, Peoples R China
[3] Peking Univ, Natl Engn Lab Big Data Anal Technol & Applicat, Beijing, Peoples R China
[4] Peng Cheng Lab, Shenzhen, Peoples R China
[5] Peking Univ, Sch Software & Microelect, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA (SIGMOD '22) | 2022
Funding
National Natural Science Foundation of China
Keywords
Sampling; Wide-Area Network; Federated Learning; FREQUENT
DOI
10.1145/3514221.3526160
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Nowadays, wide-area data analyses are pervasive in emerging geo-distributed systems. These analyses often need to perform global aggregation across the wide area. Since scarce and variable WAN bandwidth can degrade aggregation performance, it is highly desirable to design a communication scheme for global aggregation in WAN. Unfortunately, no existing algorithm meets the three design requirements of such communication schemes: fast computation, adaptive transmission, and accurate aggregation. In this paper, we propose MinMax Sampling, a fast, adaptive, and accurate communication scheme for global aggregation in WAN. We first focus on accuracy and design a scheme, namely MinMax(opt), that achieves optimal accuracy. However, MinMax(opt) does not meet the other two requirements: fast computation and adaptive transmission. Based on MinMax(opt), we propose MinMax(adp), which trades a little accuracy for the other two requirements. We evaluate MinMax(adp) with three applications: federated learning, distributed state aggregation, and hierarchical aggregation. Our experimental results show that MinMax(adp) is superior to existing algorithms (8.44x better accuracy on average) in all three applications. The source code of MinMax Sampling is available on GitHub [1].
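To make the setting concrete, the sketch below illustrates sampling-based global aggregation with classic priority sampling (Duffield et al.), a standard baseline for this problem. It is not the paper's MinMax algorithm; the function name, weights, and sample size are all illustrative. Each site keeps only a fixed-size sample of its weighted items, yet the sample still yields an unbiased estimate of the site's total, which a coordinator can then combine across sites.

```python
import random

def priority_sample(weights, k, seed=0):
    """Classic priority sampling: keep the k items with the highest priority
    q_i = w_i / u_i, where u_i ~ Uniform(0, 1]. The kept items plus the
    threshold tau give an unbiased estimate of sum(weights)."""
    rng = random.Random(seed)
    # 1 - random() maps [0, 1) to (0, 1], avoiding division by zero
    pri = sorted(((w / (1.0 - rng.random()), w) for w in weights), reverse=True)
    tau = pri[k][0] if len(pri) > k else 0.0   # (k+1)-th highest priority
    # Each kept item contributes max(w_i, tau); the expectation equals sum(weights)
    return [max(w, tau) for _, w in pri[:k]], tau

# A site summarizes 8 weighted items with a sample of size 4
terms, tau = priority_sample([5.0, 1.0, 3.0, 0.5, 2.0, 4.0, 0.1, 6.0], k=4)
estimate = sum(terms)   # unbiased estimate of the true sum 21.6
```

Averaging `estimate` over many random seeds converges to the true sum; per the abstract, MinMax Sampling targets this same fixed-budget summary setting but with near-optimal accuracy and adaptive transmission.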
Pages: 744 - 758 (15 pages)