A New Minimax Theorem for Randomized Algorithms (Extended Abstract†)

Cited by: 5
Authors
Ben-David, Shalev [1 ]
Blais, Eric [1 ]
Affiliation
[1] Univ Waterloo, David R Cheriton Sch Comp Sci, Waterloo, ON, Canada
Source
2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS 2020) | 2020
Keywords
Minimax; Randomized computation; Quantum computation; Query complexity; Communication complexity; Polynomial degree complexity; Circuit complexity;
DOI
10.1109/FOCS46700.2020.00045
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
The celebrated minimax principle of Yao (1977) says that for any Boolean-valued function f with finite domain, there is a distribution μ over the domain of f such that computing f to error ε against inputs from μ is just as hard as computing f to error ε on worst-case inputs. Notably, however, the distribution μ depends on the target error level ε: the hard distribution that is tight for bounded error might be trivial to solve to small bias, and the hard distribution that is tight for a small bias level might be far from tight for bounded error levels. In this work, we introduce a new type of minimax theorem which can provide a hard distribution μ that works for all bias levels at once. We show that this holds for randomized query complexity, randomized communication complexity, some randomized circuit models, quantum query and communication complexities, approximate polynomial degree, and approximate log-rank. We also prove an improved version of Impagliazzo's hardcore lemma. Our proofs rely on two innovations over the classical approach of using von Neumann's minimax theorem or linear programming duality. First, we use Sion's minimax theorem to prove a minimax theorem for ratios of bilinear functions representing the cost and score of algorithms. Second, we introduce a new way to analyze low-bias randomized algorithms by viewing them as "forecasting algorithms" evaluated by a certain proper scoring rule. The expected score of the forecasting version of a randomized algorithm appears to be a more fine-grained way of analyzing the bias of the algorithm. We show that such expected scores have many elegant mathematical properties: for example, they can be amplified linearly instead of quadratically. We anticipate forecasting algorithms will find use in future work in which a fine-grained analysis of small-bias algorithms is required.
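Note: Yao's minimax principle referenced above is commonly stated, for randomized query complexity, as follows (a standard textbook formulation given here for orientation, not quoted from the paper):

    R_\epsilon(f) = \max_\mu D^\mu_\epsilon(f),

where R_\epsilon(f) is the worst-case query cost of the best randomized algorithm that computes f with error at most \epsilon on every input, and D^\mu_\epsilon(f) is the cost of the best deterministic algorithm whose error probability is at most \epsilon when the input is drawn from \mu. The new theorem described in the abstract seeks a single maximizing distribution \mu that remains hard simultaneously for all error levels \epsilon < 1/2. As a generic illustration of a proper scoring rule of the kind used to evaluate forecasting algorithms, one may take the Brier-type score S(p, b) = 1 - (p - b)^2 for a forecast p \in [0, 1] of an outcome b \in \{0, 1\}; a rule is proper when reporting the true probability maximizes the expected score. The specific scoring rule employed in the paper may differ from this example.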
Pages: 403-411
Number of pages: 9