The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling

Cited by: 0
Authors
Tosh, Christopher [1 ]
Dasgupta, Sanjoy [2 ]
Affiliations
[1] Columbia University, New York, NY 10027, USA
[2] University of California San Diego, San Diego, CA, USA
Source
Conference on Learning Theory, Vol. 99, 2019
Funding
U.S. National Science Foundation
Keywords
Hardness; reductions; sampling
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We prove that, for a broad range of problems, maximum-a-posteriori (MAP) estimation and approximate sampling of the posterior are at least as computationally difficult as maximum-likelihood (ML) estimation. By way of illustration, we show how hardness results for ML estimation of mixtures of Gaussians and topic models carry over to MAP estimation and approximate sampling under commonly used priors.
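As a hedged illustration (this sketch is not from the paper, whose actual reductions for commonly used priors may be quite different), one standard observation already links the three problems: under a flat prior the posterior is proportional to the likelihood, so MAP estimation coincides with ML estimation, and tempering the likelihood ties approximate sampling to ML in a similar way.

% A minimal sketch, assuming a flat prior \pi(\theta) \equiv c:
% the posterior is proportional to the likelihood, so MAP = ML.
\[
  \hat{\theta}_{\mathrm{MAP}}
  = \arg\max_{\theta}\, \pi(\theta)\, L(\theta; x)
  = \arg\max_{\theta}\, L(\theta; x)
  = \hat{\theta}_{\mathrm{ML}}.
\]
% Tempering (raising the likelihood to an inverse temperature \beta)
% concentrates posterior mass near ML solutions as \beta grows:
\[
  p_{\beta}(\theta \mid x) \;\propto\; \pi(\theta)\, L(\theta; x)^{\beta},
\]
% so, assuming \pi is bounded with full support and the ML maximizer
% is unique, an efficient approximate sampler for p_{\beta} at large
% \beta would yield an efficient approximate ML estimator.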
Pages: 43