The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling
Authors:
Tosh, Christopher [1]
Dasgupta, Sanjoy [2]
Affiliations:
[1] Columbia Univ, New York, NY 10027 USA
[2] Univ Calif San Diego, San Diego, CA USA
Source:
Conference on Learning Theory (COLT), Vol. 99, 2019
Funding:
US National Science Foundation
Keywords:
Hardness; reductions; sampling
DOI:
not available
CLC classification:
TP18 [Artificial Intelligence Theory]
Discipline codes:
081104; 0812; 0835; 1405
Abstract:
We prove that, for a broad range of problems, maximum-a-posteriori (MAP) estimation and approximate sampling of the posterior are at least as computationally difficult as maximum-likelihood (ML) estimation. By way of illustration, we show how hardness results for ML estimation of mixtures of Gaussians and topic models carry over to MAP estimation and approximate sampling under commonly used priors.
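The abstract contrasts maximum-likelihood (ML) estimation, MAP estimation, and posterior sampling. As a hedged illustration of the ML/MAP distinction only (a toy example not drawn from the paper; all function names here are hypothetical), consider estimating a Gaussian mean with known variance under a Gaussian prior, where the MAP estimate is a precision-weighted shrinkage of the ML estimate toward the prior mean:

```python
import numpy as np

def ml_estimate(x):
    """ML estimate of a Gaussian mean: the sample mean."""
    return float(np.mean(x))

def map_estimate(x, sigma2, mu0, tau2):
    """MAP estimate under a N(mu0, tau2) prior on the mean:
    the posterior mode, a precision-weighted average of the
    sample mean and the prior mean (shrinkage toward mu0)."""
    n = len(x)
    precision_data = n / sigma2     # precision contributed by the data
    precision_prior = 1.0 / tau2    # precision contributed by the prior
    return (precision_data * np.mean(x) + precision_prior * mu0) / (
        precision_data + precision_prior
    )

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=20)
print(ml_estimate(x))                  # sample mean
print(map_estimate(x, 1.0, 0.0, 0.5))  # shrunk toward the prior mean 0
```

As the prior variance `tau2` grows, the prior precision vanishes and the MAP estimate converges to the ML estimate; the paper's point is that, despite this closeness, MAP estimation and approximate posterior sampling are computationally no easier than ML estimation.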
Pages: 43