Statistical Mechanics of Optimal Convex Inference in High Dimensions

Cited by: 37
Authors:
Advani, Madhu [1]
Ganguli, Surya [1]
Affiliation:
[1] Stanford Univ, Dept Appl Phys, Stanford, CA 94305 USA
Source:
PHYSICAL REVIEW X | 2016, Vol. 6, Issue 03
Keywords:
BIG DATA; NETWORKS
DOI:
10.1103/PhysRevX.6.031034
CLC number:
O4 [Physics]
Discipline code:
0702
Abstract:
A fundamental problem in modern high-dimensional data analysis involves efficiently inferring a set of P unknown model parameters governing the relationship between the inputs and outputs of N noisy measurements. Various methods have been proposed to regress the outputs against the inputs to recover the P parameters. What are fundamental limits on the accuracy of regression, given finite signal-to-noise ratios, limited measurements, prior information, and computational tractability requirements? How can we optimally combine prior information with measurements to achieve these limits? Classical statistics gives incisive answers to these questions as the measurement density alpha = N/P -> infinity. However, these classical results are not relevant to modern high-dimensional inference problems, which instead occur at finite alpha. We employ replica theory to answer these questions for a class of inference algorithms, known in the statistics literature as M-estimators. These algorithms attempt to recover the P model parameters by solving an optimization problem involving minimizing the sum of a loss function that penalizes deviations between the data and model predictions, and a regularizer that leverages prior information about model parameters. Widely cherished algorithms like maximum likelihood (ML) and maximum a posteriori (MAP) inference arise as special cases of M-estimators. Our analysis uncovers fundamental limits on the inference accuracy of a subclass of M-estimators corresponding to computationally tractable convex optimization problems. These limits generalize classical statistical theorems like the Cramér-Rao bound to the high-dimensional setting with prior information. We further discover the optimal M-estimator for log-concave signal and noise distributions; we demonstrate that it can achieve our high-dimensional limits on inference accuracy, while ML and MAP cannot.
Intriguingly, in high dimensions, these optimal algorithms become computationally simpler than ML and MAP while still outperforming them. For example, such optimal M-estimation algorithms can lead to as much as a 20% reduction in the amount of data to achieve the same performance relative to MAP. Moreover, we demonstrate a prediction of replica theory that no inference procedure whatsoever can outperform our optimal M-estimation procedure when signal and noise distributions are log-concave, by uncovering an equivalence between optimal M-estimation and optimal Bayesian inference in this setting. Our analysis also reveals insights into the nature of generalization and predictive power in high dimensions, information theoretic limits on compressed sensing, phase transitions in quadratic inference, and connections to central mathematical objects in convex optimization theory and random matrix theory.
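The M-estimators analyzed in the abstract solve a convex program of the form w_hat = argmin_w sum_i rho(y_i - x_i . w) + sum_j sigma(w_j), where rho is the loss and sigma the regularizer. Below is a minimal illustrative sketch, not code from the paper: the helper name m_estimate and the toy data are assumptions, and the quadratic choice of rho and sigma corresponds to the MAP/ridge special case for Gaussian signal and noise.

```python
import numpy as np
from scipy.optimize import minimize


def m_estimate(X, y, rho, sigma, lam=1.0):
    """Generic M-estimator:
    w_hat = argmin_w  sum_i rho(y_i - x_i.w) + lam * sum_j sigma(w_j).
    rho and sigma are vectorized convex functions of their argument.
    """
    P = X.shape[1]

    def objective(w):
        return np.sum(rho(y - X @ w)) + lam * np.sum(sigma(w))

    res = minimize(objective, np.zeros(P), method="L-BFGS-B")
    return res.x


# Toy problem at measurement density alpha = N/P = 4 (illustrative values).
rng = np.random.default_rng(0)
N, P = 200, 50
w_true = rng.normal(size=P)                  # Gaussian signal
X = rng.normal(size=(N, P)) / np.sqrt(N)     # random measurement matrix
y = X @ w_true + 0.1 * rng.normal(size=N)    # Gaussian noise

quad = lambda u: 0.5 * u ** 2                # quadratic loss and regularizer
w_hat = m_estimate(X, y, rho=quad, sigma=quad, lam=0.01)
print(np.mean((w_hat - w_true) ** 2))        # per-parameter squared error
```

For Gaussian signal and noise this quadratic choice is already optimal; the paper's point is that for other log-concave distributions the optimal (rho, sigma) pair differs from the MAP choice, and the same solver sketch applies with those functions swapped in.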
Pages: 16