A Direct Estimation of High Dimensional Stationary Vector Autoregressions

Cited: 0
Authors
Han, Fang [1 ]
Lu, Huanran [2 ]
Liu, Han [2 ]
Affiliations
[1] Johns Hopkins Univ, Dept Biostat, Baltimore, MD 21205 USA
[2] Princeton Univ, Dept Operat Res & Financial Engn, Princeton, NJ 08544 USA
Keywords
transition matrix; multivariate time series; vector autoregressive model; double asymptotic framework; linear program; LINEAR-REGRESSION; MATRIX ESTIMATION; SELECTION; LASSO; CONVERGENCE; MODELS; NOISY; RATES;
DOI
Not available
CLC number
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
The vector autoregressive (VAR) model is a powerful tool for learning complex time series and has been exploited in many fields. The VAR model poses some unique challenges: on one hand, the dimensionality, introduced by incorporating multiple time series and by the order of the autoregression, is usually much higher than the time series length; on the other hand, the temporal dependence structure naturally present in the VAR model gives rise to extra difficulties in data analysis. The standard approach to estimating the VAR model is via "least squares", usually with penalty terms (e.g., a ridge or lasso penalty) added to handle high dimensionality. In this manuscript, we propose an alternative way of estimating the VAR model. The main idea is to exploit the temporal dependence structure and formulate the estimation problem as a linear program. The proposed approach has an immediate advantage over lasso-type estimators: the estimating equation can be decomposed into multiple sub-equations and accordingly solved efficiently using parallel computing. Beyond that, we also bring new theoretical insights into VAR model analysis. The theoretical results developed so far in high dimensions (e.g., Song and Bickel, 2011 and Kock and Callot, 2015) rest on stringent assumptions that are not transparent. Our results, in contrast, show that the spectral norms of the transition matrices play an important role in estimation accuracy, and we establish estimation and prediction consistency accordingly. Moreover, we provide experiments on both synthetic and real-world equity data, showing empirical advantages of our method over lasso-type estimators in parameter estimation and forecasting.
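The abstract's key computational point is that a Dantzig-selector-style linear program for the transition matrix decomposes into one independent LP per column, so the columns can be solved in parallel. The sketch below illustrates that structure; the function names, the tuning parameter `lam`, and the use of generic inputs `S0`/`S1` (standing in for lag-0 and lag-1 sample autocovariance matrices) are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np
from scipy.optimize import linprog

def var_lp_column(S0, s1j, lam):
    """Estimate one column b of the transition matrix via the LP
        min ||b||_1  subject to  ||S0 @ b - s1j||_inf <= lam.
    Splitting b = bp - bm with bp, bm >= 0 makes the l1 objective linear."""
    d = S0.shape[0]
    c = np.ones(2 * d)                        # objective: sum(bp) + sum(bm)
    A_ub = np.block([[S0, -S0], [-S0, S0]])   # two-sided sup-norm constraint
    b_ub = np.concatenate([lam + s1j, lam - s1j])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    x = res.x
    return x[:d] - x[d:]

def var_lp(S0, S1, lam):
    """Full transition-matrix estimate: one independent LP per column of S1.
    This loop is embarrassingly parallel, which is the advantage the
    abstract highlights over lasso-type estimators."""
    return np.column_stack(
        [var_lp_column(S0, S1[:, j], lam) for j in range(S1.shape[1])]
    )
```

As a sanity check on the formulation: when `S0` is the identity, each column LP reduces to entrywise soft-thresholding of `s1j` at level `lam`.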
Pages: 3115-3150
Number of pages: 36
Related papers
43 records in total
[1]   CONVERGENCE PROPERTIES OF THE SPLINE FIT [J].
AHLBERG, JH ;
NILSON, EN .
JOURNAL OF THE SOCIETY FOR INDUSTRIAL AND APPLIED MATHEMATICS, 1963, 11 (01) :95-104
[2]  
[Anonymous], 2011, arXiv:1106.3915
[3]  
Bento J., 2010, ADV NEURAL INFORM PR, V1, P172
[4]   Regularized estimation of large covariance matrices [J].
Bickel, Peter J. ;
Levina, Elizaveta .
ANNALS OF STATISTICS, 2008, 36 (01) :199-227
[5]   SIMULTANEOUS ANALYSIS OF LASSO AND DANTZIG SELECTOR [J].
Bickel, Peter J. ;
Ritov, Ya'acov ;
Tsybakov, Alexandre B. .
ANNALS OF STATISTICS, 2009, 37 (04) :1705-1732
[6]   COVARIANCE REGULARIZATION BY THRESHOLDING [J].
Bickel, Peter J. ;
Levina, Elizaveta .
ANNALS OF STATISTICS, 2008, 36 (06) :2577-2604
[7]   Basic Properties of Strong Mixing Conditions. A Survey and Some Open Questions [J].
Bradley, Richard C. .
PROBABILITY SURVEYS, 2005, 2 :107-144
[8]   Predicting multivariate responses in multiple linear regression [J].
Breiman, L ;
Friedman, JH .
JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B-METHODOLOGICAL, 1997, 59 (01) :3-37
[9]   OPTIMAL RATES OF CONVERGENCE FOR COVARIANCE MATRIX ESTIMATION [J].
Cai, T. Tony ;
Zhang, Cun-Hui ;
Zhou, Harrison H. .
ANNALS OF STATISTICS, 2010, 38 (04) :2118-2144
[10]   A Constrained l1 Minimization Approach to Sparse Precision Matrix Estimation [J].
Cai, Tony ;
Liu, Weidong ;
Luo, Xi .
JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2011, 106 (494) :594-607