Information-Based Optimal Subdata Selection for Big Data Linear Regression

Cited by: 160
Authors
Wang, HaiYing [1 ]
Yang, Min [2 ]
Stufken, John [3 ]
Affiliations
[1] Univ Connecticut, Dept Stat, Mansfield, CT USA
[2] Univ Illinois, Dept Math Stat & Comp Sci, Chicago, IL USA
[3] Arizona State Univ, Sch Math & Stat Sci, Tempe, AZ 85287 USA
Funding
National Science Foundation (USA);
Keywords
D-optimality; Information matrix; Linear regression; Massive data; Subdata;
DOI
10.1080/01621459.2017.1408468
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline Codes
020208; 070103; 0714;
Abstract
Extraordinary amounts of data are being produced in many branches of science. Proven statistical methods are no longer applicable with extraordinarily large datasets due to computational limitations. A critical step in big data analysis is data reduction. Existing investigations in the context of linear regression focus on subsampling-based methods. However, not only is this approach prone to sampling errors, it also leads to a covariance matrix of the estimators that is typically bounded from below by a term that is of the order of the inverse of the subdata size. We propose a novel approach, termed information-based optimal subdata selection (IBOSS). Compared to leading existing subdata methods, the IBOSS approach has the following advantages: (i) it is significantly faster; (ii) it is suitable for distributed parallel computing; (iii) the variances of the slope parameter estimators converge to 0 as the full data size increases even if the subdata size is fixed, that is, the convergence rate depends on the full data size; (iv) data analysis for IBOSS subdata is straightforward and the sampling distribution of an IBOSS estimator is easy to assess. Theoretical results and extensive simulations demonstrate that the IBOSS approach is superior to subsampling-based methods, sometimes by orders of magnitude. The advantages of the new approach are also illustrated through analysis of real data. Supplementary materials for this article are available online.
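To make the idea concrete, below is a minimal Python sketch of extreme-value subdata selection in the spirit of IBOSS, followed by ordinary least squares on the selected rows. This is a hypothetical illustration, not the authors' reference implementation: the function name `iboss_subdata`, the per-tail allocation `r = k // (2p)`, and the simulated Gaussian design are all assumptions made here for demonstration.

```python
import numpy as np

def iboss_subdata(X, y, k):
    """Select roughly k rows by taking, for each covariate in turn,
    the still-available rows with the r smallest and r largest values
    of that covariate (a D-optimality-motivated heuristic)."""
    n, p = X.shape
    r = k // (2 * p)                      # rows per tail per covariate
    available = np.ones(n, dtype=bool)    # rows not yet selected
    chosen = []
    for j in range(p):
        idx = np.where(available)[0]      # candidate row indices
        order = np.argsort(X[idx, j])     # sort candidates by covariate j
        picked = np.concatenate([idx[order[:r]], idx[order[-r:]]])
        chosen.append(picked)
        available[picked] = False         # remove picked rows from the pool
    sel = np.concatenate(chosen)
    return X[sel], y[sel], sel

# Simulated full data: n = 100,000 rows, p = 5 covariates.
rng = np.random.default_rng(0)
n, p = 100_000, 5
X = rng.standard_normal((n, p))
beta = np.arange(1, p + 1, dtype=float)   # true slopes 1..5, intercept 0
y = X @ beta + rng.standard_normal(n)

# Keep only k = 1000 rows, then fit OLS with an intercept on the subdata.
Xs, ys, sel = iboss_subdata(X, y, k=1000)
Xs1 = np.column_stack([np.ones(len(ys)), Xs])
beta_hat = np.linalg.lstsq(Xs1, ys, rcond=None)[0]
```

Because the selected rows sit in the tails of each covariate, the subdata design matrix carries far more information per row than a uniform subsample of the same size, which is the intuition behind advantage (iii) in the abstract.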
Pages: 393-405
Page count: 13
Related Papers
50 entries
  • [31] Information-based Parameterization of the Log-linear Model for Categorical Data Analysis
    Valérie Girardin
    Justine Lequesne
    Anne Ricordeau
    Methodology and Computing in Applied Probability, 2018, 20 : 1105 - 1121
  • [32] THE BOOTSTRAP-BASED SELECTION CRITERIA: AN OPTIMAL CHOICE FOR MODEL SELECTION IN LINEAR REGRESSION
    Shang, Junfeng
    ADVANCES AND APPLICATIONS IN STATISTICS, 2010, 14 (02) : 173 - 189
  • [33] Optimal subsampling for quantile regression in big data
    Wang, Haiying
    Ma, Yanyuan
    BIOMETRIKA, 2021, 108 (01) : 99 - 112
  • [34] ORTHOGONAL SUBSAMPLING FOR BIG DATA LINEAR REGRESSION
    Wang, Lin
    Elmstedt, Jake
    Wong, Weng Kee
    Xu, Hongquan
    ANNALS OF APPLIED STATISTICS, 2021, 15 (03): 1273 - 1290
  • [35] Information-Based Node Selection for Joint PCA and Compressive Sensing-Based Data Aggregation
    Imanian, Gholamreza
    Pourmina, Mohammad Ali
    Salahi, Ahmad
    WIRELESS PERSONAL COMMUNICATIONS, 2021, 118 (02) : 1635 - 1654
  • [37] Research on the Application of Information-Based Big Data Fusion Technology in College English Education Design
    Chen, Yi
    Huang, Jianwen
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022
  • [38] Genetic Algorithm for the Mutual Information-Based Feature Selection in Univariate Time Series Data
    Siddiqi, Umair F.
    Sait, Sadiq M.
    Kaynak, Okyay
    IEEE ACCESS, 2020, 8 (08): 9597 - 9609
  • [39] A statistic channel information-based relay selection scheme
    Wu, Su-Wen
    Lü, Xing-Zai
    Zhu, Jin-Kang
    Deng, Dan
    Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2009, 31 (05): 1077 - 1081
  • [40] Stopping rules for mutual information-based feature selection
    Mielniczuk, Jan
    Teisseyre, Pawel
    NEUROCOMPUTING, 2019, 358 : 255 - 274