Model-Free Subsampling Method Based on Uniform Designs

Cited by: 6
Authors
Zhang, Mei [1 ]
Zhou, Yongdao [2 ,3 ]
Zhou, Zheng [3 ]
Zhang, Aijun [4 ]
Affiliations
[1] Southwest Minzu Univ, Coll Math, Chengdu 610225, Peoples R China
[2] Nankai Univ, Sch Stat & Data Science, LPMC, Tianjin 300071, Peoples R China
[3] Nankai Univ, KLMDASR, Tianjin 300071, Peoples R China
[4] Wells Fargo, Corp Model Risk, Charlotte, NC 28202 USA
Keywords
Kernel; Data models; Numerical models; Hypercubes; Computational modeling; Big Data; Statistical learning; empirical F-discrepancy; generalized L2-discrepancy; Koksma-Hlawka inequality; model-free subsampling; reproducing kernel; discrepancy
DOI
10.1109/TKDE.2023.3297167
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Subsampling, or subdata selection, is a useful approach in large-scale statistical learning. Most existing studies focus on model-based subsampling methods, which depend heavily on the model assumption. In this paper, we consider a model-free subsampling strategy for generating subdata from the original full data. To measure how well a subdata set represents the original data, we propose a criterion, the generalized empirical F-discrepancy (GEFD), and study its theoretical properties in connection with the classical generalized L2-discrepancy from the theory of uniform designs. These properties allow us to develop a low-GEFD, data-driven subsampling method built on existing uniform designs. Through simulation examples and a real case study, we show that the proposed subsampling method is superior to random sampling. Moreover, our method remains robust under diverse model specifications, whereas other popular subsampling methods underperform. In practice, this model-free property is more appealing than model-based subsampling, which may perform poorly when the model is misspecified, as demonstrated in our simulation studies.
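To make the idea concrete, below is a minimal Python sketch of design-guided, model-free subsampling: the full data are rescaled to the unit hypercube, matched to the points of a low-discrepancy design, and the uniformity of the resulting subdata is checked with the centered L2-discrepancy. This is an illustration of the general idea only, not the authors' exact low-GEFD algorithm: the scrambled Sobol set stands in for a uniform design, the centered L2-discrepancy stands in for the GEFD criterion, and the function name design_based_subsample is invented for this example.

import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import qmc


def design_based_subsample(X, n_sub, seed=0):
    """Pick rows of X whose rescaled values lie closest to the points of a
    low-discrepancy (scrambled Sobol) design in the unit hypercube."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    U = (X - lo) / np.where(hi > lo, hi - lo, 1.0)  # map data into [0, 1]^d

    # Surrogate for a uniform design: a scrambled Sobol point set.
    design = qmc.Sobol(d=X.shape[1], scramble=True, seed=seed).random(n_sub)

    # For each design point, take the nearest data point; duplicates are
    # merged, so the returned subdata may be slightly smaller than n_sub.
    _, idx = cKDTree(U).query(design)
    return np.unique(idx)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.lognormal(size=(100_000, 3))        # skewed full data
    idx = design_based_subsample(X, n_sub=256)  # 256 = 2**8 keeps Sobol balanced

    # Diagnostic: centered L2-discrepancy of the rescaled subdata
    # (smaller values indicate more uniform coverage of the unit cube).
    lo, hi = X.min(axis=0), X.max(axis=0)
    U_sub = (X[idx] - lo) / (hi - lo)
    print(len(idx), qmc.discrepancy(U_sub, method="CD"))

Replacing the Sobol set with a tailored uniform design and the diagnostic discrepancy with the GEFD criterion would recover the spirit of the proposed method.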
Pages: 1210-1220
Number of pages: 11