Early Stopping for Iterative Regularization with General Loss Functions

Cited by: 0
Authors
Hu, Ting [1 ]
Lei, Yunwen [2 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Management, Ctr Intelligent Decis Making & Machine Learning, Xian, Peoples R China
[2] Hong Kong Baptist Univ, Dept Math, Kowloon, Hong Kong, Peoples R China
Keywords
iterative regularization; early stopping; reproducing kernel Hilbert spaces; stopping rule; cross-validation; BOOSTING ALGORITHMS; CONJUGATE-GRADIENT; CONVERGENCE; RATES; CLASSIFICATION; REGRESSION;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
In this paper, we investigate the early stopping strategy for iterative regularization, which performs gradient descent on convex loss functions in reproducing kernel Hilbert spaces without an explicit regularization term. We show that projecting the last iterate at the stopping time produces an estimator with improved generalization ability. Using upper bounds on the generalization error, we establish a close link between iterative regularization and the Tikhonov regularization scheme, and explain theoretically why the two schemes exhibit similar regularization paths in existing numerical simulations. We further introduce a data-dependent, cross-validation-based way to select the stopping time, and prove that this a posteriori selection achieves generalization errors comparable to those obtained by our stopping rules with a priori parameters.
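The abstract describes gradient descent in a reproducing kernel Hilbert space with the iteration count, rather than a penalty term, acting as the regularizer, and a cross-validation step to pick the stopping time. Below is a minimal sketch of that idea for the squared loss; the paper treats general convex losses, and the Gaussian kernel, step size, and single hold-out split used here are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_gd_path(K, y, step, T):
    # Run T steps of gradient descent on the empirical squared loss
    # (1/2n) * ||K @ alpha - y||^2 over the RKHS coefficients alpha,
    # with no explicit regularization term; return every iterate.
    n = len(y)
    alpha = np.zeros(n)
    path = []
    for _ in range(T):
        residual = K @ alpha - y
        alpha = alpha - step * residual / n
        path.append(alpha.copy())
    return path

def early_stop_by_cv(X, y, step=1.0, T=100, sigma=1.0, val_frac=0.3, seed=0):
    # Pick the stopping time on a held-out split (a simple stand-in for
    # full cross-validation) and return the corresponding iterate.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_val = int(val_frac * len(y))
    val, tr = idx[:n_val], idx[n_val:]
    K_tr = gaussian_kernel(X[tr], X[tr], sigma)
    K_val = gaussian_kernel(X[val], X[tr], sigma)
    path = kernel_gd_path(K_tr, y[tr], step, T)
    errs = [np.mean((K_val @ a - y[val]) ** 2) for a in path]
    t_star = int(np.argmin(errs))       # a posteriori stopping time
    return t_star + 1, path[t_star], tr
```

Running too few iterations underfits and running too many overfits the noise, so the validation error along the iteration path is the quantity the a posteriori rule minimizes, mirroring the role the regularization parameter plays in Tikhonov regularization.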
Pages: 36