Fast Rates of Gaussian Empirical Gain Maximization With Heavy-Tailed Noise

Cited: 10
Authors
Huang, Shouyou [1 ]
Feng, Yunlong [2 ]
Wu, Qiang [3 ]
Affiliations
[1] Hubei Normal Univ, Dept Math & Stat, Huangshi 435002, Hubei, Peoples R China
[2] Univ Albany, Dept Math & Stat, Albany, NY 12222 USA
[3] Middle Tennessee State Univ, Dept Math Sci, Murfreesboro, TN 37132 USA
Keywords
Convergence; Calibration; Approximation error; Robustness; Random variables; Kernel; Velocity measurement; Convergence rates; empirical gain maximization (EGM); Gauss gain function; heavy-tailed noise; robust regression; weak moment condition; CORRENTROPY; REGRESSION; FRAMEWORK
DOI
10.1109/TNNLS.2022.3171171
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In a regression setup, this brief studies the performance of Gaussian empirical gain maximization (EGM), a scheme that encompasses a broad variety of well-established robust estimation approaches. In particular, we conduct a refined learning theory analysis of Gaussian EGM, investigate its regression calibration properties, and establish improved convergence rates in the presence of heavy-tailed noise. To this end, we first introduce a new weak moment condition that accommodates heavy-tailed noise distributions. Based on this moment condition, we then develop a novel comparison theorem that characterizes the regression calibration properties of Gaussian EGM and plays an essential role in deriving the improved convergence rates. The present study thus broadens our theoretical understanding of Gaussian EGM.
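To make the estimation scheme concrete, the following is a minimal sketch of Gaussian EGM for a linear model: the estimator maximizes the empirical average of the gain exp(-r^2 / (2*sigma^2)) of the residuals r, which down-weights samples with large residuals and thereby resists heavy-tailed noise. The exact gain parameterization, the gradient-ascent solver, and the names fit_gaussian_egm, sigma, and lr are illustrative assumptions, not the brief's precise formulation.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression with heavy-tailed Student-t noise
# (df = 2, so the noise has infinite variance).
n = 200
X = rng.uniform(-1.0, 1.0, size=(n, 1))
w_true, b_true = 2.0, -0.5
y = w_true * X[:, 0] + b_true + rng.standard_t(df=2.0, size=n)

def gaussian_gain(r, sigma):
    # Gaussian gain (assumed form): near 1 for small residuals,
    # near 0 for large ones.
    return np.exp(-r**2 / (2.0 * sigma**2))

def fit_gaussian_egm(X, y, sigma=1.0, lr=0.5, n_iter=1000):
    # Hypothetical solver: gradient ascent on the empirical mean gain.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        r = y - (X @ w + b)                      # residuals
        coef = gaussian_gain(r, sigma) * r / sigma**2
        # Each sample's gradient contribution is weighted by its gain,
        # so gross outliers barely move the estimate.
        w += lr * (X.T @ coef) / len(y)
        b += lr * coef.mean()
    return w, b

w_hat, b_hat = fit_gaussian_egm(X, y)
print("estimated:", w_hat, b_hat, " true:", w_true, b_true)

As sigma grows, the update approaches an ordinary least-squares gradient step, while a small sigma concentrates the fit on low-residual samples; the brief's analysis concerns how such scale choices and weak moment conditions govern the attainable convergence rates.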
Pages: 6038-6043
Number of pages: 6