Insights Into the Robustness of Minimum Error Entropy Estimation

Cited by: 75
Authors
Chen, Badong [1 ]
Xing, Lei [1 ]
Xu, Bin [2 ]
Zhao, Haiquan [3 ]
Principe, Jose C. [1 ,4 ]
Affiliations
[1] Xi An Jiao Tong Univ, Inst Artificial Intelligence & Robot, Xian 710049, Peoples R China
[2] Northwestern Polytech Univ, Sch Automat, Xian 710000, Shaanxi, Peoples R China
[3] Southwest Jiaotong Univ, Sch Elect Engn, Chengdu 611756, Sichuan, Peoples R China
[4] Univ Florida, Dept Elect & Comp Engn, Gainesville, FL 32611 USA
Keywords
Estimation; minimum error entropy (MEE); robustness; CRITERION; MINIMIZATION; ALGORITHMS;
DOI
10.1109/TNNLS.2016.2636160
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The minimum error entropy (MEE) is an important and highly effective optimization criterion in information theoretic learning (ITL). For regression problems, MEE aims to minimize the entropy of the prediction error so that the estimated model preserves as much information about the data-generating system as possible. In many real-world applications, the MEE estimator can significantly outperform the well-known minimum mean square error (MMSE) estimator and shows strong robustness to noise, especially when the data are contaminated by non-Gaussian (multimodal, heavy-tailed, discrete-valued, and so on) noise. In this brief, we present some theoretical results on the robustness of MEE. For a one-parameter linear errors-in-variables (EIV) model and under some conditions, we derive a region that contains the MEE solution, which suggests that the MEE estimate can be very close to the true value of the unknown parameter even in the presence of arbitrarily large outliers in both input and output variables. The theoretical prediction is verified by an illustrative example.
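As a minimal illustration of the idea described in the abstract (not code from the paper), the sketch below estimates the slope of a one-parameter linear model by minimizing Rényi's quadratic entropy of the error, implemented as maximization of the Parzen-window information potential with a Gaussian kernel. The function names, kernel bandwidth, grid search, and toy data are all illustrative assumptions; a comparison with the least-squares (MMSE) solution under outlier-contaminated noise shows the kind of robustness the paper analyzes.

```python
import numpy as np

def information_potential(e, sigma=1.0):
    """Parzen estimate of the quadratic information potential V(e):
    the mean Gaussian kernel over all pairwise error differences.
    Maximizing V is equivalent to minimizing Renyi's quadratic entropy."""
    d = e[:, None] - e[None, :]          # all pairwise differences e_i - e_j
    return np.mean(np.exp(-d**2 / (4.0 * sigma**2)))

def mee_estimate(x, y, sigma=1.0, grid=None):
    """Illustrative MEE slope estimate for e = y - w*x via grid search
    (a simple stand-in for the gradient-based adaptation used in practice)."""
    if grid is None:
        grid = np.linspace(-5.0, 5.0, 1001)
    vals = [information_potential(y - w * x, sigma) for w in grid]
    return grid[int(np.argmax(vals))]

# Toy EIV-like setup: true slope 2, with 10% large outliers in the output noise.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200)
noise = rng.normal(0.0, 0.1, 200)
noise[:20] += rng.normal(0.0, 20.0, 20)  # heavy-tailed contamination
y = 2.0 * x + noise

w_mee = mee_estimate(x, y, sigma=0.5)
w_mmse = np.sum(x * y) / np.sum(x * x)   # closed-form least-squares (MMSE) slope
print(f"MEE: {w_mee:.3f}, MMSE: {w_mmse:.3f}")
```

Because entropy is insensitive to where the bulk of the errors sits, the information potential peaks where most errors collapse to a common value, so the few large outliers barely shift the MEE estimate, whereas they bias the least-squares solution directly.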
Pages: 731-737
Page count: 7
Related Papers
26 items in total
[1]   Stochastic gradient algorithm under (h,φ)-entropy criterion [J].
Chen, B. ;
Hu, J. ;
Pu, L. ;
Sun, Z. .
CIRCUITS SYSTEMS AND SIGNAL PROCESSING, 2007, 26 (06) :941-960
[2]  
Chen B, 2013, ELSEV INSIGHT, P1
[3]   Insights into Entropy as a Measure of Multivariate Variability [J].
Chen, Badong ;
Wang, Jianji ;
Zhao, Haiquan ;
Principe, Jose C. .
ENTROPY, 2016, 18 (05)
[4]   Kernel minimum error entropy algorithm [J].
Chen, Badong ;
Yuan, Zejian ;
Zheng, Nanning ;
Principe, Jose C. .
NEUROCOMPUTING, 2013, 121 :160-169
[5]   Survival Information Potential: A New Criterion for Adaptive System Training [J].
Chen, Badong ;
Zhu, Pingping ;
Principe, Jose C. .
IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2012, 60 (03) :1184-1194
[6]   Mean-Square Convergence Analysis of ADALINE Training With Minimum Error Entropy Criterion [J].
Chen, Badong ;
Zhu, Yu ;
Hu, Jinchun .
IEEE TRANSACTIONS ON NEURAL NETWORKS, 2010, 21 (07) :1168-1179
[7]  
de Sa J. P. M., 2013, MINIMUM ERROR ENTROPY
[8]   Generalized information potential criterion for adaptive system training [J].
Erdogmus, D ;
Principe, JC .
IEEE TRANSACTIONS ON NEURAL NETWORKS, 2002, 13 (05) :1035-1044
[9]   An error-entropy minimization algorithm for supervised training of nonlinear adaptive systems [J].
Erdogmus, D ;
Principe, JC .
IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2002, 50 (07) :1780-1786
[10]   From linear adaptive filtering to nonlinear information processing [J].
Erdogmus, Deniz ;
Principe, Jose C. .
IEEE SIGNAL PROCESSING MAGAZINE, 2006, 23 (06) :14-33