A Note on the k-NN Density Estimate

Cited by: 0
Authors
Ding, Jie [1 ,2 ,3 ]
Zhu, Xinshan [4 ]
Affiliations
[1] Yangzhou Univ, Sch Informat Engn, Yangzhou 225127, Jiangsu, Peoples R China
[2] Beihang Univ, State Key Lab Software Dev Environm, Beijing, Peoples R China
[3] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Jiangsu, Peoples R China
[4] Tianjin Univ, Sch Elect Engn & Automat, Tianjin 300072, Peoples R China
Source
INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING - IDEAL 2016 | 2016, Vol. 9937
Keywords
k-NN density estimate; Equivalence; L2 convergence; Consistency; L1
DOI
10.1007/978-3-319-46257-8_9
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The k-NN (k nearest neighbour) density estimate is a nonparametric estimation method widely used in machine learning and data analysis. The convergence of the k-NN estimator has been intensively investigated; in particular, the equivalence of weak and strong convergence (i.e., convergence in probability and almost sure convergence) has been established. In this note, we show that convergence of the k-NN estimator in probability is equivalent to its convergence in the L2 sense. Moreover, some asymptotic results on the expectation of the k-NN estimator are established.
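For context, the classical k-NN density estimator discussed in the abstract is f_n(x) = k / (n * V_d * R_k(x)^d), where R_k(x) is the distance from x to its k-th nearest sample point and V_d is the volume of the d-dimensional unit ball. Below is a minimal illustrative sketch of this formula in Python; it is not the authors' code, and the function name knn_density is a hypothetical choice.

```python
import numpy as np
from math import gamma, pi

def knn_density(x, sample, k):
    """Classical k-NN density estimate: f_n(x) = k / (n * V_d * R_k(x)^d)."""
    sample = np.asarray(sample, dtype=float)
    n, d = sample.shape
    # Euclidean distance from the query point x to every sample point.
    dists = np.linalg.norm(sample - np.asarray(x, dtype=float), axis=1)
    # R_k(x): the k-th smallest distance (distance to the k-th nearest neighbour).
    r_k = np.partition(dists, k - 1)[k - 1]
    # V_d: volume of the unit ball in R^d.
    v_d = pi ** (d / 2) / gamma(d / 2 + 1)
    return k / (n * v_d * r_k ** d)

# Example: estimate the standard normal density at the origin.
rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 1))
print(knn_density([0.0], data, k=30))  # roughly 1/sqrt(2*pi) ~ 0.3989
```

The note's results concern how this estimator behaves as n grows and k = k(n) increases suitably, so that R_k(x) shrinks and f_n(x) converges to the true density.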
Pages: 79-88
Number of pages: 10