Identifying predictive hubs to condense the training set of k-nearest neighbour classifiers

Cited: 0
|
Authors
Ludwig Lausser
Christoph Müssel
Alexander Melkozerov
Hans A. Kestler
Affiliations
[1] University of Ulm,Research Group Bioinformatics and Systems Biology, Institute of Neural Information Processing
[2] Tomsk State University of Control Systems and Radioelectronics,Department of Television and Control
Keywords
k-Nearest neighbour; Classification; Genetic algorithm; Predictive hubs
DOI
10.1007/s00180-012-0379-0
Abstract
The k-Nearest Neighbour classifier is widely used and popular due to its inherent simplicity and the avoidance of model assumptions. Although the approach has been shown to yield a near-optimal classification performance for an infinite number of samples, a selection of the most decisive data points can improve the classification accuracy considerably in real settings with a limited number of samples. At the same time, a selection of a subset of representative training samples reduces the required amount of storage and computational resources. We devised a new approach that selects a representative training subset on the basis of an evolutionary optimization procedure. This method chooses those training samples that have a strong influence on the correct prediction of other training samples, in particular those that have uncertain labels. The performance of the algorithm is evaluated on different data sets. Additionally, we provide graphical examples of the selection procedure.
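The abstract's idea of condensing a k-NN training set by evolutionary search can be illustrated with a toy sketch. This is not the authors' algorithm: it is a minimal genetic search over binary subset masks, scoring each mask by 1-NN accuracy on the full training set with a small penalty on subset size. All function names and parameters (`pop`, `gens`, the mutation rate) are hypothetical choices for the example.

```python
import numpy as np

def knn_predict(train_X, train_y, X, k=1):
    # Majority vote among the k nearest training points (Euclidean distance).
    preds = []
    for x in X:
        d = np.linalg.norm(train_X - x, axis=1)
        nn = np.argsort(d)[:k]
        vals, counts = np.unique(train_y[nn], return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

def fitness(mask, X, y, k=1):
    # Accuracy of a k-NN classifier restricted to the selected subset,
    # evaluated on all training samples. Note: selected points trivially
    # classify themselves (distance zero), which inflates this score a bit.
    if mask.sum() < k:
        return 0.0
    preds = knn_predict(X[mask], y[mask], X, k)
    return (preds == y).mean()

def evolve(X, y, k=1, pop=20, gens=30, rng=None):
    # Toy genetic algorithm: truncation selection plus bit-flip mutation
    # over boolean subset masks; a small size penalty favours smaller subsets.
    rng = np.random.default_rng(rng)
    n = len(X)
    population = rng.random((pop, n)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(m, X, y, k) - 0.001 * m.sum()
                           for m in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[:pop // 2]]
        children = parents.copy()
        flip = rng.random(children.shape) < 1.0 / n  # per-bit mutation
        children ^= flip
        population = np.vstack([parents, children])
    final = np.array([fitness(m, X, y, k) for m in population])
    return population[np.argmax(final)]
```

On well-separated data, the search typically finds a much smaller subset that still classifies the full training set correctly; the paper's method differs in how sample influence (in particular on uncertainly labelled points) enters the objective.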
Pages: 81–95
Page count: 14