Input information maximization for improving self-organizing maps

Cited by: 1
Authors
Kamimura, Ryotaro [1 ,2 ]
Affiliations
[1] Tokai Univ, IT Educ Ctr, Isehara, Kanagawa 2591292, Japan
[2] Tokai Univ, Sch Sci & Technol, Isehara, Kanagawa 2591292, Japan
Keywords
SOM; Variable selection; Information-theoretic; Information enhancement; MAGNIFICATION CONTROL; NEURAL-NETWORK; NEIGHBORHOOD PRESERVATION; TOPOLOGY PRESERVATION; MUTUAL INFORMATION; FEATURE-SELECTION; ENTROPY; CLASSIFIER;
DOI
10.1007/s10489-014-0525-1
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we propose a new type of information-theoretic method called "input information maximization" to improve the performance of self-organizing maps. We consider the outputs of input neurons by focusing on winning neurons: each output is based on the difference between an input neuron and the corresponding winning neuron. We then compute the uncertainty of the input neurons by normalizing these outputs. Input information is defined as the decrease in the uncertainty of the input neurons from its maximum to the observed value. As input information increases, fewer input neurons tend to be activated; in the maximum state, only one input neuron is on and all others are off. We applied the method to two data sets, namely the Senate and voting attitude data sets. In both cases, the experimental results confirmed that quantization and topographic errors decreased as input information increased. In addition, a clearer class structure could be extracted by increasing input information. Compared with our previous methods for detecting the importance of input neurons, the present method proved better at producing faithful representations, with much simpler computational procedures.
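The quantity described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: it assumes a Gaussian output function of the difference between each input component and the winning neuron's weight (the specific output function and any scaling parameters are assumptions), normalizes those outputs into probabilities, and measures input information as the drop of their entropy from the maximum, log(L), for L input neurons.

```python
import numpy as np

def input_information(x, w_winner):
    """Sketch of 'input information' for one input vector.

    x        : input vector, one value per input neuron
    w_winner : weight vector of the winning SOM neuron

    The Gaussian output function below is an assumption for
    illustration; the paper's actual definition may differ.
    """
    L = len(x)
    out = np.exp(-(x - w_winner) ** 2)     # assumed output of each input neuron
    p = out / out.sum()                    # normalized firing probabilities
    entropy = -np.sum(p * np.log(p + 1e-12))
    return np.log(L) - entropy             # maximum minus observed uncertainty

# When all input neurons respond equally, input information is zero;
# when a single input neuron dominates, it approaches log(L).
print(input_information(np.zeros(4), np.zeros(4)))                  # uniform case
print(input_information(np.array([5.0, 5.0, 5.0, 0.0]), np.zeros(4)))  # one neuron dominant
```

This matches the abstract's qualitative claim: higher input information corresponds to fewer activated input neurons, with the maximum reached when exactly one input neuron is on.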
Pages: 421-438 (18 pages)