A Clustering Method Based on the Maximum Entropy Principle

Cited by: 36
Authors
Aldana-Bobadilla, Edwin [1 ]
Kuri-Morales, Angel [2 ]
Affiliations
[1] Univ Nacl Autonoma Mexico, Inst Invest Matemat Aplicadas & Sistemas, Mexico City 04510, DF, Mexico
[2] Inst Tecnol Autonomo Mexico, Mexico City 01080, DF, Mexico
Keywords
clustering; Shannon's entropy; genetic algorithms; information; optimization; number; validation; algorithm
DOI
10.3390/e17010151
Chinese Library Classification (CLC)
O4 [Physics]
Discipline Code
0702
Abstract
Clustering is an unsupervised process for determining which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They rest on the assumption that a cluster is a subset with the minimal possible degree of "disorder" and therefore attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. This method explores the space of all possible probability distributions of the data to find the one that maximizes the entropy subject to extra conditions derived from prior information about the clusters. The prior information rests on the assumption that the elements of a cluster are "similar" to each other in accordance with some statistical measure. As a consequence of this principle, distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of the objects among the clusters is a hard combinatorial problem, which precludes the use of traditional optimization techniques; genetic algorithms are a good alternative for solving it. We benchmark our method against the best theoretical performance, given by the Bayes classifier when the data are normally distributed, and against a multilayer perceptron network, which offers the best practical performance when the data are not normal. In general, a supervised classification method will outperform an unsupervised one, since the elements of the classes are known a priori. In what follows, we show that the effectiveness of our method is comparable to that of a supervised one; given the handicap of working without labeled data, this clearly attests to the quality of our method.
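The abstract describes the method only at a high level. The minimal Python sketch below illustrates the general idea under our own assumptions, not the authors' implementation: a genetic algorithm evolves candidate partitions, the fitness rewards Shannon's entropy of the cluster-membership distribution, and the prior "similarity" information enters as a soft penalty on the mean within-cluster distance. All function names, the penalty form, and parameter values here are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def entropy(assign, k):
    # Shannon entropy (in bits) of the cluster-membership distribution.
    p = np.bincount(assign, minlength=k) / assign.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def similarity_penalty(X, assign, k, tau):
    # Prior information: members of a cluster should be "similar".
    # In this sketch, similarity is the mean pairwise Euclidean distance
    # inside each cluster, penalized when it exceeds the threshold tau.
    cost = 0.0
    for c in range(k):
        members = X[assign == c]
        if len(members) > 1:
            d = np.linalg.norm(members[:, None] - members[None, :], axis=-1)
            cost += max(0.0, d.mean() - tau)
    return cost

def fitness(X, assign, k, tau, lam=10.0):
    # Maximize entropy subject to the similarity constraint (soft penalty).
    return entropy(assign, k) - lam * similarity_penalty(X, assign, k, tau)

def ga_cluster(X, k, tau, pop=40, gens=200, mut=0.05):
    # Each individual encodes a partition: position i holds the cluster of x_i.
    n = len(X)
    population = rng.integers(0, k, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(X, ind, k, tau) for ind in population])
        elite = population[np.argsort(scores)[::-1][: pop // 2]]  # keep top half
        # One-point crossover between randomly paired elite parents.
        pairs = elite[rng.integers(0, len(elite), size=(pop - len(elite), 2))]
        cuts = rng.integers(1, n, size=len(pairs))
        children = np.array([np.concatenate([a[:c], b[c:]])
                             for (a, b), c in zip(pairs, cuts)])
        # Random-reset mutation.
        flip = rng.random(children.shape) < mut
        children[flip] = rng.integers(0, k, size=int(flip.sum()))
        population = np.vstack([elite, children])
    scores = np.array([fitness(X, ind, k, tau) for ind in population])
    return population[int(np.argmax(scores))]

if __name__ == "__main__":
    # Toy demo: three Gaussian blobs in the plane.
    X = np.vstack([rng.normal(loc=m, scale=0.3, size=(30, 2))
                   for m in ((0, 0), (3, 0), (0, 3))])
    labels = ga_cluster(X, k=3, tau=1.0)
    print(np.bincount(labels, minlength=3))

The penalty weight lam trades off entropy against the similarity constraint; the paper's actual constraints and GA variant differ in detail (see reference [64] below on the convergence of canonical genetic algorithms).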
Pages: 151-180
Number of pages: 30
References
77 in total
[61] Robert, C.P. Monte Carlo Statistical Methods, Vol. 2; 1999.
[62] Rokach, L. Data Mining and Knowledge Discovery; 2005. DOI: 10.1007/0-387-25465-X_15.
[63] Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics 1987, 20, 53-65.
[64] Rudolph, G. Convergence analysis of canonical genetic algorithms. IEEE Transactions on Neural Networks 1994, 5(1), 96-101.
[65] Shampine, L.F. Fundamentals of Numerical Computing; 1997.
[66] Shannon, C.E. A mathematical theory of communication. Bell System Technical Journal 1948, 27(4), 623-656.
[67] Shelokar, P.S.; Jayaraman, V.K.; Kulkarni, B.D. An ant colony approach for clustering. Analytica Chimica Acta 2004, 509(2), 187-195.
[68] Sindhya, K.; Sinha, A.; Deb, K.; Miettinen, K. Local search based evolutionary multi-objective optimization algorithm for constrained and unconstrained problems. In 2009 IEEE Congress on Evolutionary Computation, 2009, pp. 2919+.
[69] Slonim, N.; Atwal, G.S.; Tkacik, G.; Bialek, W. Information-based clustering. Proceedings of the National Academy of Sciences of the United States of America 2005, 102(51), 18297-18302.
[70] Snyman, J.A. Practical Mathematical Optimization; 2005.