EDMD: An Entropy based Dissimilarity measure to cluster Mixed-categorical Data

Cited by: 4
Authors
Kar, Amit Kumar [1]
Akhter, Mohammad Maksood [1]
Mishra, Amaresh Chandra [2]
Mohanty, Sraban Kumar [1]
Affiliations
[1] PDPM Indian Inst Informat Technol Design & Mfg, Comp Sci & Engn, Jabalpur 482005, India
[2] PDPM Indian Inst Informat Technol Design & Mfg, Nat Sci, Jabalpur 482005, India
Keywords
Proximity measure; Mixed categorical data; Ordinal attributes; Nominal attributes; Entropy; Dissimilarity measure; ALGORITHM; ATTRIBUTE; DISTANCE
DOI
10.1016/j.patcog.2024.110674
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The effectiveness of clustering techniques is significantly influenced by the proximity measure, irrespective of the type of data, and categorical data is no exception. Most existing proximity measures for categorical data assume that all attributes contribute equally to the distance computation, which is rarely true. Frequency- or probability-based approaches are, in principle, better equipped to counter this issue by weighting attributes according to intra-attribute statistical information. However, owing to the qualitative nature of categorical features, intra-attribute disorder is not captured effectively by the widely used continuum form of entropy, known as Shannon or information entropy. If the categorical data contains ordinal features, the problem is compounded because existing measures treat all attributes as nominal. To address these issues, we propose a new Entropy-based Dissimilarity measure for Mixed categorical Data (EDMD) composed of both nominal and ordinal attributes. EDMD treats nominal and ordinal attributes separately to capture the intrinsic information carried by the two attribute types. We apply Boltzmann's definition of entropy, which is based on counting microstates, to exploit the intra-attribute statistical information of nominal attributes, while preserving the order relationships among ordinal values in the distance formulation. Additionally, the statistical significance of different attributes toward the dissimilarity computation is accounted for through attribute weighting. The proposed measure is free of user-defined or domain-specific parameters and makes no prior assumption about the distribution of the data. Experimental results demonstrate the efficacy of EDMD in terms of cluster quality, accuracy, cluster discrimination ability, and execution time on mixed categorical data sets with different characteristics.
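The abstract's two core ingredients — Boltzmann (microstate-counting) entropy for nominal attributes and order-preserving distances for ordinal ones — can be illustrated with a minimal sketch. The helper names, the entropy-based mismatch weighting, and the normalized rank distance below are illustrative assumptions, not the authors' exact EDMD formulation from the paper:

```python
import math
from collections import Counter

def boltzmann_entropy(column):
    """Boltzmann-style entropy of a nominal attribute: S = ln W, where
    W = N! / (n_1! * ... * n_k!) counts the microstates consistent with
    the observed category frequencies n_1, ..., n_k (computed via lgamma
    to avoid factorial overflow)."""
    counts = Counter(column)
    n = len(column)
    return math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts.values())

def nominal_dissimilarity(a, b, column):
    """Hypothetical weighting: a mismatch on a nominal attribute costs the
    attribute's Boltzmann entropy, so disordered attributes count more."""
    return 0.0 if a == b else boltzmann_entropy(column)

def ordinal_dissimilarity(a, b, levels):
    """Order-preserving distance for an ordinal attribute: normalized
    rank difference over the ordered level list."""
    ranks = {v: i for i, v in enumerate(levels)}
    return abs(ranks[a] - ranks[b]) / (len(levels) - 1)

# Toy usage: one nominal column and one ordered level set.
col = ["a", "a", "b", "b"]
print(round(boltzmann_entropy(col), 4))                      # ln(4!/(2!*2!)) = ln 6
print(ordinal_dissimilarity("low", "high", ["low", "mid", "high"]))  # 1.0
```

Per-object dissimilarity would then sum these per-attribute terms over all nominal and ordinal attributes; the paper's actual measure additionally handles attribute weighting without user-defined parameters.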
Pages: 15