An expressive dissimilarity measure for relational clustering using neighbourhood trees

Cited by: 0
Authors
Sebastijan Dumančić
Hendrik Blockeel
Affiliations
[1] KU Leuven, Department of Computer Science
Source
Machine Learning | 2017, Vol. 106
Keywords
Relational learning; Clustering; Similarity of structured objects
DOI
Not available
Abstract
Clustering is an underspecified task: there are no universal criteria for what makes a good clustering. This is especially true for relational data, where similarity can be based on the features of individuals, the relationships between them, or a mix of both. Existing methods for relational clustering have strong and often implicit biases in this respect. In this paper, we introduce a novel dissimilarity measure for relational data. It is the first approach to incorporate a wide variety of types of similarity, including similarity of attributes, similarity of relational context, and proximity in a hypergraph. We experimentally evaluate the proposed dissimilarity measure on both clustering and classification tasks, using data sets of very different types. Considering the quality of the obtained clusterings, the experiments demonstrate that (a) using this dissimilarity in standard clustering methods consistently gives good results, whereas other measures work well only on data sets that match their bias; and (b) on most data sets, the novel dissimilarity outperforms even the best of the existing ones. On the classification tasks, the proposed method outperforms the competitors on the majority of data sets, often by a large margin. Moreover, we show that learning the appropriate bias in an unsupervised way is a very challenging task, and that the existing methods offer only a marginal gain compared to the proposed similarity method, and can even hurt performance. Finally, we show that the asymptotic complexity of the proposed dissimilarity measure is similar to that of the existing state-of-the-art approaches. The results confirm that the proposed dissimilarity measure is indeed versatile enough to capture the relevant information, regardless of whether it comes from the attributes of vertices, their proximity, or their connectedness, even without parameter tuning.
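The abstract describes a measure that combines several kinds of similarity: attribute similarity, similarity of relational context, and proximity in a hypergraph. As a rough illustration of the general idea of such a combined measure — not the paper's actual neighbourhood-tree construction — one could take a weighted mix of component dissimilarities, as in this sketch (all function names, weights, and data layouts here are hypothetical):

```python
# Hypothetical sketch: a relational dissimilarity as a weighted combination
# of per-aspect dissimilarities. The components and weights are illustrative
# stand-ins, not the definitions from the paper.

def jaccard_dissimilarity(a: set, b: set) -> float:
    """1 - |A ∩ B| / |A ∪ B|; defined as 0.0 when both sets are empty."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def relational_dissimilarity(v, w, attrs, neighbours,
                             w_attr=0.5, w_nbr=0.5):
    """Weighted sum of an attribute-based and a neighbourhood-based
    dissimilarity between vertices v and w.

    attrs      -- maps each vertex to its set of attribute values
    neighbours -- maps each vertex to its set of neighbouring vertices
    """
    d_attr = jaccard_dissimilarity(attrs[v], attrs[w])
    d_nbr = jaccard_dissimilarity(neighbours[v], neighbours[w])
    return w_attr * d_attr + w_nbr * d_nbr

# Tiny usage example: two vertices sharing one attribute and all neighbours.
attrs = {"a": {"red"}, "b": {"red", "big"}}
neighbours = {"a": {"c"}, "b": {"c"}}
d = relational_dissimilarity("a", "b", attrs, neighbours)
```

Changing the weights shifts the bias of the measure between attribute information and relational structure, which mirrors the abstract's point that a single fixed bias fits only data sets that happen to match it.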
Pages: 1523–1545
Number of pages: 22