A kernel semi-supervised distance metric learning with relative distance: Integration with a MOO approach

Cited by: 10
Authors
Sanodiya, Rakesh Kumar [1 ]
Saha, Sriparna [1 ]
Mathew, Jimson [1 ]
Affiliations
[1] Indian Inst Technol Patna, Dept Comp Sci & Engn, Patna 801103, Bihar, India
Keywords
Semi-supervised classification; Multi-objective optimization; Bregman projection; Clustering; Metric learning; FEATURE-SELECTION; GENETIC ALGORITHM
DOI
10.1016/j.eswa.2018.12.051
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Metric learning, which aims to determine a distance function that accurately measures the similarity and dissimilarity between data points, is one of the most popular ways to enhance the performance of machine learning methods such as K-means clustering and the K-nearest-neighbor classifier. These algorithms may perform poorly because the standard Euclidean distance ignores statistical regularities that could be estimated from a large training set of labeled examples; in many real-world applications, the Euclidean distance cannot capture the intrinsic similarity and dissimilarity between data points. Unlike existing metric learning algorithms, which require large amounts of labeled data in the form of must-link (ML) and cannot-link (CL) constraints as side information while the granularity of the true clustering remains unknown, the proposed approach uses a small amount of labeled data in the form of relative-distance constraints: equality constraints, C-eq, and inequality constraints, C-neq. To satisfy these constraints, the initial Euclidean distance matrix is projected, via Bregman projection, onto the convex set induced by the constraints. Since Bregman projections are not orthogonal, satisfying the current constraint may violate previously satisfied ones, so a proper subset of constraints must be selected to learn a better distance function. A multi-objective optimization (MOO) framework is used to select a good subset of constraints that helps obtain a proper labeling of the data set: the selected subset is used to adjust the initial kernel matrix, and K-means clustering is applied to the adjusted kernel matrix to label the data. To evaluate the quality of the obtained labeling, different external and internal cluster validity indices are deployed. The values of these indices are simultaneously optimized using the search capability of MOO so as to select the appropriate subset of constraints. The proposed approach is evaluated on the UCI Human Activity Recognition Using Smartphones Dataset v1.0 along with nine other popular data sets. Results show that it outperforms state-of-the-art semi-supervised metric learning algorithms with respect to different internal and external cluster validity indices. (C) 2018 Published by Elsevier Ltd.
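The relative-distance constraints described in the abstract can be illustrated with a minimal sketch: distances are induced by a kernel matrix via d²(i, j) = K[i,i] + K[j,j] − 2·K[i,j], and a candidate kernel can be scored by how many equality (C-eq) and inequality (C-neq) constraints it satisfies, which is the kind of quantity a MOO search over constraint subsets would evaluate. The function names, the tolerance, and the linear-kernel setup below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pairwise_sq_dist(K):
    # Squared feature-space distances induced by a kernel matrix K:
    # d^2(i, j) = K[i, i] + K[j, j] - 2 * K[i, j]
    diag = np.diag(K)
    return diag[:, None] + diag[None, :] - 2.0 * K

def count_satisfied(K, eq_constraints, neq_constraints, tol=1e-6):
    """Count how many relative-distance constraints kernel K satisfies.

    eq_constraints:  list of ((a, b), (c, d)) meaning d(a, b) == d(c, d)  (C-eq)
    neq_constraints: list of ((a, b), (c, d)) meaning d(a, b) <  d(c, d)  (C-neq)
    (Names and tolerance are illustrative, not taken from the paper.)
    """
    D = pairwise_sq_dist(K)
    n_eq = sum(abs(D[a, b] - D[c, d]) <= tol
               for (a, b), (c, d) in eq_constraints)
    n_neq = sum(D[a, b] < D[c, d] - tol
                for (a, b), (c, d) in neq_constraints)
    return n_eq, n_neq

# Toy example: three collinear points under a linear kernel K = X X^T.
X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]])
K = X @ X.T
print(count_satisfied(K,
                      eq_constraints=[((0, 1), (0, 1))],
                      neq_constraints=[((0, 1), (1, 2)), ((1, 2), (0, 1))]))
```

A MOO search over constraint subsets would use such satisfaction counts, together with cluster validity indices computed after clustering the adjusted kernel, as competing objectives.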
Pages: 233-248
Page count: 16