Metric learning for multi-instance classification with collapsed bags

Times Cited: 0
Authors
Li, Dewei [1 ,2 ]
Xu, Dongkuan [1 ,2 ]
Tang, Jingjing [1 ,2 ]
Tian, Yingjie [2 ,3 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
[2] Chinese Acad Sci, Res Ctr Fictitious Econ & Data Sci, Beijing 100190, Peoples R China
[3] Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management, Beijing 100190, Peoples R China
Source
2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2017
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Metric learning; Multi-instance; Clustering; Kernel;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As a popular problem in machine learning, the multi-instance task has been studied with many classical methods such as kNN and SVM. For kNN classification, performance on traditional tasks can be boosted by metric learning, which seeks a data-dependent metric that pulls similar examples closer and separates dissimilar examples by a margin. Defining a distance between bags in the multi-instance problem is already a challenge, let alone learning an appropriate metric for it. In this paper, we propose a new approach for multi-instance classification with the idea of metric learning embedded. A new kind of distance is used to measure the similarity between bags. To weaken redundant information in bags and reduce computational complexity, the k-means method is applied to obtain collapsed bags by replacing each instance with its corresponding cluster centroid. The aim of metric learning is to expand inter-class bag distances and shrink intra-class bag distances, leading to an optimization problem that maximizes relative distance. A kernel function can be introduced into the model to extract nonlinear information from the inputs. Gradient descent is utilized to solve the problem effectively. Numerical experiments on both artificial and benchmark datasets demonstrate that the method achieves performance competitive with kNN and a state-of-the-art multi-instance classification method.
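The collapsed-bag construction and a bag-level distance of the kind described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the plain k-means loop, the choice of the minimum pairwise Mahalanobis-style distance as the bag distance, and the function names `collapse_bag` and `bag_distance` are all assumptions made for the sketch.

```python
import numpy as np

def collapse_bag(bag, k, n_iter=50, seed=0):
    """Collapse a bag (n_instances x d array) into k cluster centroids
    via plain k-means, mimicking the paper's replace-instances-with-
    centroids step (simplified; the paper's clustering details may differ)."""
    rng = np.random.default_rng(seed)
    k = min(k, len(bag))
    centroids = bag[rng.choice(len(bag), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each instance to its nearest centroid.
        labels = np.argmin(((bag[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        # Move each centroid to the mean of its assigned instances.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = bag[labels == j].mean(axis=0)
    return centroids

def bag_distance(A, B, M):
    """One plausible bag distance: the minimum Mahalanobis-style distance
    (a - b)^T M (a - b) over all centroid pairs (a in A, b in B), where
    M is a positive semidefinite matrix learned by metric learning."""
    diffs = A[:, None, :] - B[None, :, :]              # (|A|, |B|, d)
    d2 = np.einsum('ijk,kl,ijl->ij', diffs, M, diffs)  # squared distances
    return d2.min()
```

With the identity matrix as `M` this reduces to an ordinary squared Euclidean bag distance; metric learning would then adjust `M` (e.g. by gradient descent) so that inter-class bag distances grow and intra-class bag distances shrink.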
Pages: 372-379
Page count: 8