Reverse nearest neighbors Bhattacharyya bound linear discriminant analysis for multimodal classification

Cited by: 27
Authors
Guo, Yan-Ru [1 ]
Bai, Yan-Qin [1 ]
Li, Chun-Na [2 ]
Shao, Yuan-Hai [2 ]
Ye, Ya-Fen [3 ]
Jiang, Cheng-zi [4 ]
Affiliations
[1] Shanghai Univ, Dept Math, Shanghai 200444, Peoples R China
[2] Hainan Univ, Management Sch, Haikou 570228, Hainan, Peoples R China
[3] Zhejiang Univ Technol, Sch Econ, Hangzhou 310023, Peoples R China
[4] Griffith Univ, Griffith Business Sch, Parklands Dr, Southport, Qld 4215, Australia
Funding
National Natural Science Foundation of China;
Keywords
Bhattacharyya error bound; Linear discriminant analysis; Multimodal data; Reverse nearest neighbor; L1-NORM; LDA;
DOI
10.1016/j.engappai.2020.104033
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Recently, an effective improvement of linear discriminant analysis (LDA), called L2-norm linear discriminant analysis via Bhattacharyya error bound estimation (L2BLDA), was proposed for its adaptability and nonsingularity. However, L2BLDA assumes that all samples from the same class are independently and identically distributed (i.i.d.). In real-world data, this assumption sometimes fails. To address this problem, this paper embeds the reverse nearest neighbor (RNN) technique into L2BLDA and proposes a novel linear discriminant analysis named RNNL2BLDA. Rather than using whole classes to construct the within-class and between-class scatters, RNNL2BLDA divides each class into subclasses using the RNN technique and then defines the scatter matrices over these classes, each of which may contain several subclasses. This frees RNNL2BLDA from the i.i.d. assumption of L2BLDA and makes it applicable to multimodal data that follow a mixture of Gaussian distributions. In addition, by setting a threshold in the RNN step, RNNL2BLDA achieves robustness. RNNL2BLDA can be solved through a simple standard generalized eigenvalue problem. Experimental results on an artificial data set, several benchmark data sets, and two human face databases demonstrate the effectiveness of the proposed method.
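The abstract outlines the computational recipe: split each class into subclasses via reverse nearest neighbors, build the scatter matrices over the subclasses, and obtain the projection from a generalized eigenvalue problem. The plain-Python sketch below is only a rough illustration of that idea, not the paper's exact RNNL2BLDA; the rnn_counts helper, the small ridge regularizer, and the hand-given subclass split in the toy usage are assumptions made solely to keep the example runnable.

# Illustrative sketch (not the authors' exact RNNL2BLDA): (a) reverse-nearest-neighbor
# counts, and (b) an LDA-style projection whose scatters are built from subclasses,
# found from the generalized eigenvalue problem  S_b w = lambda * S_w w.
import numpy as np
from scipy.linalg import eigh

def rnn_counts(X):
    """For each sample, count how many other samples have it as their nearest neighbor."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)                      # index of each sample's nearest neighbor
    return np.bincount(nn, minlength=len(X))   # reverse-nearest-neighbor counts

def subclass_lda(X, subclass_labels, n_components=1):
    """LDA projection with within/between scatters built over subclasses.

    subclass_labels assigns every sample to a subclass of its class; here the
    split is assumed to be given (e.g. produced by an RNN-based partition)."""
    mean_all = X.mean(axis=0)
    dim = X.shape[1]
    Sw = np.zeros((dim, dim))
    Sb = np.zeros((dim, dim))
    for s in np.unique(subclass_labels):
        Xs = X[subclass_labels == s]
        mu = Xs.mean(axis=0)
        Sw += (Xs - mu).T @ (Xs - mu)          # within-subclass scatter
        diff = (mu - mean_all)[:, None]
        Sb += len(Xs) * (diff @ diff.T)        # between-subclass scatter
    # generalized eigenproblem S_b w = lambda S_w w; a small ridge keeps S_w definite
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(dim))
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_components]]

# Toy usage: three Gaussian blobs, with the first two forming one bimodal class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-4, 1, (30, 2)),
               rng.normal(0, 1, (30, 2)),
               rng.normal(4, 1, (30, 2))])
subclasses = np.repeat([0, 1, 2], 30)          # hypothetical RNN-based split
print("RNN counts of first 5 samples:", rnn_counts(X)[:5])
print("projection direction:", subclass_lda(X, subclasses).ravel())

Treating each Gaussian blob as its own subclass is what lets a mixture-distributed class contribute sensible scatter terms, which is the intuition behind replacing the per-class scatters of L2BLDA with per-subclass ones.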
Pages: 14
Related Papers
55 references in total
[1] Ali, T. M. Feroz; Chaudhuri, Subhasis. Maximum Margin Metric Learning over Discriminative Nullspace for Person Re-identification. Computer Vision - ECCV 2018, Pt XIII, 2018, 11217: 123-141.
[2] Barbieri, F. Trans. Int. Soc. Music Inf. Retr., 2018, 1: 21. DOI: 10.5334/tismir.10.
[3] Camarinha-Matos, L. M.; Lopes, L. S.; Barata, J. Integration and learning in supervision of flexible assembly systems. IEEE Transactions on Robotics and Automation, 1996, 12(2): 202-219.
[4] Cao, Guanqun; Iosifidis, Alexandros; Gabbouj, Moncef. Multi-View Nonparametric Discriminant Analysis for Image Retrieval and Recognition. IEEE Signal Processing Letters, 2017, 24(10): 1537-1541.
[5] Chen, Xiaobo; Yang, Jian; Jin, Zhong. An Improved Linear Discriminant Analysis with L1-norm for Robust Feature Extraction. 2014 22nd International Conference on Pattern Recognition (ICPR), 2014: 1585-1590.
[6] Deng, N. Y. Support Vector Machines. 2012. DOI: 10.1201/b14297.
[7] Du, Haishun; Zhao, Zhaolong; Wang, Sheng; Hu, Qingpu. Two-dimensional discriminant analysis based on Schatten p-norm for image feature extraction. Journal of Visual Communication and Image Representation, 2017, 45: 87-94.
[8] Fukunaga, K. Introduction to Statistical Pattern Recognition. 2013.
[9] Gao, Yunjun; Liu, Qing; Miao, Xiaoye; Yang, Jiacheng. Reverse k-nearest neighbor search in the presence of obstacles. Information Sciences, 2016, 330: 274-292.
[10] Gkalelis, Nikolaos; Mezaris, Vasileios; Kompatsiaris, Ioannis. Mixture Subclass Discriminant Analysis. IEEE Signal Processing Letters, 2011, 18(5): 319-322.