Reverse nearest neighbors Bhattacharyya bound linear discriminant analysis for multimodal classification

Cited by: 27
Authors
Guo, Yan-Ru [1 ]
Bai, Yan-Qin [1 ]
Li, Chun-Na [2 ]
Shao, Yuan-Hai [2 ]
Ye, Ya-Fen [3 ]
Jiang, Cheng-zi [4 ]
Affiliations
[1] Shanghai Univ, Dept Math, Shanghai 200444, Peoples R China
[2] Hainan Univ, Management Sch, Haikou 570228, Hainan, Peoples R China
[3] Zhejiang Univ Technol, Sch Econ, Hangzhou 310023, Peoples R China
[4] Griffith Univ, Griffith Business Sch, Parklands Dr, Southport, Qld 4215, Australia
Funding
National Natural Science Foundation of China;
Keywords
Bhattacharyya error bound; Linear discriminant analysis; Multimodal data; Reverse nearest neighbor; L1-norm; LDA
DOI
10.1016/j.engappai.2020.104033
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Recently, an effective improvement of linear discriminant analysis (LDA), called L2-norm linear discriminant analysis via Bhattacharyya error bound estimation (L2BLDA), was proposed for its adaptability and nonsingularity. However, L2BLDA assumes that all samples from the same class are independent and identically distributed (i.i.d.). In real-world data, this assumption sometimes fails. To address this problem, this paper embeds the reverse nearest neighbor (RNN) technique into L2BLDA and proposes a novel linear discriminant analysis named RNNL2BLDA. Rather than using whole classes to construct the within-class and between-class scatters, RNNL2BLDA divides each class into subclasses via the RNN technique, and then defines the scatter matrices on these classes, each of which may contain several subclasses. This frees RNNL2BLDA from the i.i.d. assumption of L2BLDA and makes it applicable to multimodal data, which follow a mixture of Gaussian distributions. In addition, by setting a threshold in the RNN step, RNNL2BLDA achieves robustness. RNNL2BLDA can be solved through a simple standard generalized eigenvalue problem. Experimental results on an artificial data set, several benchmark data sets, and two human face databases demonstrate the effectiveness of the proposed method.
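The two building blocks named in the abstract can be sketched in a few lines: computing reverse nearest neighbors (the samples that count a given point among their own k nearest neighbors) and extracting a discriminant direction from the standard generalized eigenvalue problem on between- and within-class scatter matrices. This is a minimal illustrative sketch, not the authors' RNNL2BLDA implementation; the choice of Euclidean distance, the parameter k, and the small regularizer on the within-class scatter are assumptions for demonstration.

```python
import numpy as np

def reverse_nearest_neighbors(X, k=2):
    """For each sample i, return the set of samples that have i
    among their own k nearest neighbors (Euclidean distance)."""
    n = len(X)
    # pairwise distances; a sample is never its own neighbor
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    knn = np.argsort(d, axis=1)[:, :k]  # k nearest neighbors of each sample
    rnn = [set() for _ in range(n)]
    for i in range(n):
        for j in knn[i]:
            rnn[j].add(i)  # j is a neighbor of i, so i is an RNN of j
    return rnn

def lda_direction(X, y):
    """One discriminant direction from the generalized eigenvalue
    problem S_b w = lambda S_w w (plain LDA scatters shown here)."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)           # within-class scatter
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)          # between-class scatter
    # small ridge on S_w for numerical stability (an assumption here)
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / np.linalg.norm(w)
```

In RNNL2BLDA the scatters would be built over RNN-derived subclasses rather than whole classes, but the eigenvalue machinery is the same as above.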
Pages: 14