Face recognition by sparse discriminant analysis via joint L2,1-norm minimization

Cited by: 104
Authors
Shi, Xiaoshuang [1 ]
Yang, Yujiu [1 ]
Guo, Zhenhua [1 ]
Lai, Zhihui [2 ,3 ]
Affiliations
[1] Tsinghua Univ, Grad Sch Shenzhen, Shenzhen Key Lab Broadband Network & Multimedia, Shenzhen 518055, Peoples R China
[2] Harbin Inst Technol, Shenzhen Grad Sch, Biocomp Res Ctr, Shenzhen 518052, Peoples R China
[3] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518055, Guangdong, Peoples R China
Funding
China Postdoctoral Science Foundation
Keywords
L2,1-norm; Fisher linear discriminant analysis; Sparse discriminant analysis; Regression; Selection
DOI
10.1016/j.patcog.2014.01.007
Chinese Library Classification (CLC) number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Recently, joint feature selection and subspace learning, which performs feature selection and subspace learning simultaneously, has been proposed and shows encouraging performance on face recognition. A framework based on an L2,1-norm penalty term has also been presented in the literature, but it does not cover some important algorithms, such as Fisher Linear Discriminant Analysis (FLDA) and Sparse Discriminant Analysis (SDA). Therefore, in this paper, we add an L2,1-norm penalty term to FLDA and propose a feasible solution by transforming its nonlinear model into a linear regression formulation. In addition, we modify the optimization model of SDA by replacing the elastic net with an L2,1-norm penalty term and present the corresponding optimization method. Experiments on three standard face databases show that FLDA and SDA with the L2,1-norm penalty term significantly improve recognition performance and achieve encouraging results at low computational cost and with low-dimensional features. (C) 2014 Elsevier Ltd. All rights reserved.
Pages: 2447-2453
Number of pages: 7
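Illustrative code sketch
The abstract reformulates FLDA with an L2,1-norm penalty as a linear regression-type problem. The following minimal Python sketch shows only the generic building block, an L2,1-norm regularized least-squares problem solved by iteratively reweighted least squares; it is not the authors' exact FLDA/SDA algorithm, and the function name l21_regression, the parameters lam, n_iter and eps, and the one-hot target matrix Y are illustrative assumptions.

import numpy as np

def l21_regression(X, Y, lam=1.0, n_iter=50, eps=1e-8):
    """Sketch: minimize ||Y - X W||_F^2 + lam * ||W||_{2,1}
    via iteratively reweighted least squares (IRLS).

    X : (n_samples, n_features) data matrix
    Y : (n_samples, n_classes) target/indicator matrix
    Returns W : (n_features, n_classes) row-sparse projection matrix.
    """
    d = X.shape[1]
    D = np.eye(d)                      # reweighting matrix, D_ii ~ 1 / (2 * ||w_i||_2)
    XtX, XtY = X.T @ X, X.T @ Y
    for _ in range(n_iter):
        # Closed-form update of the reweighted ridge-type subproblem:
        # W = (X^T X + lam * D)^{-1} X^T Y
        W = np.linalg.solve(XtX + lam * D, XtY)
        row_norms = np.linalg.norm(W, axis=1) + eps   # guard against division by zero
        D = np.diag(1.0 / (2.0 * row_norms))
    return W

# Illustrative usage with random data (hypothetical shapes).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))
Y = np.eye(3)[rng.integers(0, 3, size=100)]           # one-hot class indicators
W = l21_regression(X, Y, lam=1.0)
selected = np.argsort(-np.linalg.norm(W, axis=1))[:10]  # rows with largest norm = selected features
Z = X @ W                                             # low-dimensional embedding for a classifier

Rows of W with large l2-norm indicate selected features, and X @ W yields a low-dimensional subspace that can be passed to a simple classifier such as nearest neighbour, which is the typical use of joint feature selection and subspace learning.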