L1-Norm Low-Rank Matrix Factorization by Variational Bayesian Method

Cited by: 87
Authors
Zhao, Qian [1 ,2 ]
Meng, Deyu [1 ]
Xu, Zongben [1 ]
Zuo, Wangmeng [3 ]
Yan, Yan [4 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Math & Stat, Inst Informat & Syst Sci, Xian 710049, Peoples R China
[2] Beijing Ctr Math & Informat Interdisciplinary Sci, Beijing 100048, Peoples R China
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Peoples R China
[4] Univ Trento, Dept Informat Engn & Comp Sci, I-38123 Trento, Italy
Funding
National Natural Science Foundation of China
Keywords
Background subtraction; face reconstruction; low-rank matrix factorization (LRMF); outlier detection; robustness; variational inference; PRINCIPAL COMPONENT ANALYSIS; FACE RECOGNITION; ROBUST; ALGORITHM;
DOI
10.1109/TNNLS.2014.2387376
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The L1-norm low-rank matrix factorization (LRMF) has attracted much attention due to its wide applications in computer vision and pattern recognition. In this paper, we construct a new hierarchical Bayesian generative model for the L1-norm LRMF problem and design a mean-field variational method that automatically infers all model parameters through closed-form update equations. The variational Bayesian inference in the proposed method can be understood as solving a weighted LRMF problem, with element-wise weights reflecting the significance of each matrix entry and with L2-regularization penalties on the parameters. Throughout the inference process, the weights on the matrix elements are adaptively fitted so that the adverse influence of noise and outliers embedded in the data is largely suppressed, and the parameters are appropriately regularized so that the generalization capability of the model is statistically guaranteed. The robustness and efficiency of the proposed method are substantiated by a series of synthetic- and real-data experiments, in comparison with state-of-the-art L1-norm LRMF methods. In particular, owing to the intrinsic generalization capability of the Bayesian methodology, our method consistently predicts the unobserved ground-truth data better than existing methods.
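The weighted-LRMF interpretation in the abstract can be illustrated with a small iteratively reweighted least-squares (IRLS) sketch: alternately solving element-wise weighted L2 factorizations, with weights shrunk on large residuals, approximates the L1-norm LRMF objective. This is a generic illustration of the idea, not the authors' variational Bayesian algorithm; the function name `irls_l1_lrmf` and the reweighting constant `delta` are illustrative assumptions.

```python
import numpy as np

def irls_l1_lrmf(X, rank, n_iter=50, delta=1e-3, seed=0):
    """IRLS sketch of L1-norm LRMF: min_{U,V} ||X - U V^T||_1.

    Each sweep solves a weighted least-squares problem per row of U
    and per row of V, then down-weights entries with large residuals
    (the outliers), mimicking the adaptive weighting described above.
    This is an illustrative approximation, not the paper's VB method.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    W = np.ones_like(X)  # element-wise weights, all equal at start
    for _ in range(n_iter):
        # Solve weighted normal equations for each row of U ...
        for i in range(m):
            Wi = np.diag(W[i])
            U[i] = np.linalg.solve(V.T @ Wi @ V + delta * np.eye(rank),
                                   V.T @ Wi @ X[i])
        # ... and for each row of V.
        for j in range(n):
            Wj = np.diag(W[:, j])
            V[j] = np.linalg.solve(U.T @ Wj @ U + delta * np.eye(rank),
                                   U.T @ Wj @ X[:, j])
        # Reweight: entries with large residuals (outliers) get small weights.
        W = 1.0 / (np.abs(X - U @ V.T) + delta)
    return U, V
```

On a low-rank matrix corrupted by a few gross outliers, the reweighting step drives the fit toward the clean entries, which is the same robustness mechanism the abstract attributes to the adaptively fitted weights.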
Pages: 825-839
Number of pages: 15