Generalized Nonconvex Nonsmooth Low-Rank Matrix Recovery Framework With Feasible Algorithm Designs and Convergence Analysis

Cited by: 28
Authors
Zhang, Hengmin [1 ,2 ]
Qian, Feng [1 ,3 ]
Shi, Peng [4 ,5 ]
Du, Wenli [1 ,3 ]
Tang, Yang [1 ,3 ]
Qian, Jianjun [6 ,7 ]
Gong, Chen [6 ,7 ]
Yang, Jian [6 ,7 ]
Affiliations
[1] East China Univ Sci & Technol, Sch Informat Sci & Engn, Key Lab Adv Smart Mfg Energy Chem Proc, Minist Educ, Shanghai 200237, Peoples R China
[2] Univ Macau, Dept Comp & Informat Sci, Macau, Peoples R China
[3] Tongji Univ, Shanghai Inst Intelligent Sci & Technol, Shanghai 200092, Peoples R China
[4] Univ Adelaide, Sch Elect & Elect Engn, Adelaide, SA 5005, Australia
[5] Victoria Univ, Coll Engn & Sci, Melbourne, Vic 8001, Australia
[6] Nanjing Univ Sci & Technol, PCA Lab, Nanjing 210094, Peoples R China
[7] Nanjing Univ Sci & Technol, Key Lab Intelligent Percept & Syst High Dimens In, Minist Educ, Nanjing 210094, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
Convergence; Optimization; Minimization; Linear programming; Matrix decomposition; Convex functions; Learning systems; Algorithm designs; convergence analysis; low-rank matrix recovery; multiple variables; nonconvex alternating direction method of multipliers (ADMM); ALTERNATING DIRECTION METHOD; ROBUST SUBSPACE SEGMENTATION; VARIABLE SELECTION; NORM; MULTIPLIERS;
DOI
10.1109/TNNLS.2022.3183970
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Decomposing a data matrix into a low-rank matrix plus additive matrices is a commonly used strategy in pattern recognition and machine learning. This article mainly studies the alternating direction method of multipliers (ADMM) with two dual variables, which is used to optimize generalized nonconvex nonsmooth low-rank matrix recovery problems. Furthermore, a minimization framework with a feasible optimization procedure is designed along with the theoretical analysis, where the variable sequences generated by the proposed ADMM are proved to be bounded. Most importantly, it follows from the Bolzano-Weierstrass theorem that there must exist a subsequence converging to a critical point, which satisfies the Karush-Kuhn-Tucker (KKT) conditions. Meanwhile, we further ensure the local and global convergence properties of the generated sequence by constructing a potential objective function. In particular, the detailed convergence analysis is regarded as one of the core contributions, besides the algorithm designs and the model generality. Finally, numerical simulations and real-world applications are both provided to verify the consistency of the theoretical results, and we also validate the superiority in performance over several closely related solvers on the tasks of image inpainting and subspace clustering.
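To make the abstract's setting concrete, the following is a minimal sketch of the low-rank-plus-additive decomposition solved by a two-block ADMM. It is not the paper's algorithm: it uses the classical convex special case (nuclear norm plus l1 penalty, i.e., robust PCA) with a single dual variable, whereas the article treats generalized nonconvex nonsmooth penalties with two dual variables. All function names, parameter choices, and the penalty-growth schedule are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def soft(X, tau):
    """Soft thresholding: proximal operator of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_admm(M, lam=None, n_iter=100, rho=1.5):
    """ADMM sketch for  min ||L||_* + lam * ||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))      # standard RPCA weight
    mu = 1.25 / np.linalg.norm(M, 2)        # initial penalty parameter
    mu_max = mu * 1e7
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                    # dual variable for L + S = M
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)   # low-rank block update
        S = soft(M - L + Y / mu, lam / mu)  # additive (sparse) block update
        Y = Y + mu * (M - L - S)            # dual ascent on the constraint
        mu = min(rho * mu, mu_max)          # grow penalty (inexact-ALM style)
    return L, S
```

Each pass performs the two primal block updates in closed form via proximal operators and then a dual ascent step, which is the iteration structure whose nonconvex generalization the article analyzes for boundedness and subsequence convergence to KKT points.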
Pages: 5342-5353
Page count: 12
Related References
59 records in total
[1] Attouch H., Bolte J., Svaiter B. F. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Mathematical Programming, 2013, 137(1-2): 91-129.
[2] Bartle R. G. Introduction to Real Analysis. 2000.
[3] Beck A., Teboulle M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2009, 2(1): 183-202.
[4] Bolte J., Sabach S., Teboulle M. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Mathematical Programming, 2014, 146(1-2): 459-494.
[5] Border K., 2001, SUPERGRADIENT CONCAV.
[6] Bot R. I., Csetnek E. R. ADMM for monotone operators: convergence analysis and rates. Advances in Computational Mathematics, 2019, 45(1): 327-359.
[7] Bouwmans T., Sobral A., Javed S., Jung S. K., Zahzah E.-H. Decomposition into low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a large-scale dataset. Computer Science Review, 2017, 23: 1-71.
[8] Boyd S. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. 2011. DOI: 10.1561/2200000016.
[9] Candes E. J., Li X., Ma Y., Wright J. Robust principal component analysis? Journal of the ACM, 2011, 58(3).
[10] Candes E. J., Tao T. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 2010, 56(5): 2053-2080.