Learning a more compact representation for low-rank tensor completion

Times Cited: 0
Authors
Li, Xi-Zhuo [1 ,2 ,3 ]
Jiang, Tai-Xiang [1 ,2 ,3 ]
Yang, Liqiao [1 ,2 ,3 ]
Liu, Guisong [1 ,2 ,3 ]
Affiliations
[1] Southwestern Univ Finance & Econ, Sch Comp & Artificial Intelligence, Chengdu, Sichuan, Peoples R China
[2] Kash Inst Elect & Informat Ind, Kashi, Peoples R China
[3] Southwestern Univ Finance & Econ, Engn Res Ctr Intelligent Finance, Minist Educ, Chengdu, Sichuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Tensor completion; Tensor singular value decomposition; Nonlinear transform; Convolution; Multi-dimensional image; REMOTE-SENSING IMAGES; MODEL; FACTORIZATION; MATRIX;
DOI
10.1016/j.neucom.2024.129036
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transform-based tensor nuclear norm (TNN) methods have gained considerable attention for their effectiveness in addressing tensor recovery challenges. The integration of deep neural networks as nonlinear transforms has been shown to significantly enhance their performance. Minimizing the transform-based TNN is equivalent to minimizing the ℓ1 norm of the singular values in the transformed domain, which can be interpreted as finding a sparse representation with respect to the bases supported by the singular vectors. This work aims to advance deep transform-based TNN methods by identifying a more compact representation through learnable bases, ultimately improving recovery accuracy. We specifically employ convolutional kernels as these learnable bases, demonstrating their capability to generate a more compact representation, i.e., sparser coefficients of real-world tensor data, than singular vectors. Our proposed model consists of two key components: a transform component, implemented through fully connected neural networks (FCNs), and a convolutional component that replaces traditional singular matrices. This model is then optimized with the Adam algorithm directly on the incomplete tensor in a zero-shot manner, meaning that all learnable parameters within the FCNs and convolution kernels are inferred solely from the observed data. Experimental results indicate that our method, with this straightforward yet effective modification, outperforms state-of-the-art approaches on video and multispectral image recovery tasks.
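The abstract's two components can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the sizes, weights, and the linear-plus-ReLU stand-in for the FCN transform are all hypothetical, the "convolution" is a naive unflipped 2-D correlation, and the final line only assembles the zero-shot objective that Adam would minimize over the learnable parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: an h x w x c tensor, r transformed frontal
# slices, K learnable kernels of size s x s per slice.
h, w, c, r, K, s = 16, 16, 3, 4, 2, 3

def fcn_transform(X, W1, W2):
    """Stand-in for the paper's FCN transform: applied independently to
    each mode-3 tube fiber X[i, j, :], mapping c channels to r channels."""
    return np.maximum(X @ W1, 0.0) @ W2   # one ReLU hidden layer -> (h, w, r)

def conv2d_same(A, k):
    """Naive 'same'-size 2-D cross-correlation with zero padding
    (kernel unflipped; serves as the convolution in this sketch)."""
    p = k.shape[0] // 2
    Ap = np.pad(A, p)
    out = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            out[i, j] = np.sum(Ap[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

# Random stand-ins for what the method would learn from observed entries.
X = rng.standard_normal((h, w, c))            # toy (incomplete) tensor
W1 = rng.standard_normal((c, 8))              # FCN weights (hypothetical)
W2 = rng.standard_normal((8, r))
kernels = rng.standard_normal((r, K, s, s))   # learnable bases
coeffs = rng.standard_normal((r, K, h, w))
coeffs[np.abs(coeffs) < 1.0] = 0.0            # sparse coefficient maps

Xt = fcn_transform(X, W1, W2)                 # transform component
# Convolutional component: each transformed frontal slice is a sum of
# kernel * coefficient-map convolutions, replacing singular matrices.
recon_t = np.stack([sum(conv2d_same(coeffs[m, k], kernels[m, k])
                        for k in range(K)) for m in range(r)], axis=-1)

# Zero-shot objective (sketch): data fit in the transformed domain plus
# an l1 penalty encouraging compact (sparse) coefficients; Adam would
# minimize this over W1, W2, kernels, and coeffs.
loss = np.sum((Xt - recon_t) ** 2) + 0.1 * np.sum(np.abs(coeffs))
print(Xt.shape, recon_t.shape)
```

In this toy synthesis the representation's compactness is simply the fraction of zero entries in `coeffs`; the paper's claim is that, with learned kernels, real tensor data admit sparser coefficients than under singular-vector bases.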
Pages: 12