Robust Tensor Completion With Side Information

Cited by: 0
Authors
Wang, Yao [1]
Yi, Qianxin [1]
Yang, Yiyang [1]
Gao, Shanxing [2]
Tang, Shaojie [3]
Wang, Di [1]
Affiliations
[1] Xi An Jiao Tong Univ, Ctr Intelligent Decis Making & Machine Learning, Sch Management, Xian 710049, Peoples R China
[2] Xi An Jiao Tong Univ, Sch Management, Dept Mkt, Xian 710049, Peoples R China
[3] Univ Buffalo, Dept Management Sci & Syst, Buffalo, NY 14222 USA
Funding
National Natural Science Foundation of China
Keywords
Tensors; Complexity theory; Motion pictures; Recommender systems; Principal component analysis; Noise; Reviews; Predictive models; Noise level; Data models; Robust tensor completion; side information; transformed t-SVD; link prediction; recommender systems; MATRIX FACTORIZATION; RECOVERY; MODELS;
DOI
10.1109/TKDE.2025.3566441
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Although robust tensor completion has been extensively studied, the effect of incorporating side information has not been explored. In this article, we fill this gap by developing a novel high-order robust tensor completion model that incorporates both latent and explicit side information. We base our model on the transformed t-product because the corresponding tensor tubal rank can characterize the inherent low-rank structure of a tensor. We study the effect of side information on sample complexity and prove that our model requires fewer observations than other tensor recovery methods when the side information is perfect, which theoretically shows that informative side information is beneficial for learning. Extensive experimental results on synthetic and real data further demonstrate the superiority of the proposed method over several popular alternatives. In particular, we evaluate our solution on two important applications, namely link prediction in signed networks and rating prediction in recommender systems, and show that the proposed model, which exploits side information during learning, outperforms other methods on such low-rank tensor data. Furthermore, when dealing with varying dimensions, we also design an online algorithm for robust tensor completion with side information and validate its effectiveness on a real-world traffic dataset in the supplementary material.
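The abstract builds on the transformed t-product and the associated tensor tubal rank. As a minimal sketch (not the authors' implementation), the tubal rank of a third-order tensor under an invertible transform can be computed by applying the transform along the tube (third) mode and taking the maximum matrix rank over the transformed frontal slices; the function name `tubal_rank`, the default DFT transform, and the toy construction below are illustrative assumptions.

```python
import numpy as np

def tubal_rank(X, transform=lambda T: np.fft.fft(T, axis=2), tol=1e-10):
    """Tubal rank of a 3-way tensor under a transformed t-product (sketch).

    Apply an invertible transform along the third (tube) mode, then take the
    maximum matrix rank over the transformed frontal slices. The default DFT
    corresponds to the classical t-SVD setting.
    """
    X_hat = transform(X)
    return max(np.linalg.matrix_rank(X_hat[:, :, k], tol=tol)
               for k in range(X.shape[2]))

# Toy check: a t-product of factors with r shared slices generically
# produces a tensor of tubal rank r.
rng = np.random.default_rng(0)
n1, n2, n3, r = 20, 15, 8, 3
A = rng.standard_normal((n1, r, n3))
B = rng.standard_normal((r, n2, n3))
A_hat = np.fft.fft(A, axis=2)
B_hat = np.fft.fft(B, axis=2)
X_hat = np.stack([A_hat[:, :, k] @ B_hat[:, :, k] for k in range(n3)], axis=2)
X = np.real(np.fft.ifft(X_hat, axis=2))   # t-product A * B in the original domain
print(tubal_rank(X))                      # expected: 3
```

With a different unitary transform (e.g., a DCT or a learned transform), the same routine applies by swapping the `transform` argument, which is the flexibility the transformed t-product framework provides.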
Pages: 4805-4819
Page count: 15