Accelerated non-negative tensor completion via integer programming

Cited by: 0
Authors
Pan, Wenhao [1 ,2 ]
Aswani, Anil [3 ]
Chen, Chen [4 ]
Affiliations
[1] Univ Calif Berkeley, Dept Stat, Berkeley, CA 94720 USA
[2] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
[3] Univ Calif Berkeley, Ind Engn & Operat Res, Berkeley, CA USA
[4] Ohio State Univ, Integrated Syst Engn, Columbus, OH USA
Keywords
tensor completion; integer programming; conditional gradient method; acceleration; benchmarking; algorithms
DOI
10.3389/fams.2023.1153184
Chinese Library Classification
O1 [Mathematics]
Discipline classification code
0701; 070101
Abstract
The problem of tensor completion has applications in healthcare, computer vision, and other domains. However, past approaches to tensor completion have faced a tension in that they either have polynomial-time computation but require exponentially more samples than the information-theoretic rate, or they use fewer samples but require solving NP-hard problems for which there are no known practical algorithms. A recent approach, based on integer programming, resolves this tension for non-negative tensor completion. It achieves the information-theoretic sample complexity rate and deploys the blended conditional gradients algorithm, which requires a linear (in numerical tolerance) number of oracle steps to converge to the global optimum. The tradeoff in this approach is that, in the worst case, the oracle step requires solving an integer linear program. Despite this theoretical limitation, numerical experiments show that this algorithm can, on certain instances, scale up to 100 million entries while running on a personal computer. The goal of this study is to further enhance this algorithm, with the intention to expand both the breadth and scale of instances that can be solved. We explore several variants that can maintain the same theoretical guarantees as the algorithm but offer potentially faster computation. We consider different data structures, acceleration of gradient descent steps, and the use of the blended pairwise conditional gradients algorithm. We describe the original approach and these variants, and conduct numerical experiments in order to explore various tradeoffs in these algorithmic design choices.
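The abstract builds on the conditional gradient (Frank-Wolfe) family of methods, in which each iteration calls a linear minimization oracle (LMO) over the feasible set; in the paper's setting that oracle is, in the worst case, an integer linear program. As an illustrative sketch only (not the authors' implementation), the basic loop can be shown on a toy problem where the LMO is trivial: minimizing a quadratic over the probability simplex, whose vertices are the standard basis vectors.

```python
import numpy as np

# Hedged sketch of the plain Frank-Wolfe / conditional gradient loop that
# blended conditional gradients refines. We minimize f(x) = 0.5*||x - b||^2
# over the probability simplex. Here the linear minimization oracle (LMO)
# is trivial; in the paper's non-negative tensor completion setting the
# analogous oracle step is an integer linear program.

def lmo_simplex(grad):
    """LMO over the simplex: the vertex e_i minimizing <grad, e_i>."""
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def frank_wolfe(b, steps=200):
    n = len(b)
    x = np.full(n, 1.0 / n)            # start at the simplex barycenter
    for t in range(steps):
        grad = x - b                   # gradient of 0.5*||x - b||^2
        v = lmo_simplex(grad)          # oracle call
        gamma = 2.0 / (t + 2)          # standard step-size schedule
        x = (1.0 - gamma) * x + gamma * v  # convex combination stays feasible
    return x

b = np.array([0.7, 0.2, 0.1])          # b lies in the simplex, so x* = b
x = frank_wolfe(b)
```

The variants studied in the paper (blended conditional gradients, blended pairwise conditional gradients, accelerated gradient steps) keep this oracle-driven structure but reuse previously found vertices to reduce the number of expensive oracle calls.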
Pages: 9