Matrix factorization with neural networks

Times Cited: 6
Authors
Camilli, Francesco [1 ]
Mezard, Marc [2 ]
Affiliations
[1] Abdus Salam Int Ctr Theoret Phys, Quantitat Life Sci, I-34151 Trieste, Italy
[2] Bocconi Univ, Dept Comp Sci, I-20100 Milan, Italy
Keywords
SPARSE REPRESENTATION; LARGEST EIGENVALUE; ALGORITHMS;
DOI
10.1103/PhysRevE.107.064308
CLC Classification
O35 [Fluid Mechanics]; O53 [Plasma Physics];
Discipline Codes
070204 ; 080103 ; 080704 ;
Abstract
Matrix factorization is an important mathematical problem encountered in the context of dictionary learning, recommendation systems, and machine learning. We introduce a decimation scheme that maps it to neural network models of associative memory and provide a detailed theoretical analysis of its performance, showing that decimation is able to factorize extensive-rank matrices and to denoise them efficiently. In the case of a binary prior on the signal components, we introduce a decimation algorithm based on a ground-state search of the neural network, whose performance matches the theoretical prediction.
Pages: 12
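
As a rough illustration of the decimation idea described in the abstract, the sketch below (a minimal toy, not the authors' algorithm; the observation model Y = xi xi^T / sqrt(N), the function names, and all normalizations are assumptions made here for concreteness) treats the observed matrix as Hopfield-like couplings, extracts one binary pattern per round via a greedy zero-temperature ground-state search, and deflates the matrix before searching for the next pattern.

    import numpy as np

    def ground_state(J, rng, n_sweeps=200):
        # Greedy zero-temperature dynamics for a Hopfield-like energy
        # E(s) = -0.5 * s @ J @ s: flip any spin that lowers the energy,
        # stop at a local minimum.
        N = J.shape[0]
        s = rng.choice([-1.0, 1.0], size=N)
        for _ in range(n_sweeps):
            changed = False
            for i in rng.permutation(N):
                field = J[i] @ s - J[i, i] * s[i]  # local field, no self-coupling
                if s[i] * field < 0:               # misaligned spin: flip it
                    s[i] = -s[i]
                    changed = True
            if not changed:
                break
        return s

    def decimate(Y, P, seed=0):
        # Recover P binary patterns one at a time: find a ground state of
        # the network built from the current matrix, then subtract its
        # rank-one contribution (deflation) before the next round.
        rng = np.random.default_rng(seed)
        N = Y.shape[0]
        R = Y.copy()
        patterns = []
        for _ in range(P):
            s = ground_state(R, rng)
            patterns.append(s)
            R = R - np.outer(s, s) / np.sqrt(N)
        return np.column_stack(patterns)

    # Toy check: noiseless Y = xi @ xi.T / sqrt(N) with binary patterns.
    N, P = 200, 3
    rng = np.random.default_rng(1)
    xi = rng.choice([-1.0, 1.0], size=(N, P))
    Y = xi @ xi.T / np.sqrt(N)
    xi_hat = decimate(Y, P)
    print(np.round(np.abs(xi_hat.T @ xi) / N, 2))  # overlaps, up to sign/permutation

In this noiseless, low-rank toy setting the printed overlap matrix should have one entry per column close to 1; the paper's analysis concerns the much harder extensive-rank and noisy regimes, where the decimation scheme and its normalizations are treated with care.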