Resonator Networks, 2: Factorization Performance and Capacity Compared to Optimization-Based Methods

Cited by: 24
Authors
Kent, Spencer J. [1 ,2 ]
Frady, E. Paxon [1 ,3 ,4 ]
Sommer, Friedrich T. [1 ,3 ,4 ]
Olshausen, Bruno A. [1 ,3 ,5 ]
Affiliations
[1] Univ Calif Berkeley, Redwood Ctr Theoret Neurosci, Berkeley, CA 94720 USA
[2] Univ Calif Berkeley, Elect Engn & Comp Sci, Berkeley, CA 94720 USA
[3] Univ Calif Berkeley, Helen Wills Neurosci Inst, Berkeley, CA 94720 USA
[4] Intel Labs, Neuromorph Comp Lab, San Francisco, CA 94111 USA
[5] Univ Calif Berkeley, Sch Optometry, Berkeley, CA 94720 USA
Funding
U.S. National Science Foundation
Keywords
COLLECTIVE COMPUTATIONAL PROPERTIES; NEURAL-NETWORKS; THRESHOLDING ALGORITHM; TENSOR DECOMPOSITIONS; HOPFIELD MODEL; CONVERGENCE; UNIQUENESS; STORAGE; ARRAYS
DOI
10.1162/neco_a_01329
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We develop theoretical foundations of resonator networks, a new type of recurrent neural network introduced in Frady, Kent, Olshausen, and Sommer (2020), a companion article in this issue, to solve a high-dimensional vector factorization problem arising in Vector Symbolic Architectures. Given a composite vector formed by the Hadamard product of a discrete set of high-dimensional vectors, a resonator network can efficiently decompose the composite into these factors. We compare the performance of resonator networks against optimization-based methods, including Alternating Least Squares and several gradient-based algorithms, showing that resonator networks are superior in several important ways. This advantage is achieved by leveraging a combination of nonlinear dynamics and searching in superposition, whereby estimates of the correct solution are formed from a weighted superposition of all possible solutions. While the alternative methods also search in superposition, the dynamics of resonator networks allow them to strike a more effective balance between exploring the solution space and exploiting local information to drive the network toward probable solutions. Resonator networks are not guaranteed to converge, but within a particular regime they almost always do. In exchange for relaxing the guarantee of global convergence, resonator networks are dramatically more effective at finding factorizations than all alternative approaches considered.
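To make the factorization problem and the search-in-superposition dynamics concrete, here is a minimal NumPy sketch, assuming bipolar (±1) codebooks and a three-factor Hadamard binding as described in the abstract. The sizes (N, D, F) and names (codebooks, est, sgn) are illustrative choices, not taken from the article, and the update shown (unbind the other factors' current estimates, project onto the codebook, re-threshold) is a sketch of the resonator iteration rather than a definitive implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

N = 1000  # vector dimensionality (illustrative)
D = 20    # codewords per factor (illustrative)
F = 3     # number of factors

def sgn(x):
    # Bipolar sign that never returns 0 (ties break to +1).
    return np.where(x >= 0, 1, -1)

# One random bipolar codebook per factor; rows are codewords.
codebooks = [rng.choice([-1, 1], size=(D, N)) for _ in range(F)]

# Choose ground-truth codewords and bind them with the Hadamard product.
true_idx = [int(rng.integers(D)) for _ in range(F)]
s = np.prod([codebooks[f][true_idx[f]] for f in range(F)], axis=0)

# Search in superposition: initialize each factor estimate as the
# (thresholded) sum of every codeword it might be.
est = [sgn(cb.sum(axis=0)) for cb in codebooks]

for _ in range(100):
    for f in range(F):
        # Unbind the other factors' current estimates from the composite
        # (a +/-1 vector is its own inverse under the Hadamard product)...
        others = np.prod([est[g] for g in range(F) if g != f], axis=0)
        noisy = s * others
        # ...then project onto the codebook and re-threshold, so the new
        # estimate is a cleaned-up weighted superposition of codewords.
        est[f] = sgn(codebooks[f].T @ (codebooks[f] @ noisy))

recovered = [int(np.argmax(cb @ e)) for cb, e in zip(codebooks, est)]
print(recovered == true_idx)  # True whenever the dynamics have converged
```

As the abstract notes, convergence is not guaranteed; but for a small search space like this one (D**F = 8,000 combinations at N = 1,000) the iteration almost always settles on the correct factors within a few sweeps.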
Pages: 2332+
Page count: 57