Spiking representation learning for associative memories

Cited by: 2
Authors
Ravichandran, Naresh [1]
Lansner, Anders [1,2]
Herman, Pawel [1,3,4]
Affiliations
[1] KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, Dept Computat Sci & Technol, Computat Cognit Brain Sci Grp, Stockholm, Sweden
[2] Stockholm Univ, Dept Math, Stockholm, Sweden
[3] KTH Royal Inst Technol, Digital Futures, Stockholm, Sweden
[4] Swedish E Sci Res Ctr SeRC, Stockholm, Sweden
Funding
Swedish Research Council;
Keywords
spiking neural networks; associative memory; attractor dynamics; Hebbian learning; structural plasticity; BCPNN; representation learning; unsupervised learning; silent synapses; neural networks; neuronal circuits; models; dynamics; organization; completion; principles; separation;
DOI
10.3389/fnins.2024.1439414
Chinese Library Classification
Q189 [Neuroscience];
Discipline Code
071006;
Abstract
Networks of interconnected neurons communicating through spiking signals form the bedrock of neural computation. Our brain's spiking neural networks have the computational capacity to achieve complex pattern recognition and cognitive functions effortlessly. However, solving real-world problems with artificial spiking neural networks (SNNs) has proved difficult for a variety of reasons. Crucially, scaling SNNs to large networks and processing large-scale real-world datasets have been challenging, especially compared to their non-spiking deep learning counterparts. The critical capability required of SNNs is to learn distributed representations from data and use these representations for perceptual, cognitive, and memory operations. In this work, we introduce a novel SNN that performs unsupervised representation learning and associative memory operations by leveraging Hebbian synaptic and activity-dependent structural plasticity, coupled with neuron units modelled as Poisson spike generators with sparse firing (~1 Hz mean and ~100 Hz maximum firing rate). Crucially, the architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations with recurrent projections for forming associative memories. We evaluated the model on properties relevant to attractor-based associative memories, such as pattern completion, perceptual rivalry, distortion resistance, and prototype extraction.
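The ingredients named in the abstract (Poisson spiking units, Hebbian learning, recurrent attractor dynamics for pattern completion) can be illustrated with a minimal sketch. Note this is a generic Hopfield-style outer-product memory with Bernoulli-approximated Poisson spiking, not the paper's BCPNN model; all function names, network sizes, and rate parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spikes(rates_hz, dt=0.001):
    """One timestep of Bernoulli-approximated Poisson spiking.

    A unit firing at rate r (Hz) emits a spike in a bin of width dt
    with probability r * dt. The abstract's regime (~1 Hz mean,
    ~100 Hz maximum) corresponds to mostly silent units per bin.
    """
    return (rng.random(rates_hz.shape) < rates_hz * dt).astype(float)

def hebbian_weights(patterns):
    """Outer-product Hebbian storage of +/-1 patterns (Hopfield-style)."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W / len(patterns)

def complete(W, probe, steps=10):
    """Recurrent attractor dynamics: relax a noisy cue toward a stored pattern."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties deterministically
    return s

# Store two random binary patterns, then recall from a distorted cue.
N = 64
patterns = [rng.choice([-1, 1], size=N) for _ in range(2)]
W = hebbian_weights(patterns)

cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1  # distort 10 of 64 bits

recalled = complete(W, cue)
print("overlap with stored pattern:", int(recalled @ patterns[0]))
```

With only two stored patterns in 64 units, the recurrent dynamics reliably restore the distorted cue (overlap close to N), which is the pattern-completion property the paper evaluates; the paper itself achieves this with spiking units and BCPNN plasticity rather than the sign updates used here.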
Pages: 21