NMDA-driven dendritic modulation enables multitask representation learning in hierarchical sensory processing pathways

Cited by: 4
Authors
Wybo, Willem A. M. [1 ,2 ,3 ]
Tsai, Matthias C. [4 ]
Tran, Viet Anh Khoa [1 ,2 ,3 ,5 ]
Illing, Bernd [6 ]
Jordan, Jakob [4 ]
Morrison, Abigail [1 ,2 ,3 ,5 ]
Senn, Walter [4 ]
Affiliations
[1] Julich Res Ctr, Inst Neurosci & Med INM, D-52428 Julich, Germany
[2] Julich Res Ctr, Inst Adv Simulat IAS 6, D-52428 Julich, Germany
[3] Julich Res Ctr, JARA Inst Brain Struct Funct Relationships INM 10, D-52428 Julich, Germany
[4] Univ Bern, Dept Physiol, CH-3012 Bern, Switzerland
[5] Rhein Westfal TH Aachen, Fac 1, Dept Comp Sci 3, D-52074 Aachen, Germany
[6] Ecole Polytech Fed Lausanne, Lab Computat Neurosci, CH-1015 Lausanne, Switzerland
Keywords
dendritic computation; contextual adaptation; multitask learning; contrastive learning; self-supervised learning; PRINCIPAL COMPONENTS; SYNAPTIC INHIBITION; PYRAMIDAL NEURON; LAYER 5; TOP; FEEDFORWARD; FEEDBACK; SPIKES; GAIN; CONNECTIONS;
DOI
10.1073/pnas.2300558120
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline classification codes
07; 0710; 09;
Abstract
While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further along the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-methyl-D-aspartate (NMDA) spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that locally predicting whether representations originate from different inputs or from different contextual modulations of the same input yields representation learning of hierarchical feedforward weights, across processing layers, that accommodates a multitude of contexts.
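The core idea in the abstract — frozen, context-independent feedforward weights whose outputs are reshaped per context by multiplicative dendritic modulation, trained with a Hebbian, error-modulated rule — can be illustrated with a minimal sketch. This is not the paper's biophysical model: the per-unit gain `g` stands in for NMDA-driven dendritic modulation, the toy task (report the sign of a different input feature in each context) is invented for illustration, and all names and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, context-independent feedforward weights ("prior knowledge") and a
# fixed readout; the only plastic parameters are per-context multiplicative
# gains g, a stand-in for NMDA-driven dendritic modulation of each neuron.
n_in, n_hid, n_ctx = 2, 20, 2
W = rng.normal(size=(n_hid, n_in))         # frozen feedforward weights
w_out = rng.normal(scale=0.3, size=n_hid)  # frozen readout weights
g = np.ones((n_ctx, n_hid))                # learned dendritic gains per context

def forward(x, c):
    h = np.maximum(W @ x, 0.0)             # feedforward (ReLU) activations
    return h, w_out @ (g[c] * h)           # gains gate each unit's output

# Toy multi-context task: context 0 reports sign(x[0]), context 1 sign(x[1]).
eta = 0.1
for step in range(6000):
    c = int(rng.integers(n_ctx))
    x = rng.normal(size=n_in)
    target = np.sign(x[c])
    h, y = forward(x, c)
    err = target - y
    # Hebbian, error-modulated rule: presynaptic activity (h) times the
    # postsynaptic readout weight, gated by a global scalar error signal.
    g[c] += eta * err * w_out * h

# The same frozen feedforward pathway now serves both contexts.
acc = []
for c in range(n_ctx):
    X = rng.normal(size=(500, n_in))
    correct = [np.sign(forward(x, c)[1]) == np.sign(x[c]) for x in X]
    acc.append(float(np.mean(correct)))
```

Because only the gains are context-specific, switching tasks requires no change to `W`: the modulation reuses the fixed feedforward features, which is the transfer-learning point the abstract makes.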
Pages: 12