The neural and cognitive architecture for learning from a small sample

Times Cited: 21
Authors
Cortese, Aurelio [1 ]
De Martino, Benedetto [2 ,3 ]
Kawato, Mitsuo [1 ,4 ]
Affiliations
[1] ATR Inst Int, Computat Neurosci Labs, Kyoto, Japan
[2] UCL, Inst Cognit Neurosci, Alexandra House,17-19 Queen Sq, London WC1N 3AR, England
[3] UCL, Wellcome Ctr Human Neuroimaging, London WC1N 3BG, England
[4] RIKEN, Ctr Adv Intelligence Project, ATR Inst Int, Kyoto, Japan
Funding
Wellcome Trust, UK;
Keywords
REPRESENTATIONS; HIPPOCAMPUS; CONFIDENCE; SUBSTRATE; INFERENCE; NETWORKS; DYNAMICS; MEMORY; REWARD; MODEL;
DOI
10.1016/j.conb.2019.02.011
Chinese Library Classification: Q189 [Neuroscience];
Discipline Classification Code: 071006;
Abstract
Artificial intelligence algorithms are capable of fantastic exploits, yet they are still grossly inefficient compared with the brain's ability to learn from few exemplars or to solve problems that have not been explicitly defined. What is the secret that the evolution of human intelligence has unlocked? Generalization is one answer, but there is more to it. The brain does not solve difficult problems directly; it recasts them into new, more tractable problems. Here, we propose a model whereby higher cognitive functions interact profoundly with reinforcement learning to drastically reduce the degrees of freedom of the search space, simplifying complex problems and fostering more efficient learning.
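The abstract's central quantitative intuition — that reducing the degrees of freedom of the search space makes learning drastically more efficient — can be illustrated with a minimal sketch (not from the paper; the function name and parameters are illustrative): in a discrete space with a fixed number of options per dimension, the number of candidate solutions grows exponentially with the number of free dimensions, so recasting a problem into fewer dimensions shrinks the search exponentially.

```python
# Illustrative only: search-space size grows exponentially with the
# number of free dimensions (degrees of freedom). Halving the
# dimensionality of a 10-options-per-dimension problem from 6 to 3
# shrinks the space by a factor of 1000.

def search_space_size(options_per_dim: int, dims: int) -> int:
    """Number of candidate solutions in a discrete search space."""
    return options_per_dim ** dims

full = search_space_size(10, 6)      # unconstrained problem
reduced = search_space_size(10, 3)   # after recasting into fewer dimensions

print(full, reduced, full // reduced)  # → 1000000 1000 1000
```

This is only a combinatorial caricature of the curse of dimensionality; the paper's proposal concerns how higher cognitive functions achieve such reductions within reinforcement learning, not this toy counting argument.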
Pages: 133-141
Page count: 9