What AI, Neuroscience, and Cognitive Science Can Learn from Each Other: An Embedded Perspective

Cited by: 1
Author
Achler, Tsvi [1 ]
Affiliation
[1] Optimizing Mind Inc, Palo Alto, CA 94306 USA
Keywords
Catastrophic forgetting; Independent and identically distributed; Regulatory feedback; Salience; Biased competition; Rehearsing; Neural networks; Visual search; Model; Asymmetries; Systems
DOI
10.1007/s12559-023-10194-9
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Scientists in the fields of AI and neuroscience can learn much from each other, but unfortunately, since about the 1950s the exchange has been mostly one-sided: neuroscientists have learned from AI, but much less so the other way around. I argue this is holding back both our understanding of the brain and progress in AI. Current AI ("neural network"/deep learning algorithms) and the brain are very different from each other. The brain does not seem to use trial-and-error learning algorithms such as backpropagation to modify weights and, more importantly, does not require the cumbersome rehearsal that trial-and-error implementations need. The brain can learn information in a modular and truly "one-shot" fashion as the information is encountered, while such AI cannot. Instead of backpropagation and rehearsal, there is evidence that the brain regulates its inputs during recognition using regulatory feedback: from the outputs back to the inputs, the same inputs that activate those outputs. This is supported by evidence from neuroscience and cognitive psychology but is not present in current algorithms. Thus, the brain provides an abundance of evidence about its underlying algorithms, and while computer science tools and analysis are essential, algorithms guided by computer science should not be standardized into neuroscience theories.
Pages: 2428-2436
Page count: 9
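The abstract's central mechanism, recognition driven by feedback from the outputs back onto the very inputs that activate them, can be illustrated with a small numerical sketch. The Python snippet below is a minimal, hypothetical rendering of that idea, not the formulation published in the paper: the function name, the divide-and-renormalize update rule, and the symbols `x`, `W`, and `y` are illustrative assumptions. It also assumes that "learning" is nothing more than storing an input pattern as a weight row when it is first encountered, which is how the abstract contrasts one-shot, rehearsal-free acquisition with backpropagation.

```python
import numpy as np

def regulatory_feedback_recognition(x, W, n_iters=50, eps=1e-9):
    """Iteratively settle output activations against regulated inputs.

    x : (n_inputs,) non-negative input activations.
    W : (n_outputs, n_inputs) non-negative weights; here each row is
        simply a stored input pattern for one output class, written in
        one shot when the pattern is first encountered (no
        backpropagation, no rehearsal of old data).
    """
    n_outputs, _ = W.shape
    y = np.ones(n_outputs) / n_outputs            # start with uniform output activity
    for _ in range(n_iters):
        feedback = W.T @ y + eps                  # feedback from outputs onto their own inputs
        q = x / feedback                          # inputs regulated (divided) by that feedback
        y = y * (W @ q) / (W.sum(axis=1) + eps)   # each output re-reads its regulated inputs
    return y

# Two overlapping stored patterns over four input features:
W = np.array([[1., 1., 0., 0.],    # output A uses inputs 0 and 1
              [0., 1., 1., 1.]])   # output B uses inputs 1, 2 and 3
x = np.array([1., 1., 0., 0.])     # present pattern A

print(regulatory_feedback_recognition(x, W))      # A's activation dominates B's
```

Note that no weights change during recognition in this sketch; competition between the overlapping outputs is resolved by the output-to-input feedback alone, which is the property the abstract contrasts with backpropagation-trained networks.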