From internal models toward metacognitive AI

Citations: 0
Authors
Mitsuo Kawato
Aurelio Cortese
Affiliations
[1] Computational Neuroscience Laboratory,ATR Brain Information Communication Research Group
Source
Biological Cybernetics | 2021 / Volume 115
Keywords
Internal models; Forward and inverse models; Cerebellum; Prefrontal cortex; Metacognition; Consciousness; Artificial intelligence; Hierarchical reinforcement learning;
DOI
Not available
Abstract
In several papers published in Biological Cybernetics in the 1980s and 1990s, Kawato and colleagues proposed computational models explaining how internal models are acquired in the cerebellum. These models were later supported by neurophysiological experiments in monkeys and neuroimaging experiments in humans. These early studies influenced neuroscience research ranging from basic sensorimotor control to higher cognitive functions. One of the most perplexing enigmas related to internal models is understanding the neural mechanisms that enable animals to learn high-dimensional problems with so few trials. Consciousness and metacognition (the ability to monitor one's own thoughts) may be part of the solution to this enigma. Based on literature reviews of the past 20 years, we propose here a computational neuroscience model of metacognition. The model comprises a modular, hierarchical reinforcement-learning architecture of parallel and layered generative-inverse model pairs. In the prefrontal cortex, a distributed executive network called the "cognitive reality monitoring network" (CRMN) orchestrates conscious involvement of generative-inverse model pairs in perception and action. Based on mismatches between the computations of generative and inverse models, as well as on reward prediction errors, the CRMN computes a "responsibility signal" that gates the selection and learning of pairs in perception, action, and reinforcement learning. High responsibility signals are assigned to pairs that best capture the external world, that are competent in movement (small mismatch), and that are capable of reinforcement learning (small reward-prediction error). The CRMN selects pairs with higher responsibility signals as objects of metacognition, and consciousness is determined by the entropy of responsibility signals across all pairs.
This model could lead to a new generation of AI that exhibits metacognition, consciousness, dimension reduction, selection of modules and their corresponding representations, and learning from small samples. It may also lead to a new scientific paradigm that enables the causal study of consciousness by combining the CRMN with decoded neurofeedback.
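The responsibility-signal mechanism described above can be sketched as a softmax competition among model pairs. The following is a minimal illustrative sketch, not the authors' implementation: the function names, the additive combination of mismatch and reward-prediction error, and the temperature parameter are all assumptions introduced here for illustration. Pairs whose errors are small receive high responsibility; the Shannon entropy of the resulting distribution is the quantity the abstract links to consciousness.

```python
import math

def responsibility_signals(mismatch_errors, reward_pred_errors, temperature=1.0):
    """Softmax-style responsibility over generative-inverse model pairs.
    Pairs with small generative-inverse mismatch and small reward-prediction
    error get high responsibility. The additive error combination and the
    temperature are illustrative assumptions, not from the paper."""
    scores = [-(m + r) / temperature
              for m, r in zip(mismatch_errors, reward_pred_errors)]
    mx = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def responsibility_entropy(lams):
    """Shannon entropy of responsibility signals across all pairs; in the
    CRMN model this entropy is proposed to index consciousness."""
    return -sum(l * math.log(l) for l in lams if l > 0)

# Three hypothetical pairs: the first has the smallest combined error,
# so it receives the highest responsibility and is selected as the
# object of metacognition.
lams = responsibility_signals([0.1, 0.8, 1.2], [0.05, 0.4, 0.9])
selected = max(range(len(lams)), key=lams.__getitem__)
```

When one pair clearly dominates, the entropy is low (far below the uniform maximum of log N); when all pairs are equally plausible, the entropy approaches log N, the regime the model associates with diffuse, high-entropy states.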
Pages: 415-430 (15 pages)