Disentangled representations for causal cognition

Cited by: 1
Authors
Torresan, Filippo [1]
Baltieri, Manuel [1,2]
Affiliations
[1] Univ Sussex, Brighton BN1 9RH, E Sussex, England
[2] Araya Inc, Chiyoda City, Tokyo 1010025, Japan
Keywords
Causal cognition; Animal cognition; Causal reinforcement learning; Disentangled representations; Disentanglement; Caledonian crows; Great apes; Tool use; Artificial intelligence; Capuchin monkeys; Young children; Models; Solve; Prediction; Emulation
DOI
10.1016/j.plrev.2024.10.003
Chinese Library Classification
Q [Biological Sciences]
Discipline codes
07; 0710; 09
Abstract
Complex adaptive agents consistently achieve their goals by solving problems that seem to require an understanding of causal information, i.e. information about the causal relationships among elements of combined agent-environment systems. Causal cognition studies and describes the main characteristics of causal learning and reasoning in human and nonhuman animals, offering a conceptual framework to discuss cognitive performance based on the level of apparent causal understanding of a task. Despite the use of formal intervention-based models of causality, including causal Bayesian networks, psychological and behavioural research on causal cognition does not yet offer a computational account that operationalises how agents acquire a causal understanding of the world seemingly from scratch, i.e. without a priori knowledge of relevant features of the environment. Research on causality in machine and reinforcement learning, especially work on disentanglement as a candidate process for building causal representations, represents on the other hand a concrete attempt at designing artificial agents that can learn about causality, shedding light on the inner workings of natural causal cognition. In this work, we connect these two areas of research to build a unifying framework for causal cognition that offers a computational perspective on studies of animal cognition and provides insights into the development of new algorithms for causal reinforcement learning in AI.
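The abstract contrasts ordinary conditioning with formal intervention in causal Bayesian networks. A minimal sketch of that distinction, using a hypothetical three-variable toy model (not taken from the paper): a confounder Z influences both a treatment X and an outcome Y, so observing X=1 and forcing X=1 via Pearl's do-operator yield different predictions for Y.

```python
# Toy causal Bayesian network (hypothetical example): Z -> X, Z -> Y, X -> Y.
# All variables are binary; probabilities below are illustrative values.

P_Z = {0: 0.5, 1: 0.5}                      # P(Z = z)
P_X1_given_Z = {0: 0.2, 1: 0.8}             # P(X = 1 | Z = z)
P_Y1_given_XZ = {(0, 0): 0.1, (1, 0): 0.5,  # P(Y = 1 | X = x, Z = z)
                 (0, 1): 0.5, (1, 1): 0.9}

def p_x(x, z):
    """P(X = x | Z = z) for binary x."""
    return P_X1_given_Z[z] if x == 1 else 1 - P_X1_given_Z[z]

def p_y_given_x(x):
    """Observational: P(Y = 1 | X = x), summing over the confounder Z.
    Seeing X = x shifts our beliefs about Z (Bayes' rule)."""
    p_x_marginal = sum(P_Z[z] * p_x(x, z) for z in (0, 1))
    return sum(P_Z[z] * p_x(x, z) / p_x_marginal * P_Y1_given_XZ[(x, z)]
               for z in (0, 1))

def p_y_do_x(x):
    """Interventional: P(Y = 1 | do(X = x)). Setting X severs the edge
    Z -> X, so Z keeps its prior distribution."""
    return sum(P_Z[z] * P_Y1_given_XZ[(x, z)] for z in (0, 1))

print(round(p_y_given_x(1), 2))  # 0.82: seeing X=1 makes Z=1 likely too
print(round(p_y_do_x(1), 2))     # 0.7: forcing X=1 leaves Z at its prior
```

The gap between the two quantities (0.82 vs. 0.70) is exactly the confounding effect that intervention-based models of causality are designed to expose; with no confounder, the two computations would coincide.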
Pages: 343-381
Page count: 39