Learning in Hybrid Active Inference Models

Cited by: 0
Authors
Collis, Poppy [1 ]
Singh, Ryan [1 ,2 ]
Kinghorn, Paul F. [1 ]
Buckley, Christopher L. [1 ,2 ]
Affiliations
[1] Univ Sussex, Sch Engn & Informat, Brighton, E Sussex, England
[2] VERSES Res Lab, Los Angeles, CA USA
Source
Active Inference (IWAI 2024), 2025, vol. 2193
Keywords
hybrid state-space models; decision-making; piecewise affine systems
DOI
10.1007/978-3-031-77138-5_4
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
An open problem in artificial intelligence is how systems can flexibly learn discrete abstractions that are useful for solving inherently continuous problems. Previous work in computational neuroscience has considered this functional integration of discrete and continuous variables during decision-making under the formalism of active inference [13,29]. However, that work focuses on the expressive physical implementation of categorical decisions, and the hierarchical mixed generative model is assumed to be known. As a consequence, it is unclear how this framework might be extended to the learning of appropriate coarse-grained variables for a given task. In light of this, we present a novel hierarchical hybrid active inference agent in which a high-level discrete active inference planner sits above a low-level continuous active inference controller. We make use of recent work on recurrent switching linear dynamical systems (rSLDS), which learn meaningful discrete representations of complex continuous dynamics via piecewise linear decomposition [22]. The representations learnt by the rSLDS inform the structure of the hybrid decision-making agent and allow us to (1) lift decision-making into the discrete domain, enabling us to exploit information-theoretic exploration bonuses; (2) specify temporally abstracted sub-goals in a manner reminiscent of the options framework [34]; and (3) 'cache' the approximate solutions to low-level problems in the discrete planner. We apply our model to the sparse Continuous Mountain Car task, demonstrating fast system identification via enhanced exploration and successful planning through the delineation of abstract sub-goals.
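To make the core mechanism concrete, the following is a minimal sketch of the generative step in a recurrent switching linear dynamical system as described above: a discrete mode is sampled from a softmax over linear functions of the continuous state (the "recurrent" switching), and the state then evolves under that mode's affine dynamics. All dimensions, parameter values, and names (`A`, `b`, `R`, `r`, `step`) are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-mode rSLDS in a 2-D continuous state space.
K, D = 2, 2
A = np.stack([np.eye(D) * 0.95, np.eye(D) * 0.80])             # per-mode dynamics A_k
b = np.stack([np.array([0.05, 0.0]), np.array([-0.05, 0.0])])  # per-mode offsets b_k
R = np.array([[1.0, 0.0], [-1.0, 0.0]])                        # switching hyperplanes
r = np.zeros(K)                                                # switching biases

def step(x):
    """One generative step: pick a discrete mode from the continuous
    state via a softmax (recurrent switching), then apply that mode's
    piecewise affine dynamics."""
    logits = R @ x + r
    p = np.exp(logits - logits.max())
    p /= p.sum()
    z = rng.choice(K, p=p)          # discrete mode z_t ~ softmax(R x_t + r)
    x_next = A[z] @ x + b[z]        # continuous update x_{t+1} = A_z x_t + b_z
    return z, x_next

# Roll out a short trajectory; the mode sequence is the kind of discrete
# abstraction the high-level planner would operate over.
x = np.array([1.0, 0.0])
traj = []
for _ in range(5):
    z, x = step(x)
    traj.append(z)
```

Because switching depends on where the state is, each mode carves out a region of state space with its own linear dynamics, which is what makes the learnt modes usable as coarse-grained planning variables.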
Pages: 49-71
Page count: 23
References
37 references in total (first 10 shown)
[1] Abdulsamad, H., Peters, J.: Model-based reinforcement learning via stochastic hybrid models. IEEE Open Journal of Control Systems 2, 155-170 (2023)
[2] Abdulsamad, H.: Proceedings of Machine Learning Research, vol. 120, p. 904 (2020)
[3] Bemporad, A., Morari, M., Dua, V., Pistikopoulos, E.N.: The explicit linear quadratic regulator for constrained systems. Automatica 38(1), 3-20 (2002)
[4] Bemporad, A.: Proceedings of the American Control Conference, p. 1190 (2000). DOI 10.1109/ACC.2000.876688
[5] Block, A.: Provable guarantees for generative behavior cloning: bridging low-level stability and high-level behavior (2023)
[6] Borrelli, F., Bemporad, A., Fodor, M., Hrovat, D.: An MPC/hybrid system approach to traction control. IEEE Transactions on Control Systems Technology 14(3), 541-552 (2006)
[7] Coulom, R.: Lecture Notes in Computer Science, vol. 4630, p. 72 (2007)
[8] Da Costa, L., Parr, T., Sajid, N., Veselic, S., Neacsu, V., Friston, K.: Active inference on discrete state-spaces: a synthesis. Journal of Mathematical Psychology 99 (2020)
[9] Daniel, C., van Hoof, H., Peters, J., Neumann, G.: Probabilistic inference for determining options in reinforcement learning. Machine Learning 104(2-3), 337-357 (2016)
[10] Dayan, P.: Advances in Neural Information Processing Systems, vol. 5, p. 271 (1992)