Stochastic Optimal Control as Approximate Input Inference

Times Cited: 0
Authors
Watson, Joe [1 ]
Abdulsamad, Hany [1 ]
Peters, Jan [2 ]
Affiliations
[1] Tech Univ Darmstadt, Dept Comp Sci, Darmstadt, Germany
[2] Max Planck Inst Intelligent Systems Tubingen, Robot Learning Grp, Tubingen, Germany
Source
Conference on Robot Learning (CoRL), Vol. 100, 2019
Funding
EU Horizon 2020
Keywords
Stochastic Optimal Control; Approximate Inference; Optimal Feedback Control
DOI
Not available
CLC Classification Number
TP39 [Computer Applications]
Discipline Classification Codes
081203; 0835
Abstract
Optimal control of stochastic nonlinear dynamical systems is a major challenge in the domain of robot learning. Given the intractability of the global control problem, state-of-the-art algorithms focus on approximate sequential optimization techniques that rely heavily on regularization heuristics to achieve stable convergence. By building upon the duality between inference and control, we develop the view of Optimal Control as Input Estimation, devising a probabilistic stochastic optimal control formulation that iteratively infers the optimal input distributions by minimizing an upper bound of the control cost. Inference is performed through Expectation Maximization and message passing on a probabilistic graphical model of the dynamical system, and time-varying linear Gaussian feedback controllers are extracted from the joint state-action distribution. This perspective incorporates uncertainty quantification, effective initialization through priors, and the principled regularization inherent to the Bayesian treatment. Moreover, for deterministic linearized systems, our framework recovers the maximum entropy linear quadratic optimal control law. We provide a complete and detailed derivation of our probabilistic approach and highlight its advantages in comparison to other deterministic and probabilistic solvers.
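The controller-extraction step mentioned in the abstract (reading a time-varying linear Gaussian feedback law off the inferred joint state-action distribution) reduces to standard Gaussian conditioning. Below is a minimal Python/NumPy sketch of that step under the assumption that a smoother has already produced a joint Gaussian marginal over (x_t, u_t) at each time step; the function name and the toy marginals are illustrative, not code from the paper.

```python
import numpy as np

def controller_from_joint(mu, Sigma, dim_x):
    """Condition a joint Gaussian N([x; u] | mu, Sigma) on the state x to obtain
    a linear Gaussian feedback law u | x ~ N(K x + k, Sigma_u_given_x).
    (Illustrative helper; names are assumptions, not from the paper.)"""
    mu_x, mu_u = mu[:dim_x], mu[dim_x:]
    S_xx = Sigma[:dim_x, :dim_x]
    S_xu = Sigma[:dim_x, dim_x:]
    S_ux = Sigma[dim_x:, :dim_x]
    S_uu = Sigma[dim_x:, dim_x:]
    K = np.linalg.solve(S_xx.T, S_ux.T).T   # feedback gain  K = S_ux S_xx^{-1}
    k = mu_u - K @ mu_x                     # feedforward offset
    S_cond = S_uu - K @ S_xu                # conditional (input-noise) covariance
    return K, k, S_cond

# Toy usage: one smoothed joint marginal q(x_t, u_t) per time step
# (assumed output of the E-step message passing; values here are made up).
rng = np.random.default_rng(0)
dim_x, dim_u, horizon = 2, 1, 3
for t in range(horizon):
    A = rng.normal(size=(dim_x + dim_u, dim_x + dim_u))
    Sigma_t = A @ A.T + np.eye(dim_x + dim_u)   # random SPD joint covariance
    mu_t = rng.normal(size=dim_x + dim_u)
    K_t, k_t, S_t = controller_from_joint(mu_t, Sigma_t, dim_x)
    print(f"t={t}: K_t={K_t.ravel()}, k_t={k_t}, Sigma_u|x={S_t.ravel()}")
```

Under these Gaussian assumptions, K_t acts as the time-varying feedback gain and the conditional covariance quantifies the remaining input uncertainty at each step.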
Pages: 20