Model-Free Robust Optimal Feedback Mechanisms of Biological Motor Control

Cited by: 24
Authors
Bian, Tao [1 ]
Wolpert, Daniel M. [2 ,3 ]
Jiang, Zhong-Ping [1 ]
Affiliations
[1] NYU, Control & Networks Lab, Dept Elect & Comp Engn, Tandon Sch Engn, Brooklyn, NY 11201 USA
[2] Columbia Univ, Dept Neurosci, Zuckerman Mind Brain Behav Inst, New York, NY 10027 USA
[3] Univ Cambridge, Dept Engn, Cambridge CB2 1PZ, England
Funding
UK Wellcome Trust; US National Science Foundation;
Keywords
ADAPTIVE OPTIMAL-CONTROL; CONTINUOUS-TIME; ARM MOVEMENTS; ADAPTATION; VARIABILITY; SYSTEMS; STABILITY; MEMORY; REWARD; SIGNAL;
DOI
10.1162/neco_a_01260
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Sensorimotor tasks that humans perform are often affected by different sources of uncertainty. Nevertheless, the central nervous system (CNS) can gracefully coordinate our movements. Most learning frameworks rely on the internal model principle, which requires a precise internal representation in the CNS to predict the outcomes of our motor commands. However, learning a perfect internal model in a complex environment over a short period of time is a nontrivial problem. Indeed, achieving proficient motor skills may require years of training for some difficult tasks. Internal models alone may not be adequate to explain the motor adaptation behavior during the early phase of learning. Recent studies investigating the active regulation of motor variability, the presence of suboptimal inference, and model-free learning have challenged some of the traditional viewpoints on the sensorimotor learning mechanism. As a result, it may be necessary to develop a computational framework that can account for these new phenomena. Here, we develop a novel theory of motor learning, based on model-free adaptive optimal control, which can bypass some of the difficulties in existing theories. This new theory is based on our recently developed adaptive dynamic programming (ADP) and robust ADP (RADP) methods and is especially useful for accounting for motor learning behavior when an internal model is inaccurate or unavailable. Our preliminary computational results are in line with experimental observations reported in the literature and can account for some phenomena that are inexplicable using existing models.
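As a purely illustrative sketch of the model-free adaptive dynamic programming (ADP) idea named in the abstract, the Python snippet below runs Q-function policy iteration on a linear-quadratic regulation problem: a feedback gain is evaluated and improved directly from measured input-state data, without identifying an internal model. This is not the paper's RADP algorithm; the toy double-integrator plant, cost weights, initial gain, exploration-noise level, and helper names (svec, unsvec) are all assumptions introduced only for this example.

import numpy as np

# Toy plant used ONLY to generate data; the learning loop never reads A or B.
# All names and values here are illustrative assumptions, not taken from the paper.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)           # state cost weight
R = np.array([[0.1]])   # control cost weight
n, m = 2, 1

def svec(z):
    # Quadratic features: all monomials z_i * z_j with i <= j.
    return np.array([z[i] * z[j] for i in range(len(z)) for j in range(i, len(z))])

def unsvec(theta, d):
    # Rebuild the symmetric matrix H whose quadratic form matches svec(z) @ theta.
    H = np.zeros((d, d))
    k = 0
    for i in range(d):
        for j in range(i, d):
            H[i, j] = H[j, i] = theta[k] if i == j else theta[k] / 2.0
            k += 1
    return H

rng = np.random.default_rng(0)
K = np.array([[1.0, 1.0]])   # assumed initial stabilizing policy (policy iteration needs an admissible start)

# Collect one batch of input-state data under the initial policy plus exploration noise.
data, x = [], np.array([1.0, -1.0])
for t in range(400):
    u = -K @ x + 0.5 * rng.standard_normal(m)
    cost = x @ Q @ x + u @ R @ u
    x_next = A @ x + B @ u
    data.append((x, u, cost, x_next))
    x = 2.0 * rng.standard_normal(n) if (t + 1) % 20 == 0 else x_next

# Policy iteration on the Q-function: evaluate the current gain from data, then improve it.
for it in range(8):
    Phi, c = [], []
    for xt, ut, ct, xn in data:
        z = np.concatenate([xt, ut])
        zn = np.concatenate([xn, -K @ xn])
        Phi.append(svec(z) - svec(zn))         # Bellman equation written in feature space
        c.append(ct)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    H = unsvec(theta, n + m)
    K = np.linalg.solve(H[n:, n:], H[n:, :n])  # policy improvement: K <- Huu^{-1} Hux
    print(f"iteration {it}: feedback gain K = {K.ravel()}")

Under these assumptions the printed gain settles after a few iterations near the LQR-optimal gain for the toy plant. The paper's RADP framework additionally addresses dynamic uncertainty, robustness, and continuous-time dynamics, none of which this sketch attempts to capture.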
Pages: 562-595
Number of pages: 34