SayTap: Language to Quadrupedal Locomotion

Cited: 0
Authors
Tang, Yujin [1 ]
Yu, Wenhao [1 ]
Tan, Jie [1 ]
Zen, Heiga [1 ]
Faust, Aleksandra [1 ]
Harada, Tatsuya [2 ]
Affiliations
[1] Google DeepMind, London, England
[2] Univ Tokyo, Tokyo, Japan
Source
CONFERENCE ON ROBOT LEARNING, VOL 229 | 2023 / Vol. 229
Keywords
Large language model (LLM); Quadrupedal robots; Locomotion; Blind
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Large language models (LLMs) have demonstrated the potential to perform high-level planning. Yet, it remains a challenge for LLMs to comprehend low-level commands, such as joint angle targets or motor torques. This paper proposes an approach that uses foot contact patterns as an interface to bridge human commands in natural language and a locomotion controller that outputs these low-level commands. The result is an interactive system for quadrupedal robots that allows users to craft diverse locomotion behaviors flexibly. We contribute an LLM prompt design, a reward function, and a method to expose the controller to the feasible distribution of contact patterns. The outcome is a controller capable of achieving diverse locomotion patterns that can be transferred to real robot hardware. Compared with other design choices, the proposed approach achieves a success rate of more than 50% in predicting the correct contact patterns and can solve 10 more tasks out of a total of 30 tasks. (https://saytap.github.io)
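To make the abstract's contact-pattern interface concrete, the sketch below illustrates one plausible encoding: a binary matrix with one row per foot (FL, FR, RL, RR), where 1 means the foot is in contact with the ground at that timestep. The function name `trot_pattern`, the cycle parameters, and the exact matrix layout are illustrative assumptions, not the paper's implementation; in SayTap the LLM would emit such a pattern from a natural-language command, and the learned controller would track it.

```python
import numpy as np

def trot_pattern(cycle_len=10, num_cycles=2):
    """Build an illustrative trotting contact pattern (hypothetical encoding).

    Returns a (4, cycle_len * num_cycles) binary matrix; rows are the
    feet FL, FR, RL, RR, and 1 marks ground contact. In a trot, the
    diagonal pairs (FL, RR) and (FR, RL) alternate stance phases.
    """
    half = cycle_len // 2
    # Diagonal pair A (FL, RR) in stance for the first half of each cycle.
    phase_a = np.concatenate([np.ones(half), np.zeros(cycle_len - half)])
    # Diagonal pair B (FR, RL) in stance for the second half.
    phase_b = 1 - phase_a
    one_cycle = np.stack([phase_a, phase_b, phase_b, phase_a])  # FL, FR, RL, RR
    return np.tile(one_cycle, (1, num_cycles)).astype(int)

pattern = trot_pattern()
print(pattern.shape)  # (4, 20)
```

With this encoding, exactly two diagonal feet are in contact at every timestep, which is the defining property of a trot; other gaits (pace, bound, pronk) would correspond to different row phasings of the same matrix format.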
Pages: 15