Vision-Based Reactive Temporal Logic Motion Planning for Quadruped Robots in Unstructured Dynamic Environments

Cited by: 8
Authors
Zhou, Zhangli [1 ]
Chen, Ziyang [1 ]
Cai, Mingyu [2 ]
Li, Zhijun [1 ]
Kan, Zhen [1 ]
Su, Chun-Yi [3 ]
Affiliations
[1] Univ Sci & Technol China, Dept Automat, Hefei 230026, Peoples R China
[2] Univ Calif Riverside, Dept Mech Engn, Riverside, CA 92521 USA
[3] Taizhou Univ, Sch Intelligent Mfg, Taizhou 318000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computer vision; formal methods in automation and robotics; linear temporal logic (LTL); online motion planning; quadruped robot;
DOI
10.1109/TIE.2023.3299048
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Temporal logic-based motion planning has been extensively studied to address complex robotic tasks. However, existing works primarily focus on static environments or assume that the robot has full observation of the environment. This limits their practical applicability, since real-world environments are often dynamic and robots may have only partial observations. To tackle these issues, this study proposes a framework for vision-based reactive temporal logic motion planning (V-RTLMP) for robots integrated with LiDAR sensing. The V-RTLMP framework is designed to perform high-level linear temporal logic (LTL) tasks in unstructured dynamic environments. It comprises two modules: offline preplanning and online reactive planning. Given LTL specifications, the preplanning phase generates a reference trajectory over the continuous workspace via sampling-based methods, using prior environmental knowledge. The online reactive module dynamically adjusts the robot's trajectory based on real-time visual perception to adapt to environmental changes. Extensive numerical simulations and real-world experiments on a quadruped robot demonstrate the effectiveness of the proposed vision-based reactive motion planning framework.
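
To give a concrete sense of the two-module pipeline described in the abstract, the sketch below is a deliberately simplified, hypothetical Python illustration; the paper itself publishes no code. The offline phase grows a sampling-based tree toward an ordered sequence of goal regions (standing in for a simple sequencing LTL task), and the online phase nudges the resulting waypoints away from obstacles perceived at run time. All function names, the 10x10 workspace, circular obstacles, and the potential-field-style correction are assumptions for illustration, not the authors' V-RTLMP implementation.

```python
# Illustrative sketch only: the paper publishes no code, so every name below
# (sample_reference_path, reactive_step, the 10x10 workspace, obstacle shapes)
# is hypothetical, and the LTL task is reduced to an ordered list of goal
# regions such as a sequencing formula F(a & F b) would induce.
import math
import random


def collision_free(p, q, obstacles, resolution=0.1):
    """Check the straight segment p -> q against circular obstacles (x, y, r)."""
    n = max(1, int(math.dist(p, q) / resolution))
    for i in range(n + 1):
        t = i / n
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if any(math.dist((x, y), (ox, oy)) <= r for ox, oy, r in obstacles):
            return False
    return True


def sample_reference_path(start, goals, obstacles, n_iters=2000, step=0.5):
    """Offline preplanning: grow an RRT-like tree toward each goal region in
    turn, using prior (static) knowledge of the obstacles."""
    path = [start]
    for goal in goals:
        tree = {path[-1]: None}  # child -> parent map rooted at the last waypoint
        reached = None
        for _ in range(n_iters):
            # Bias 20% of the samples toward the current goal region.
            target = goal if random.random() < 0.2 else (
                random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
            nearest = min(tree, key=lambda node: math.dist(node, target))
            heading = math.atan2(target[1] - nearest[1], target[0] - nearest[0])
            new = (nearest[0] + step * math.cos(heading),
                   nearest[1] + step * math.sin(heading))
            if collision_free(nearest, new, obstacles):
                tree[new] = nearest
                if math.dist(new, goal) < step:
                    reached = new
                    break
        if reached is None:
            raise RuntimeError("offline sampling failed to reach a goal region")
        # Backtrack to the root and append the segment (root excluded: it is
        # already the last element of `path`).
        segment = []
        node = reached
        while tree[node] is not None:
            segment.append(node)
            node = tree[node]
        path.extend(reversed(segment))
    return path


def reactive_step(waypoint, seen_obstacles, safe_dist=0.8):
    """Online reactive module (simplified): push a commanded waypoint away from
    obstacles detected by onboard perception at run time."""
    wx, wy = waypoint
    for ox, oy, r in seen_obstacles:
        d = math.dist((wx, wy), (ox, oy))
        if d < r + safe_dist:
            gain = (r + safe_dist - d) / max(d, 1e-6)
            wx += (wx - ox) * gain
            wy += (wy - oy) * gain
    return (wx, wy)


if __name__ == "__main__":
    static_obstacles = [(5.0, 5.0, 1.0)]
    reference = sample_reference_path(
        (1.0, 1.0), goals=[(8.0, 2.0), (8.0, 8.0)], obstacles=static_obstacles)
    # A newly perceived obstacle bends the commanded waypoints online.
    perceived = [(8.0, 5.0, 0.7)]
    commands = [reactive_step(w, perceived) for w in reference[1:]]
    print(f"{len(reference)} reference waypoints, first command: {commands[0]}")
```

In the paper's framework the offline product is a reference trajectory satisfying the full LTL specification and the online module reacts to visual perception; the sketch keeps only that offline/online split and replaces both components with toy stand-ins.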
Pages: 5983-5992
Number of pages: 10