AirCapRL: Autonomous Aerial Human Motion Capture Using Deep Reinforcement Learning

Cited by: 16
Authors
Tallamraju, Rahul [1]
Saini, Nitin [1]
Bonetto, Elia [1]
Pabst, Michael [1]
Liu, Yu Tang [1]
Black, Michael J. [1]
Ahmad, Aamir [1,2]
Affiliations
[1] Max Planck Institute for Intelligent Systems, Tübingen, Germany
[2] University of Stuttgart, Department of Aerospace Engineering and Geodesy, Stuttgart, Germany
Keywords
Reinforcement learning; aerial systems; perception and autonomy; multi-robot systems; visual tracking
DOI
10.1109/LRA.2020.3013906
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
In this letter, we introduce a deep reinforcement learning (DRL) based multi-robot formation controller for the task of autonomous aerial human motion capture (MoCap). We focus on vision-based MoCap, where the objective is to estimate the trajectory of body pose and shape of a single moving person using multiple micro aerial vehicles. State-of-the-art solutions to this problem are based on classical control methods, which depend on hand-crafted system and observation models. Such models are difficult to derive and to generalize across different systems. Moreover, the non-linearities and non-convexities of these models lead to sub-optimal controls. In our work, we formulate this problem as a sequential decision-making task to achieve the vision-based motion capture objectives, and solve it using a deep neural network-based RL method. We leverage proximal policy optimization (PPO) to train a stochastic decentralized control policy for formation control. The neural network is trained in a parallelized setup in synthetic environments. We performed extensive simulation experiments to validate our approach. Finally, real-robot experiments demonstrate that our policies generalize to real-world conditions.
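The abstract's central algorithmic choice is proximal policy optimization (PPO). As a reading aid, below is a minimal NumPy sketch of PPO's clipped surrogate objective, the quantity such a policy is trained to maximize; the function name, hyperparameter value, and toy inputs are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate loss from PPO (Schulman et al., 2017),
    returned negated so a gradient minimizer can be applied.

    Inputs are 1-D arrays over a batch of (state, action) samples:
    log-probabilities under the current policy and under the policy
    that collected the data, plus advantage estimates.
    """
    ratio = np.exp(logp_new - logp_old)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # The pointwise minimum keeps updates pessimistic: large policy
    # shifts cannot inflate the objective beyond the clipped value.
    return -np.mean(np.minimum(unclipped, clipped))

# Sanity check: with identical policies the ratio is 1 everywhere,
# so the loss reduces to the negative mean advantage.
logp = np.log(np.array([0.3, 0.5, 0.2]))
adv = np.array([1.0, -0.5, 0.2])
assert np.isclose(ppo_clip_loss(logp, logp, adv), -np.mean(adv))
```

In the letter's setting, each micro aerial vehicle evaluates a decentralized policy trained against such an objective; the sketch covers only the surrogate loss, not the actor-critic networks, the MoCap-specific reward, or the parallelized synthetic-environment training the abstract describes.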
Pages: 6678 - 6685
Page count: 8
Related Papers
50 records in total
  • [1] Visualization of Deep Reinforcement Autonomous Aerial Mobility Learning Simulations
    Lee, Gusang
    Yun, Won Joon
    Jung, Soyi
    Kim, Joongheon
    Kim, Jae-Hyun
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM WKSHPS 2021), 2021,
  • [2] Autonomous control of unmanned aerial vehicle for chemical detection using deep reinforcement learning
    Byun, Hyung Joon
    Nam, Hyunwoo
    ELECTRONICS LETTERS, 2022, 58 (11) : 423 - 425
  • [3] Markerless Outdoor Human Motion Capture Using Multiple Autonomous Micro Aerial Vehicles
    Saini, Nitin
    Price, Eric
    Tallamraju, Rahul
    Enficiaud, Raffi
    Ludwig, Roman
    Martinovic, Igor
    Ahmad, Aamir
    Black, Michael J.
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 823 - 832
  • [4] Survey of Deep Reinforcement Learning for Motion Planning of Autonomous Vehicles
    Aradi, Szilard
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (02) : 740 - 759
  • [5] Autonomous Motion Control Using Deep Reinforcement Learning for Exploration Robot on Rough Terrain
    Wang, Zijie
    Ji, Yonghoon
    Fujii, Hiromitsu
    Kono, Hitoshi
    2022 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII 2022), 2022, : 1021 - 1025
  • [6] Recognition of human motion with deep reinforcement learning
    Seok, W.
    Park, C.
    IEIE TRANSACTIONS ON SMART PROCESSING AND COMPUTING, 2018, 7 (03) : 245 - 250
  • [7] Air Learning: a deep reinforcement learning gym for autonomous aerial robot visual navigation
    Krishnan, Srivatsan
    Boroujerdian, Behzad
    Fu, William
    Faust, Aleksandra
    Reddi, Vijay Janapa
    MACHINE LEARNING, 2021, 110 (09) : 2501 - 2540
  • [8] Human Motion Posture Detection Algorithm Using Deep Reinforcement Learning
    Qi, Limin
    Han, Yong
    MOBILE INFORMATION SYSTEMS, 2021, 2021
  • [9] Estimation on Human Motion Posture Using Improved Deep Reinforcement Learning
    Ma, Wenjing
    Zhao, Jianguang
    Zhu, Guangquan
    JOURNAL OF COMPUTERS (TAIWAN), 2023, 34 (04) : 97 - 110