Vision-Based Formation Control for an Outdoor UAV Swarm With Hierarchical Architecture

Cited by: 5
Authors
Ma, Liqun [1 ,2 ]
Meng, Dongyuan [1 ]
Huang, Xu [1 ]
Zhao, Shuaihe [1 ,2 ]
Affiliations
[1] Aerosp Shenzhou Aerial Vehicle Co Ltd, Tianjin 300457, Peoples R China
[2] Beijing Inst Technol, Sch Automat, Beijing 100081, Peoples R China
Keywords
Unmanned aerial vehicles; formation control; visual localization; deep learning; multi-object tracking; UNMANNED AERIAL VEHICLES; DRONE SWARMS; LOCALIZATION; TRACKING;
DOI
10.1109/ACCESS.2023.3296603
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Formation control of a UAV swarm is challenging in outdoor GNSS-denied environments because of the difficulty of relative positioning among the UAVs. This study proposes a vision-based formation control strategy that can be implemented in the absence of an external positioning system. A hierarchical architecture is constructed for the UAV swarm using a modified leader-follower strategy: the leader UAV derives and broadcasts the locations of the follower UAVs, while the follower UAVs compute their control inputs to achieve the desired swarm formation. Vision-based localization of the UAVs is accomplished using state-of-the-art deep learning algorithms, namely YOLOv7 and DeepSORT. An RflySim-based simulation was conducted to verify the feasibility of the concept, and it was validated in a real flight test with a swarm of five quadrotors. Results demonstrate the robustness of the vision-based UAV positioning framework, with a localization error within 0.3 m. Moreover, formation control without GNSS is achieved using a monocular camera and entry-level AI platforms implemented onboard, which can be extended to broader UAV swarm applications.
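The leader-follower scheme described in the abstract (leader broadcasts follower positions, followers drive themselves toward formation slots) can be sketched with a simple proportional controller. This is a hypothetical simplification for illustration, not the paper's actual controller; the gain, time step, and offsets are assumed values.

```python
import numpy as np

def follower_control(follower_pos, leader_pos, desired_offset, gain=1.0):
    """Proportional velocity command steering a follower toward its
    desired offset from the leader (simplified, assumed controller)."""
    target = leader_pos + desired_offset
    return gain * (target - follower_pos)

# Leader hovers at 10 m altitude; two followers hold a V formation.
leader = np.array([0.0, 0.0, 10.0])
offsets = [np.array([-3.0, -3.0, 0.0]), np.array([3.0, -3.0, 0.0])]
followers = [np.array([-5.0, -1.0, 9.0]), np.array([4.0, -4.0, 11.0])]

# Each step: the leader broadcasts the (vision-derived) follower
# positions, and each follower integrates its velocity command.
dt = 0.1
for _ in range(200):
    followers = [p + dt * follower_control(p, leader, o)
                 for p, o in zip(followers, offsets)]
```

With a positive gain, the position error decays geometrically each step, so the followers converge to their formation slots relative to the leader.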
Pages: 75134-75151
Page count: 18
References (55 total)
  • [1] Balamurugan G. 2016 International Conference on Signal Processing, Communication, Power and Embedded System (SCOPES), 2016, p. 198. DOI: 10.1109/SCOPES.2016.7955787
  • [2] Bayindir, Levent. A review of swarm robotics tasks. Neurocomputing, 2016, 172: 292-321.
  • [3] Bewley A. IEEE International Conference on Image Processing (ICIP), 2016, p. 3464. DOI: 10.1109/ICIP.2016.7533003
  • [4] Chai, Junyi; Zeng, Hao; Li, Anming; Ngai, Eric W. T. Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Machine Learning with Applications, 2021, 6.
  • [5] Chen, Xi; Tang, Jun; Lao, Songyang. Review of Unmanned Aerial Vehicle Swarm Communication Architectures and Routing Protocols. Applied Sciences-Basel, 2020, 10(10).
  • [6] Ciaparrone, Gioele; Luque Sanchez, Francisco; Tabik, Siham; Troiano, Luigi; Tagliaferri, Roberto; Herrera, Francisco. Deep learning in video multi-object tracking: A survey. Neurocomputing, 2020, 381: 61-88.
  • [7] Couturier, Andy; Akhloufi, Moulay A. A review on absolute visual localization for UAV. Robotics and Autonomous Systems, 2021, 135.
  • [8] Dai, Xunhua; Ke, Chenxu; Quan, Quan; Cai, Kai-Yuan. RFlySim: Automatic test platform for UAV autopilot systems with FPGA-based hardware-in-the-loop simulations. Aerospace Science and Technology, 2021, 114.
  • [9] Daramouskas, I.; Patrinopoulou, N.; Meimetis, D.; Lappas, V.; Kostopoulos, V. A design and simulation of a target detection, tracking and localisation system for UAVs. 2022 30th Mediterranean Conference on Control and Automation (MED), 2022: 382-388.
  • [10] Dias D. IEEE International Conference on Robotics and Automation (ICRA), 2016, p. 1181. DOI: 10.1109/ICRA.2016.7487248