Adversarial Attacks and Defenses for Deep-Learning-Based Unmanned Aerial Vehicles

Cited by: 122
Authors
Tian, Jiwei [1 ]
Wang, Buhong [2 ]
Guo, Rongxiao [2 ]
Wang, Zhen [2 ]
Cao, Kunrui [3 ]
Wang, Xiaodong [4 ]
Affiliations
[1] Air Force Engn Univ, ATC Nav Coll, Xian 710038, Shaanxi, Peoples R China
[2] Air Force Engn Univ, Informat & Nav Coll, Xian 710077, Shaanxi, Peoples R China
[3] Natl Univ Def Technol, Sch Informat & Commun, Xian 710106, Peoples R China
[4] Xiamen Univ, Tan Kan Kee Coll, Zhangzhou 361005, Fujian, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Navigation; Internet of Things; Training; Cameras; Security; Deep learning; Task analysis; Adversarial example; adversarial training; deep learning (DL); defensive distillation; unmanned aerial vehicle (UAV); EXAMPLES; ALGORITHMS; SYSTEMS;
DOI
10.1109/JIOT.2021.3111024
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
The introduction of deep learning (DL) technology can improve the performance of cyber-physical systems (CPSs) in many ways. However, it also brings new security issues. To tackle these challenges, this article explores the vulnerabilities of DL-based unmanned aerial vehicles (UAVs), which are typical CPSs. Although many previous works have reported adversarial attacks on DL models, few of them are concerned with safety-critical CPSs, especially the regression models used in such systems. In this article, we analyze the problem of adversarial attacks against DL-based UAVs and propose two adversarial attack methods against regression models in UAVs. The experiments demonstrate that both the proposed nontargeted and targeted attack methods can craft imperceptible adversarial images and pose a considerable threat to the navigation and control of UAVs. To address this problem, adversarial training and defensive distillation are further investigated and evaluated, both of which increase the robustness of DL models in UAVs. To our knowledge, this is the first study of adversarial attacks and defenses against DL-based UAVs, and it calls for more attention to the security and safety of such safety-critical applications.
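The record gives no implementation details beyond the abstract, so the sketch below is only illustrative of the general idea described there: a one-step, FGSM-style perturbation against a regression network (for example, a model mapping a camera frame to a steering command), with nontargeted and targeted variants, followed by a single adversarial-training step. The PyTorch calls are standard, but the function names, the MSE loss, the sign-based update, and the [0, 1] pixel range are assumptions of this sketch, not the paper's exact method.

import torch
import torch.nn.functional as F

def fgsm_regression_attack(model, image, label, epsilon=0.01, target=None):
    # One-step FGSM-style perturbation for a regression model (illustrative sketch).
    # Nontargeted: increase the error between the prediction and the true label.
    # Targeted: decrease the error between the prediction and an attacker-chosen value.
    image = image.clone().detach().requires_grad_(True)
    pred = model(image)                      # e.g., predicted steering angle
    if target is None:
        loss = F.mse_loss(pred, label)       # push the prediction away from the truth
        direction = 1.0
    else:
        loss = F.mse_loss(pred, target)      # pull the prediction toward the target
        direction = -1.0
    model.zero_grad()
    loss.backward()
    perturbation = direction * epsilon * image.grad.sign()
    adv_image = torch.clamp(image + perturbation, 0.0, 1.0)  # keep a valid pixel range
    return adv_image.detach()

def adversarial_training_step(model, optimizer, image, label, epsilon=0.01):
    # One adversarial-training step: fit the model on perturbed inputs (sketch).
    adv_image = fgsm_regression_attack(model, image, label, epsilon)
    optimizer.zero_grad()
    loss = F.mse_loss(model(adv_image), label)
    loss.backward()
    optimizer.step()
    return loss.item()

In such a sketch, epsilon would be kept small enough that the perturbation remains visually imperceptible, which is the property the abstract emphasizes; the paper's defensive distillation evaluation is not reproduced here.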
Pages: 22399-22409
Number of pages: 11