A Reinforcement Learning Approach for Global Navigation Satellite System Spoofing Attack Detection in Autonomous Vehicles

Cited by: 10
Authors
Dasgupta, Sagar [1 ]
Ghosh, Tonmoy [2 ]
Rahman, Mizanur [1 ]
Affiliations
[1] Univ Alabama, Dept Civil Construct & Environm Engn, Tuscaloosa, AL 35487 USA
[2] Univ Alabama, Dept Elect & Comp Engn, Tuscaloosa, AL USA
Funding
U.S. National Science Foundation
Keywords
data and data science; geographic information science; geographic information systems; cybersecurity; operations; planning and analysis; environmental analysis and ecology; reinforcement learning; in-vehicle; intrusion detection
DOI
10.1177/03611981221095509
Chinese Library Classification (CLC)
TU [Building Science]
Discipline Classification Code
0813
Abstract
A resilient positioning, navigation, and timing (PNT) system is a necessity for the robust navigation of autonomous vehicles (AVs). A global navigation satellite system (GNSS) provides satellite-based PNT services. However, a spoofer can tamper with the authentic GNSS signal and transmit false position information to an AV. Therefore, an AV must be able to detect spoofing attacks on its PNT receiver in real time, so that the end-user (the AV in this case) can navigate safely even when the GNSS is compromised. This paper develops a deep reinforcement learning (RL)-based turn-by-turn spoofing attack detection method using low-cost in-vehicle sensor data. We used the Honda Research Institute Driving Dataset to create attack and non-attack datasets, developed a deep RL model, and evaluated the performance of the deep RL-based attack detection model. We find that the accuracy of the deep RL model ranges from 99.99% to 100% and the recall is 100%. Furthermore, the precision ranges from 93.44% to 100%, and the F1 score ranges from 96.61% to 100%. Overall, the analyses show that the RL model is effective for turn-by-turn spoofing attack detection.
Pages: 318-330
Page count: 13
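The abstract frames spoofing detection as a sequential decision problem over low-cost in-vehicle sensor data, scored with accuracy, precision, recall, and F1. The following is a minimal, hypothetical sketch of that idea only: a DQN-style agent reads a short window of sensor features, chooses between "no attack" and "attack flagged," and receives +1 for a correct call and -1 otherwise. The feature set, window length, reward design, network size, and the synthetic stand-in for the Honda Research Institute Driving Dataset are all illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a deep-RL turn-by-turn GNSS spoofing detector.
# Assumptions (not from the paper): sensor feature set, window length,
# reward design (+1 correct / -1 incorrect), and the DQN-style update
# are illustrative choices; the authors' actual design may differ.
import numpy as np
import torch
import torch.nn as nn

WINDOW = 10       # consecutive sensor samples per decision (assumed)
N_FEATURES = 4    # e.g., speed, yaw rate, steering angle, GNSS heading error (assumed)
N_ACTIONS = 2     # 0 = no attack, 1 = spoofing attack flagged


def make_synthetic_data(n=2000, seed=0):
    """Stand-in for windows derived from the Honda Research Institute
    Driving Dataset: random features with an offset injected into 'attack' windows."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, WINDOW, N_FEATURES)).astype(np.float32)
    y = rng.integers(0, 2, size=n)
    X[y == 1, :, -1] += 2.0  # attack windows show a larger GNSS heading error
    return X, y


class QNet(nn.Module):
    """Small feed-forward Q-network over a flattened sensor window."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW * N_FEATURES, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x.flatten(start_dim=1))


def train(qnet, X, y, epochs=5, eps=0.1, lr=1e-3):
    """Contextual-bandit-style DQN update: one decision per window,
    reward +1 for a correct attack/no-attack call, -1 otherwise."""
    opt = torch.optim.Adam(qnet.parameters(), lr=lr)
    X_t = torch.from_numpy(X)
    for _ in range(epochs):
        for i in np.random.permutation(len(y)):
            q = qnet(X_t[i:i + 1])                        # Q-values for both actions
            a = int(np.random.randint(N_ACTIONS)) if np.random.rand() < eps \
                else int(q.argmax().item())               # epsilon-greedy action
            r = 1.0 if a == y[i] else -1.0                # assumed reward design
            loss = (q[0, a] - r) ** 2                     # regress Q(s, a) toward reward
            opt.zero_grad()
            loss.backward()
            opt.step()


def evaluate(qnet, X, y):
    """Accuracy / precision / recall / F1, the metrics quoted in the abstract."""
    with torch.no_grad():
        pred = qnet(torch.from_numpy(X)).argmax(dim=1).numpy()
    tp = int(((pred == 1) & (y == 1)).sum())
    fp = int(((pred == 1) & (y == 0)).sum())
    fn = int(((pred == 0) & (y == 1)).sum())
    acc = float((pred == y).mean())
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1


if __name__ == "__main__":
    X, y = make_synthetic_data()
    split = int(0.8 * len(y))
    model = QNet()
    train(model, X[:split], y[:split])
    print("acc/prec/rec/f1:", evaluate(model, X[split:], y[split:]))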