Play the Imitation Game: Model Extraction Attack against Autonomous Driving Localization

Cited: 2
Authors
Zhang, Qifan [1 ]
Shen, Junjie [1 ]
Tan, Mingtian [2 ]
Zhou, Zhe [2 ]
Li, Zhou [1 ]
Chen, Qi Alfred [1 ]
Zhang, Haipeng [3 ]
Affiliations
[1] Univ Calif Irvine, Irvine, CA 92717 USA
[2] Fudan Univ, Shanghai, Peoples R China
[3] ShanghaiTech Univ, Shanghai, Peoples R China
Keywords
autonomous driving; localization; model extraction; KALMAN FILTER; IDENTIFICATION;
DOI
10.1145/3564625.3567977
CLC classification
TP [Automation technology; computer technology];
Discipline code
0812;
Abstract
The security of Autonomous Driving (AD) systems has recently been gaining the attention of researchers and the public. Given that AD companies have invested a huge amount of resources in developing their AD models, e.g., localization models, these models, and especially their parameters, are important intellectual property and deserve strong protection. In this work, we examine whether the confidentiality of production-grade Multi-Sensor Fusion (MSF) models, in particular the Error-State Kalman Filter (ESKF), can be breached by an outside adversary. We propose a new model extraction attack called TaskMaster that can infer the secret ESKF parameters under a black-box assumption. In essence, TaskMaster trains a substitutional ESKF model to recover the parameters by observing the input and output of the targeted AD system. To recover the parameters precisely, we combine a set of techniques, such as gradient-based optimization, search-space reduction, and multi-stage optimization. Evaluation results on a real-world vehicle sensor dataset show that TaskMaster is practical. For example, with 25 seconds of AD sensor data for training, the substitutional ESKF model reaches centimeter-level accuracy compared with the ground-truth model.
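The extraction idea summarized in the abstract can be illustrated with a toy sketch (this is not the paper's actual TaskMaster implementation): a scalar Kalman filter stands in for the ESKF, its unknown noise parameters `q` and `r` are hypothetical stand-ins for the secret filter parameters, and a substitute filter is fit by finite-difference gradient descent on an output-matching loss. Optimizing in log-space, which keeps the parameters positive, loosely mirrors the paper's search-space reduction.

```python
import math
import random

def run_kf(zs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter; returns the sequence of state estimates."""
    x, P = x0, p0
    outs = []
    for z in zs:
        P += q                   # predict: process noise inflates covariance
        K = P / (P + r)          # Kalman gain
        x = x + K * (z - x)      # measurement update
        P = (1.0 - K) * P
        outs.append(x)
    return outs

def loss(zs, target, q, r):
    """MSE between the substitute filter's outputs and the observed outputs."""
    est = run_kf(zs, q, r)
    return sum((a - b) ** 2 for a, b in zip(est, target)) / len(zs)

# --- synthesize a "victim" filter and observe its input/output pairs ---
random.seed(0)
Q_TRUE, R_TRUE = 0.5, 2.0
x_true, zs = 0.0, []
for _ in range(200):
    x_true += random.gauss(0.0, math.sqrt(Q_TRUE))
    zs.append(x_true + random.gauss(0.0, math.sqrt(R_TRUE)))
target = run_kf(zs, Q_TRUE, R_TRUE)

# --- fit the substitute filter by finite-difference gradient descent ---
lq, lr_ = 0.0, 0.0               # log-space params; start from q = r = 1
eps, step = 1e-4, 0.1
init_loss = loss(zs, target, math.exp(lq), math.exp(lr_))
for _ in range(300):
    base = loss(zs, target, math.exp(lq), math.exp(lr_))
    gq = (loss(zs, target, math.exp(lq + eps), math.exp(lr_)) - base) / eps
    gr = (loss(zs, target, math.exp(lq), math.exp(lr_ + eps)) - base) / eps
    lq -= step * gq
    lr_ -= step * gr
final_loss = loss(zs, target, math.exp(lq), math.exp(lr_))
```

In this scalar toy, the steady-state Kalman gain depends only on the ratio q/r, so only that ratio is recoverable from output observations alone; the real multi-dimensional ESKF setting studied in the paper is far more structured, which is why the authors need the additional multi-stage optimization techniques.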
Pages: 56-70
Page count: 15
Related papers
35 items in total
  • [1] Towards Autonomous Driving Model Resistant to Adversarial Attack
    Shibly, Kabid Hassan
    Hossain, Md Delwar
    Inoue, Hiroyuki
    Taenaka, Yuzo
    Kadobayashi, Youki
    APPLIED ARTIFICIAL INTELLIGENCE, 2023, 37 (01)
  • [2] Hierarchical Model-Based Imitation Learning for Planning in Autonomous Driving
    Bronstein, Eli
    Palatucci, Mark
    Notz, Dominik
    White, Brandyn
    Kuefler, Alex
    Lu, Yiren
    Paul, Supratik
    Nikdel, Payam
    Mougin, Paul
    Chen, Hongge
    Fu, Justin
    Abrams, Austin
    Shah, Punit
    Racah, Evan
    Frenkel, Benjamin
    Whiteson, Shimon
    Anguelov, Dragomir
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 8652 - 8659
  • [3] Locality-Based Action-Poisoning Attack against the Continuous Control of an Autonomous Driving Model
    An, Yoonsoo
    Yang, Wonseok
    Choi, Daeseon
    PROCESSES, 2024, 12 (02)
  • [4] Autonomous Driving Model Defense Study on Hijacking Adversarial Attack
    Shibly, Kabid Hassan
    Hossain, Md Delwar
    Inoue, Hiroyuki
    Taenaka, Yuzo
    Kadobayashi, Youki
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT IV, 2022, 13532 : 546 - 557
  • [5] Backdoor Attack Against Deep Learning-Based Autonomous Driving with Fogging
    Liu, Jianming
    Luo, Li
    Wang, Xueyan
    ARTIFICIAL INTELLIGENCE AND ROBOTICS, ISAIR 2022, PT II, 2022, 1701 : 247 - 256
  • [6] BadLiDet: A Simple Backdoor Attack against LiDAR Object Detection in Autonomous Driving
    Li, Shuai
    Wen, Yu
    Wang, Huiying
    Cheng, Xu
    2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024, : 99 - 108
  • [7] Anomaly Detection and Secure Position Estimation Against GPS Spoofing Attack: A Security-Critical Study of Localization in Autonomous Driving
    Chen, Qingming
    Li, Guoqiang
    Liu, Peng
    Wang, Zhenpo
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2025, 74 (01) : 87 - 99
  • [8] GAME: Generative-Based Adaptive Model Extraction Attack
    Xie, Yi
    Huang, Mengdie
    Zhang, Xiaoyu
    Dong, Changyu
    Susilo, Willy
    Chen, Xiaofeng
    COMPUTER SECURITY - ESORICS 2022, PT I, 2022, 13554 : 570 - 588
  • [9] Invisible DNN Watermarking Against Model Extraction Attack
    Xi, Zuping
    Qu, Zuomin
    Lu, Wei
    Luo, Xiangyang
    Cao, Xiaochun
    IEEE TRANSACTIONS ON CYBERNETICS, 2025, 55 (02) : 800 - 811
  • [10] Bandit-based data poisoning attack against federated learning for autonomous driving models
    Wang, Shuo
    Li, Qianmu
    Cui, Zhiyong
    Hou, Jun
    Huang, Chanying
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 227