Practical black-box adversarial attack on open-set recognition: Towards robust autonomous driving

Cited by: 1
Authors
Wang, Yanfei [1 ]
Zhang, Kai [1 ]
Lu, Kejie [2 ]
Xiong, Yun [3 ,4 ]
Wen, Mi [1 ]
Affiliations
[1] Shanghai Univ Elect Power, Coll Comp Sci & Technol, Shanghai, Peoples R China
[2] Univ Puerto Rico, Dept Comp Sci & Engn, Mayaguez, PR USA
[3] Fudan Univ, Coll Comp Sci & Technol, Shanghai, Peoples R China
[4] Shanghai Key Lab Data Sci, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Black-box attack; Open-set recognition; Autonomous driving system; Adversarial example; Image classification;
DOI
10.1007/s12083-022-01390-9
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Classification Code
0812;
Abstract
As an important image classification method, Open-Set Recognition (OSR) has been gradually deployed in autonomous driving systems (ADSs) to perceive surrounding environments that contain unknown objects. To date, many researchers have demonstrated that existing OSR classifiers are heavily threatened by adversarial input images. Nevertheless, most existing attack approaches are white-box attacks, which assume that the attacker knows the internals of the target OSR model. Hence, these attack models cannot effectively attack ADSs that keep their models and data confidential. To facilitate the design of future generations of robust OSR classifiers for safer ADSs, we introduce a practical black-box adversarial attack. First, we simulate a real-world open-set environment through a reasonable dataset division. Second, we train a substitute model, into which we incorporate dynamic convolution to improve the transferability of the adversarial data. Finally, we use the substitute model to generate adversarial data with which to attack the target model. To the best of the authors' knowledge, the proposed attack model is the first to utilize dynamic convolution to improve the transferability of adversarial data. To evaluate the proposed attack model, we conduct extensive experiments on four publicly available datasets. The numerical results show that the proposed black-box attack has an attack capability similar to that of white-box approaches. Specifically, on the German Traffic Sign Recognition Benchmark dataset, our model decreases the classification accuracy on known classes from 99.8% to 9.81% and decreases the AUC for detecting unknown classes from 97.7% to 48.8%.
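To make the pipeline concrete, below is a minimal sketch, assuming PyTorch, of the two components the abstract names: a dynamic convolution layer for the substitute model (an input-conditioned, attention-weighted mixture of K parallel kernels) and a gradient-based generation step on that substitute. This is an illustration only, not the paper's implementation: `DynamicConv2d`, `fgsm_transfer_attack`, and all parameter values are hypothetical names and choices, and plain FGSM stands in for whatever attack the paper actually uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Dynamic convolution: an input-conditioned, attention-weighted
    mixture of K parallel kernels (hypothetical sketch)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4,
                 temperature=30.0):
        super().__init__()
        self.temperature = temperature  # high temperature -> near-uniform mixing
        self.padding = kernel_size // 2
        # K candidate kernels; the mixture varies per input sample
        self.weight = nn.Parameter(
            0.02 * torch.randn(num_kernels, out_ch, in_ch,
                               kernel_size, kernel_size))
        # Squeeze-style attention producing one logit per candidate kernel
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, num_kernels))

    def forward(self, x):
        b, c, h, w = x.shape
        # Input-dependent mixing weights over the K kernels: (B, K)
        alpha = F.softmax(self.attn(x) / self.temperature, dim=1)
        k, oc, ic, kh, kw = self.weight.shape
        # Per-sample aggregated kernel: (B, out_ch, in_ch, kh, kw)
        agg = torch.einsum('bk,koihw->boihw', alpha, self.weight)
        # Grouped-conv trick: fold the batch into groups so each sample
        # is convolved with its own aggregated kernel
        out = F.conv2d(x.reshape(1, b * c, h, w),
                       agg.reshape(b * oc, ic, kh, kw),
                       padding=self.padding, groups=b)
        return out.reshape(b, oc, out.size(-2), out.size(-1))

def fgsm_transfer_attack(substitute, x, y, eps=8 / 255):
    """Craft adversarial images on the substitute model (plain FGSM as a
    stand-in for the paper's generation step) for transfer to the
    black-box target OSR model."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(substitute(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

One plausible reading of why this helps transferability: because the aggregated kernel changes with each input, gradients computed on the substitute are less tied to any single fixed set of filters, so the resulting perturbations are less specific to the substitute's own weights.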
Pages: 295-311
Page count: 17