Practical black-box adversarial attack on open-set recognition: Towards robust autonomous driving

Citations: 0
Authors
Yanfei Wang
Kai Zhang
Kejie Lu
Yun Xiong
Mi Wen
Affiliations
[1] Shanghai University of Electric Power,College of Computer Science and Technology
[2] University of Puerto Rico at Mayagüez,Department of Computer Science and Engineering
[3] Fudan University,College of Computer Science and Technology
[4] Shanghai Key Laboratory of Data Science
Source
Peer-to-Peer Networking and Applications | 2023 / Volume 16
Keywords
Black-box attack; Open-set recognition; Autonomous driving system; Adversarial example; Image classification;
DOI
Not available
Abstract
As an important method of image classification, Open-Set Recognition (OSR) has been gradually deployed in autonomous driving systems (ADSs) to detect unknown objects in the surrounding environment. To date, many researchers have demonstrated that existing OSR classifiers are heavily threatened by adversarial input images. Nevertheless, most existing attack approaches are white-box attacks, which assume that the attacker knows the internals of the target OSR model. Hence, these attack models cannot effectively attack ADSs that keep their models and data confidential. To facilitate the design of future generations of robust OSR classifiers for safer ADSs, we introduce a practical black-box adversarial attack. First, we simulate a real-world open-set environment through a reasonable dataset division. Second, we train a substitute model into which we incorporate dynamic convolution to improve the transferability of the adversarial data. Finally, we use the substitute model to generate adversarial data to attack the target model. To the best of the authors' knowledge, the proposed attack model is the first to utilize dynamic convolution to improve the transferability of adversarial data. To evaluate the proposed attack model, we conduct extensive experiments on four publicly available datasets. The numerical results show that the proposed black-box attack approach achieves an attack capability similar to that of white-box approaches. Specifically, on the German Traffic Sign Recognition Benchmark dataset, our model decreases the classification accuracy on known classes from 99.8% to 9.81% and the AUC for detecting unknown classes from 97.7% to 48.8%.
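The dynamic-convolution component the abstract refers to replaces a single fixed kernel with K candidate kernels that are mixed by input-dependent attention weights (a softmax over a small mapping of the globally pooled input), so the effective kernel changes per input. The following is a minimal NumPy sketch of that idea only; all class and variable names, shapes, and the scalar pooling-to-attention mapping are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def conv2d(x, w):
    # Valid-mode 2D cross-correlation of a single-channel image with kernel w.
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

class DynamicConv2d:
    """K candidate kernels aggregated by input-dependent attention,
    followed by one convolution with the aggregated kernel (sketch)."""
    def __init__(self, k=4, ksize=3, temperature=4.0):
        self.kernels = rng.standard_normal((k, ksize, ksize)) * 0.1
        self.attn_w = rng.standard_normal(k) * 0.1  # pooled scalar -> K logits
        self.attn_b = np.zeros(k)
        self.tau = temperature  # softens attention, as in dynamic convolution

    def __call__(self, x):
        pooled = x.mean()                            # global average pooling
        logits = self.attn_w * pooled + self.attn_b  # tiny attention head
        pi = softmax(logits / self.tau)              # weights over K kernels
        w = np.tensordot(pi, self.kernels, axes=1)   # aggregated kernel
        return conv2d(x, w), pi

layer = DynamicConv2d()
img = rng.standard_normal((8, 8))
y, pi = layer(img)  # y: feature map; pi: per-input kernel attention
```

Because the attention weights depend on the input, a substitute model built from such layers behaves like a soft ensemble of convolutional models, which is the intuition behind using it to improve the transferability of adversarial examples to an unseen target model.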
Pages: 295–311
Page count: 16