Self-Supervised Adversarial Training of Monocular Depth Estimation Against Physical-World Attacks

Times Cited: 0
Authors
Cheng, Zhiyuan [1 ]
Han, Cheng [2 ]
Liang, James [2 ]
Wang, Qifan [3 ]
Zhang, Xiangyu [1 ]
Liu, Dongfang [2 ]
Affiliations
[1] Purdue Univ, W Lafayette, IN 47907 USA
[2] Rochester Inst Technol, Rochester, NY 14623 USA
[3] Meta AI, Menlo Pk, CA 94025 USA
Source
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | 2024, Vol. 46, No. 12
Keywords
Adversarial training; adversarial robustness; monocular depth estimation; self-supervised learning
DOI
10.1109/TPAMI.2024.3412632
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Monocular Depth Estimation (MDE) plays a vital role in applications such as autonomous driving. However, various attacks target MDE models, with physical-world attacks posing significant threats to system security. Traditional adversarial training requires ground-truth labels and is therefore not directly applicable to MDE models trained without ground-truth depth. Existing self-supervised model hardening techniques (e.g., contrastive learning) overlook the domain knowledge of MDE and yield suboptimal performance. In this work, we introduce a novel self-supervised adversarial training approach for MDE models that leverages view synthesis and requires no ground-truth depth. We further strengthen robustness against physical-world attacks by incorporating an L0-norm-bounded perturbation during training. We compare our method with supervised learning-based and contrastive learning-based baselines designed for MDE. Experiments on two representative MDE networks demonstrate improved robustness against a variety of adversarial attacks, with minimal impact on benign performance.
Pages: 9084-9101
Number of Pages: 18
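
The abstract outlines the core training loop: the supervision signal comes from view synthesis (reconstructing the target view from a source view using predicted depth and relative pose) rather than depth labels, and robustness comes from training on inputs perturbed under an L0 budget. Below is a minimal PyTorch-style sketch of that min-max loop under those stated assumptions; it is not the authors' implementation, and warp_to_target, photometric_loss, and the hyperparameters k, steps, and alpha are hypothetical placeholders.

```python
# Hypothetical sketch of the idea described in the abstract: harden an
# MDE network without ground-truth depth by (1) crafting an L0-bounded
# perturbation that increases the view-synthesis (photometric) loss,
# then (2) updating the network to minimize that same loss on the
# perturbed input. All helper names and hyperparameters below are
# illustrative placeholders, not the paper's actual code.
import torch


def l0_project(delta, k):
    # Enforce an L0 budget: keep only the k largest-magnitude entries of
    # the perturbation per image (B, C, H, W) and zero out the rest.
    flat = delta.abs().flatten(1)
    kth = flat.topk(k, dim=1).values[:, -1].view(-1, 1, 1, 1)
    return delta * (delta.abs() >= kth).to(delta.dtype)


def warp_to_target(img_src, depth, pose):
    # Placeholder for differentiable inverse warping of the source view
    # into the target view using predicted depth and relative pose (in
    # practice: camera intrinsics + torch.nn.functional.grid_sample).
    # The zero-weight term keeps the autograd graph connected so the
    # sketch runs end to end.
    return img_src + 0.0 * depth.mean()


def photometric_loss(recon, target):
    # Plain L1 photometric error; self-supervised MDE losses are usually
    # richer (e.g., SSIM + L1 with auto-masking).
    return (recon - target).abs().mean()


def adv_train_step(depth_net, pose_net, img_t, img_src, opt,
                   k=1000, steps=5, alpha=0.01):
    # Inner maximization: craft an L0-bounded perturbation of the target
    # image that increases the view-synthesis error.
    delta = torch.zeros_like(img_t, requires_grad=True)
    for _ in range(steps):
        depth = depth_net(img_t + delta)
        pose = pose_net(img_t, img_src)
        loss = photometric_loss(warp_to_target(img_src, depth, pose), img_t)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = l0_project((delta + alpha * grad.sign()).detach(), k)
        delta.requires_grad_(True)
    # Outer minimization: update the depth network on the adversarial
    # input using the same label-free view-synthesis objective.
    depth = depth_net(img_t + delta.detach())
    pose = pose_net(img_t, img_src)
    loss = photometric_loss(warp_to_target(img_src, depth, pose), img_t)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

With a real inverse-warping function in place of the stub, the inner loop is a projected-gradient attack on the photometric error and the outer step is ordinary training on the resulting adversarial examples, so no ground-truth depth enters anywhere.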