Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks

Cited by: 9
Authors
Bouniot, Quentin [1 ]
Audigier, Romaric [1 ]
Loesch, Angelique [1 ]
Affiliation
[1] CEA, LIST, Vis & Learning Lab Scene Anal, PC 184, F-91191 Gif Sur Yvette, France
Source
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020) | 2020
DOI
10.1109/CVPRW50498.2020.00405
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Person re-identification (re-ID) is a key problem in smart supervision of camera networks. Over the past years, deep learning models have become the state of the art. However, deep neural networks have been shown to be vulnerable to adversarial examples, i.e., human-imperceptible perturbations. Extensively studied for closed-set image classification, this problem can also arise in open-set retrieval tasks. Indeed, recent work has shown that adversarial examples can also be generated for metric learning systems such as re-ID ones. These models remain vulnerable: when faced with adversarial examples, they fail to correctly recognize a person, which represents a security breach. These attacks are all the more dangerous because they are impossible for a human operator to detect. Attacking a metric consists in altering the distances between the features of an attacked image and those of reference images, i.e., guides. In this article, we investigate different possible attacks depending on the number and type of guides available. From this family of metric attacks, two particularly effective attacks stand out. The first, called Self Metric Attack, is a strong attack that does not need any image apart from the attacked image. The second, called Furthest-Negative Attack, makes full use of a set of images. The attacks are evaluated on commonly used datasets: Market1501 and DukeMTMC. Finally, we propose an efficient extension of the adversarial training protocol, adapted to metric learning, as a defense that increases the robustness of re-ID models.(1)
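For illustration, below is a minimal sketch of the general idea described in the abstract: perturbing an image so that its embedding is pushed away from a guide embedding, using only the clean image itself as the guide (in the spirit of the Self Metric Attack). It assumes a generic PyTorch embedding model and illustrative hyperparameters (eps, alpha, steps); it is a sketch under these assumptions, not the authors' exact algorithm.

import torch
import torch.nn.functional as F

def self_metric_attack(model, x, eps=8 / 255, alpha=2 / 255, steps=10):
    # Hypothetical helper: PGD-style perturbation of the batch of images `x`
    # that pushes each embedding away from the embedding of the clean image
    # itself, which serves as the only "guide".
    model.eval()
    with torch.no_grad():
        guide = F.normalize(model(x), dim=1)  # clean embeddings used as guides

    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        emb = F.normalize(model(x_adv), dim=1)
        similarity = F.cosine_similarity(emb, guide).mean()
        grad = torch.autograd.grad(similarity, x_adv)[0]
        with torch.no_grad():
            # Step against the similarity gradient: increase the metric distance.
            x_adv = x_adv - alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay inside the L-inf ball
            x_adv = x_adv.clamp(0, 1)                 # keep a valid image
    return x_adv.detach()

In practice this would target a re-ID backbone's feature extractor; an attack that exploits a set of guide images, such as the Furthest-Negative Attack mentioned above, would replace the single clean-image guide with embeddings from those images.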
Pages: 3450-3459
Page count: 10