On the Robustness of Metric Learning: An Adversarial Perspective

Cited by: 9
Authors
Huai, Mengdi [1 ]
Zheng, Tianhang [2 ]
Miao, Chenglin [3 ]
Yao, Liuyi [4 ]
Zhang, Aidong [1 ]
Affiliations
[1] Univ Virginia, Dept Comp Sci, 85 Engineers Way, Charlottesville, VA 22904 USA
[2] Univ Toronto, Dept Elect & Comp Engn, 10 Kings Coll Rd, Toronto, ON M5S 3G8, Canada
[3] Univ Georgia, Dept Comp Sci, Boyd Grad Studies Res Ctr, DW Brooks Dr, Athens, GA 30602 USA
[4] Alibaba Grp, 969 West Wen Yi Rd, Hangzhou 311121, Zhejiang, Peoples R China
Funding
US National Science Foundation;
Keywords
Metric learning; robustness; adversarial perturbations;
DOI
10.1145/3502726
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Metric learning aims to automatically learn a distance metric from data so that the true similarity between data instances is faithfully reflected, and its importance has long been recognized in many fields. An implicit assumption in existing metric learning work is that the learned models operate in a reliable and secure environment. However, the increasingly critical role of metric learning exposes it to the risk of being maliciously attacked. To better understand the performance of metric learning models in adversarial environments, in this article we study the robustness of metric learning to adversarial perturbations, i.e., imperceptible changes to the input data crafted by an attacker to fool a well-trained model. Unlike traditional classification models, however, metric learning models take instance pairs rather than individual instances as input, and a perturbation on one instance does not necessarily change the prediction for the pair, which makes the robustness of metric learning harder to study. To address this challenge, we first provide a definition of pairwise robustness for metric learning, and then propose a novel projected gradient descent-based attack method (called AckMetric) to evaluate the robustness of metric learning models. To further explore the attacker's capability to change prediction results, we also propose a theoretical framework that derives an upper bound on the pairwise adversarial loss. Finally, we incorporate the derived bound into the training process of metric learning and design a novel defense method that makes the learned models more robust. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed methods.
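To make the pairwise setting concrete, the following is a minimal sketch of a projected gradient descent attack on an instance pair, not the paper's AckMetric method: it assumes a plain squared Euclidean distance as the metric and perturbs only one instance of a "similar" pair, maximizing the pair distance while projecting the perturbation back onto an L-infinity ball of radius `eps` after every step. All function and parameter names here are illustrative.

```python
import numpy as np

def pgd_pairwise_attack(x1, x2, eps=0.1, alpha=0.02, steps=20):
    """PGD-style perturbation on one instance of a pair.

    Ascends the squared Euclidean distance d(x1 + delta, x2) so that a
    pair predicted "similar" may flip to "dissimilar" under the metric.
    After each ascent step, delta is projected back onto the
    L-infinity ball of radius eps around zero.
    """
    delta = np.zeros_like(x1)
    for _ in range(steps):
        # Gradient of ||(x1 + delta) - x2||^2 with respect to delta.
        grad = 2.0 * ((x1 + delta) - x2)
        delta = delta + alpha * np.sign(grad)  # signed ascent step
        delta = np.clip(delta, -eps, eps)      # projection onto eps-ball
    return x1 + delta

# Usage: a nearly identical ("similar") pair is pushed apart,
# while the perturbation on x1 stays within the eps budget.
x1 = np.array([0.5, 0.5])
x2 = np.array([0.52, 0.48])
x1_adv = pgd_pairwise_attack(x1, x2)
d_before = np.sum((x1 - x2) ** 2)
d_after = np.sum((x1_adv - x2) ** 2)
```

With a learned Mahalanobis-style metric, only the gradient line changes; the projection step, which keeps the perturbation imperceptible, is the same idea the abstract's pairwise robustness definition constrains.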
Pages: 25
Related Papers
73 references total