On the Robustness of Metric Learning: An Adversarial Perspective

Cited by: 9
Authors
Huai, Mengdi [1 ]
Zheng, Tianhang [2 ]
Miao, Chenglin [3 ]
Yao, Liuyi [4 ]
Zhang, Aidong [1 ]
Affiliations
[1] Univ Virginia, Dept Comp Sci, 85 Engineers Way, Charlottesville, VA 22904 USA
[2] Univ Toronto, Dept Elect & Comp Engn, 10 Kings Coll Rd, Toronto, ON M5S 3G8, Canada
[3] Univ Georgia, Dept Comp Sci, Boyd Grad Studies Res Ctr, DW Brooks Dr, Athens, GA 30602 USA
[4] Alibaba Grp, 969 West Wen Yi Rd, Hangzhou 311121, Zhejiang, Peoples R China
Funding
US National Science Foundation
Keywords
Metric learning; robustness; adversarial perturbations;
DOI
10.1145/3502726
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Metric learning aims to automatically learn a distance metric from data so that the similarity between data instances is faithfully reflected, and its importance has long been recognized in many fields. An implicit assumption in existing metric learning work is that the learned models operate in a reliable and secure environment. However, the increasingly critical role of metric learning makes it susceptible to malicious attacks. To understand the performance of metric learning models in adversarial environments, in this article we study the robustness of metric learning to adversarial perturbations, i.e., imperceptible changes to the input data crafted by an attacker to fool a well-trained model. Unlike traditional classification models, however, metric learning models take instance pairs rather than individual instances as input, and a perturbation on one instance does not necessarily change the prediction for an instance pair, which makes the robustness of metric learning harder to study. To address this challenge, we first define pairwise robustness for metric learning, and then propose a novel projected gradient descent-based attack method (called AckMetric) to evaluate the robustness of metric learning models. To further explore the attacker's capability to change prediction results, we also propose a theoretical framework to derive an upper bound on the pairwise adversarial loss. Finally, we incorporate the derived bound into the training process of metric learning and design a novel defense method that makes the learned models more robust. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed methods.
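The projected gradient descent-based pairwise attack described in the abstract can be sketched as follows for the classical Mahalanobis-metric setting. The metric form, the loss direction, and all hyperparameter names here are illustrative assumptions, not the paper's exact AckMetric formulation: the attacker perturbs one instance of a pair inside an L-infinity ball to push the pair's distance across the similar/dissimilar decision threshold.

```python
import numpy as np

def mahalanobis(x1, x2, M):
    """Learned Mahalanobis distance d_M(x1, x2) = sqrt((x1-x2)^T M (x1-x2))."""
    diff = x1 - x2
    return np.sqrt(diff @ M @ diff)

def pgd_pair_attack(x1, x2, M, eps=0.1, alpha=0.02, steps=40,
                    pair_is_similar=True):
    """PGD-style pairwise attack sketch (assumes M symmetric positive definite).

    Perturbs x1 within an L_inf ball of radius eps. If the pair is
    predicted similar (small distance), the attack maximizes the
    distance to flip the prediction; otherwise it minimizes it.
    """
    sign = 1.0 if pair_is_similar else -1.0
    delta = np.zeros_like(x1)
    for _ in range(steps):
        diff = (x1 + delta) - x2
        d = np.sqrt(diff @ M @ diff)
        # Analytic gradient of d_M w.r.t. delta (M symmetric): M diff / d
        grad = (M @ diff) / max(d, 1e-12)
        # Signed ascent/descent step, then projection back into the ball
        delta = np.clip(delta + alpha * sign * np.sign(grad), -eps, eps)
    return x1 + delta
```

Note how the pairwise setting differs from a classifier attack: the objective is a function of both instances, and the perturbation succeeds only if it moves the pair's distance across the decision threshold, not merely if it degrades a single instance's embedding.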
Pages: 25