Generalization analysis of adversarial pairwise learning

Cited by: 0
Authors
Wen, Wen [1]
Li, Han [1,2]
Wu, Rui [5]
Wu, Lingjuan [1]
Chen, Hong [1,2,3,4]
Affiliations
[1] Huazhong Agr Univ, Coll Informat, Wuhan 430070, Peoples R China
[2] Minist Educ, Engn Res Ctr Intelligent Technol Agr, Wuhan 430070, Peoples R China
[3] Huazhong Agr Univ, Shenzhen Inst Nutr & Hlth, Shenzhen 518000, Peoples R China
[4] Chinese Acad Agr Sci, Shenzhen Branch, Guangdong Lab Lingnan Modern Agr, Genome Anal Lab, Minist Agr, Agr Genom Inst Shenzhen, Shenzhen 518000, Peoples R China
[5] Horizon Robot, Beijing 100190, Peoples R China
Keywords
Adversarial pairwise learning; Perturbation attacks; Error analysis; Generalization bounds; Rates
DOI
10.1016/j.neunet.2024.106955
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Adversarial pairwise learning has become the predominant method for enhancing the discrimination ability of models against adversarial attacks, achieving tremendous success in various application fields. Despite this excellent empirical performance, the adversarial robustness and generalization of adversarial pairwise learning remain poorly understood from a theoretical perspective. This paper moves towards closing this gap by establishing high-probability generalization bounds. Our bounds apply generally to various models and pairwise learning tasks. We give application examples, with explicit bounds for adversarial bipartite ranking and adversarial metric learning, to illustrate how the theoretical results can be extended. Furthermore, we develop an optimistic generalization bound of order O(n^{-1}) in the sample size n by leveraging local Rademacher complexity. Our analysis provides meaningful theoretical guidance for improving adversarial robustness through feature size and regularization. Experimental results validate the theoretical findings.
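To fix notation for the setting the abstract describes, the following is a minimal sketch of a generic adversarial pairwise risk and its empirical counterpart; the symbols (hypothesis f, pairwise loss \ell, perturbation budget \epsilon, sample size n) are illustrative assumptions and need not match the paper's exact definitions.

% Generic adversarial pairwise risk (illustrative notation, not necessarily the paper's own):
% f is the hypothesis, \ell a pairwise loss, z = (x, y) and z' = (x', y') a pair of examples,
% and \delta, \delta' adversarial perturbations with norm at most \epsilon.
\[
  R_{\mathrm{adv}}(f)
  = \mathbb{E}_{z, z'}\!\left[
      \sup_{\|\delta\|\le\epsilon,\; \|\delta'\|\le\epsilon}
        \ell\bigl(f;\, (x+\delta,\, y),\, (x'+\delta',\, y')\bigr)
    \right],
\]
\[
  \widehat{R}_{\mathrm{adv}}(f)
  = \frac{1}{n(n-1)} \sum_{i \ne j}
      \sup_{\|\delta_i\|\le\epsilon,\; \|\delta_j\|\le\epsilon}
        \ell\bigl(f;\, (x_i+\delta_i,\, y_i),\, (x_j+\delta_j,\, y_j)\bigr).
\]

In this sketch, generalization bounds of the kind described in the abstract control the gap R_adv(f) - \widehat{R}_adv(f) uniformly over the hypothesis class, with the optimistic O(n^{-1}) rate obtained via local Rademacher complexity arguments.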
Pages: 15