Adversarial Robustness under Long-Tailed Distribution

Cited by: 45
Authors
Wu, Tong [1 ,5 ]
Liu, Ziwei [2 ]
Huang, Qingqiu [3 ]
Wang, Yu [4 ]
Lin, Dahua [1 ,5 ,6 ]
Affiliations
[1] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[2] Nanyang Technol Univ, S Lab, Singapore, Singapore
[3] Huawei, Shenzhen, Peoples R China
[4] Tsinghua Univ, Beijing, Peoples R China
[5] SenseTime CUHK Joint Lab, Hong Kong, Peoples R China
[6] Ctr Perceptual & Interact Intelligence, Hong Kong, Peoples R China
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 | 2021
Keywords
DOI
10.1109/CVPR46437.2021.00855
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial robustness has attracted extensive studies recently by revealing the vulnerability and intrinsic characteristics of deep networks. However, existing works on adversarial robustness mainly focus on balanced datasets, while real-world data usually exhibits a long-tailed distribution. To push adversarial robustness towards more realistic scenarios, in this work we investigate the adversarial vulnerability as well as defense under long-tailed distributions. In particular, we first reveal the negative impacts induced by imbalanced data on both recognition performance and adversarial robustness, uncovering the intrinsic challenges of this problem. We then perform a systematic study on existing long-tailed recognition methods in conjunction with the adversarial training framework. Several valuable observations are obtained: 1) natural accuracy is relatively easy to improve, 2) fake gains of robust accuracy exist under unreliable evaluation, and 3) boundary error limits the promotion of robustness. Inspired by these observations, we propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant classifier, and data re-balancing via both margin engineering at the training stage and boundary adjustment during inference. Extensive experiments demonstrate the superiority of our approach over other state-of-the-art defense methods. To the best of our knowledge, we are the first to tackle adversarial robustness under long-tailed distributions, which we believe would be a significant step towards real-world robustness. Our code is available at: https://github.com/wutong16/Adversarial_Long-Tail.
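The abstract names two components of RoBal: a scale-invariant classifier and data re-balancing through a training-time margin plus an inference-time boundary adjustment. The following is a minimal sketch, not the authors' released code, of how a scale-invariant cosine classifier and a class-prior-based logit shift at inference could be realized in PyTorch; the scale value, the use of log class priors, the tau parameter, and the example class counts are illustrative assumptions rather than details taken from the paper.

# Minimal sketch (illustrative, not the authors' implementation) of a
# scale-invariant classifier and an inference-time boundary adjustment.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Scale-invariant classifier: logits depend only on the angle between
    the feature and each class weight, times a learnable scale."""

    def __init__(self, feat_dim: int, num_classes: int, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = nn.Parameter(torch.tensor(scale))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Normalize both features and class weights -> cosine-similarity logits.
        f = F.normalize(features, dim=1)
        w = F.normalize(self.weight, dim=1)
        return self.scale * f @ w.t()

def boundary_adjusted_logits(logits: torch.Tensor,
                             class_counts: torch.Tensor,
                             tau: float = 1.0) -> torch.Tensor:
    """Inference-time boundary adjustment (assumed logit-adjustment form):
    shift logits by the log class prior so decision boundaries move toward
    head classes, favoring tail classes."""
    prior = class_counts.float() / class_counts.sum()
    return logits - tau * torch.log(prior)

if __name__ == "__main__":
    torch.manual_seed(0)
    clf = CosineClassifier(feat_dim=128, num_classes=10)
    feats = torch.randn(4, 128)  # stand-in for backbone features
    counts = torch.tensor([5000, 2000, 800, 400, 200, 100, 60, 40, 20, 10])
    preds = boundary_adjusted_logits(clf(feats), counts).argmax(dim=1)
    print(preds)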
Pages: 8655-8664
Page count: 10