Using Reed-Muller Codes for Classification with Rejection and Recovery

Cited by: 0
Authors
Fentham, Daniel [1 ]
Parker, David [2 ]
Ryan, Mark [1 ]
Affiliations
[1] Univ Birmingham, Sch Comp Sci, Birmingham, W Midlands, England
[2] Univ Oxford, Dept Comp Sci, Oxford, England
Source
FOUNDATIONS AND PRACTICE OF SECURITY, PT I, FPS 2023 | 2024, Vol. 14551
Keywords
Deep Neural Networks; Adversarial Examples; Classification-with-rejection; Error-correction codes; ML Security
DOI
10.1007/978-3-031-57537-2_3
CLC Number
TP [Automation and computer technology]
Discipline Code
0812
Abstract
When deploying classifiers in the real world, users expect them to respond to inputs appropriately. However, traditional classifiers are not equipped to handle inputs which lie far from the distribution they were trained on. Malicious actors can exploit this defect by crafting adversarial perturbations designed to cause the classifier to give an incorrect output. Classification-with-rejection methods attempt to solve this problem by allowing networks to refuse to classify an input for which they have low confidence. This works well for strongly adversarial examples, but it also leads to the rejection of weakly perturbed images which intuitively could be correctly classified. To address these issues, we propose Reed-Muller Aggregation Networks (RMAggNet), a classifier inspired by Reed-Muller error-correction codes which can both correct and reject inputs. This paper shows that RMAggNet can minimise incorrectness while maintaining good correctness over multiple adversarial attacks at different perturbation budgets by leveraging the ability to correct errors in the classification process. This provides an alternative classification-with-rejection method which can reduce the amount of additional processing in situations where a small number of incorrect classifications are permissible.
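The core idea of correcting and rejecting via an error-correction code can be illustrated with a minimal sketch. This is not the authors' RMAggNet implementation; it is a generic error-correcting-output-code decoder under assumed toy codewords: each class is assigned a binary codeword, an ensemble of binary classifiers predicts one bit each, and decoding either corrects a small number of bit errors or rejects the input when the predicted word is too far from every codeword.

```python
# Sketch of classification-with-rejection via an error-correction code.
# The codebook, classes, and max_correct threshold below are hypothetical
# illustrations, not values from the paper.

def hamming(a, b):
    """Number of positions in which two equal-length bit tuples differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(predicted_bits, codebook, max_correct=1):
    """Map an ensemble's predicted bit string to a class label.

    Returns the class whose codeword is nearest in Hamming distance,
    or None (reject) if no codeword lies within max_correct bit flips.
    """
    best_class, best_dist = None, len(predicted_bits) + 1
    for cls, word in codebook.items():
        d = hamming(predicted_bits, word)
        if d < best_dist:
            best_class, best_dist = cls, d
    return best_class if best_dist <= max_correct else None

# Toy codebook: three classes, 5-bit codewords with minimum pairwise
# Hamming distance 3, so any single bit error is uniquely correctable.
codebook = {
    "cat": (0, 0, 0, 0, 0),
    "dog": (1, 1, 1, 0, 0),
    "fox": (0, 0, 1, 1, 1),
}

print(decode((0, 0, 0, 0, 1), codebook))  # one flip from "cat": corrected
print(decode((1, 0, 0, 1, 0), codebook))  # far from all codewords: rejected
```

The rejection threshold trades off correctness against incorrectness: raising `max_correct` accepts (and corrects) more perturbed inputs, while lowering it rejects anything the ensemble does not agree on almost exactly.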
Pages: 36-52
Page count: 17