Error-Correcting Output Codes with Ensemble Diversity for Robust Learning in Neural Networks

Cited by: 0
|
Authors
Song, Yang [1 ]
Kang, Qiyu [1 ]
Tay, Wee Peng [1 ]
Affiliations
[1] Nanyang Technol Univ, 50 Nanyang Ave, Singapore 639798, Singapore
Source
THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2021 / Vol. 35
Keywords
DESIGN;
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Though deep learning has been applied successfully in many scenarios, malicious inputs with human-imperceptible perturbations can make it vulnerable in real applications. This paper proposes an error-correcting neural network (ECNN) that combines a set of binary classifiers to combat adversarial examples in the multi-class classification problem. To build an ECNN, we propose to design a code matrix so that the minimum Hamming distance between any two rows (i.e., two codewords) and the minimum shared information distance between any two columns (i.e., two partitions of class labels) are simultaneously maximized. Maximizing row distances can increase the system fault tolerance, while maximizing column distances helps increase the diversity between binary classifiers. We propose an end-to-end training method for our ECNN, which allows further improvement of the diversity between binary classifiers. The end-to-end training renders our proposed ECNN different from the traditional error-correcting output code (ECOC) based methods that train binary classifiers independently. ECNN is complementary to other existing defense approaches such as adversarial training and can be applied in conjunction with them. We empirically demonstrate that our proposed ECNN is effective against the state-of-the-art white-box and black-box attacks on several datasets while maintaining good classification accuracy on normal examples.
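The code-matrix idea summarized in the abstract can be illustrated with a minimal sketch (not the authors' implementation): each class gets a row codeword, each column corresponds to one binary classifier, and a test input is assigned to the class whose codeword is nearest in Hamming distance to the vector of binary predictions. The matrix `M` and the helper names below are hypothetical, purely for illustration.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary vectors."""
    return int(np.sum(a != b))

def min_row_distance(M):
    """Minimum pairwise Hamming distance between rows (codewords).
    A code with minimum distance d corrects up to (d - 1) // 2 bit errors."""
    k = M.shape[0]
    return min(hamming(M[i], M[j]) for i in range(k) for j in range(i + 1, k))

def ecoc_decode(M, bits):
    """Decode to the class whose codeword is closest to the predicted bits."""
    return int(np.argmin([hamming(row, bits) for row in M]))

# Toy code matrix: 4 classes (rows) x 6 binary classifiers (columns).
M = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0],
])

d_min = min_row_distance(M)           # here d_min = 4, so 1 flipped bit is correctable
bits = np.array([0, 1, 1, 0, 1, 1])   # class 1's codeword with one classifier flipped
print(d_min, ecoc_decode(M, bits))    # → 4 1
```

Maximizing the minimum row distance (as the paper proposes) enlarges the number of binary-classifier errors the decoder can absorb, which is the fault-tolerance mechanism the abstract refers to; the column-distance criterion is a separate diversity objective not shown in this sketch.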
Pages: 9722 - 9729
Page count: 8
Related Papers
50 records in total
  • [1] Error-correcting codes and neural networks
    Manin, Yuri I.
    SELECTA MATHEMATICA-NEW SERIES, 2018, 24 (01): 521 - 530
  • [2] Active learning with error-correcting output codes
    Gu, Shilin
    Cai, Yang
    Shan, Jincheng
    Hou, Chenping
    NEUROCOMPUTING, 2019, 364: 182 - 191
  • [3] Error-correcting output codes based ensemble feature extraction
    Zhong, Guoqiang
    Liu, Cheng-Lin
    PATTERN RECOGNITION, 2013, 46 (04): 1091 - 1100
  • [4] Learning error-correcting output codes from data
    Alpaydin, E
    Mayoraz, E
    NINTH INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS (ICANN99), VOLS 1 AND 2, 1999, (470): 743 - 748
  • [5] Neural ensemble decoding for topological quantum error-correcting codes
    Sheth, Milap
    Jafarzadeh, Sara Zafar
    Gheorghiu, Vlad
    PHYSICAL REVIEW A, 2020, 101 (03)
  • [6] Quantum error-correcting output codes
    Windridge, David
    Mengoni, Riccardo
    Nagarajan, Rajagopal
    INTERNATIONAL JOURNAL OF QUANTUM INFORMATION, 2018, 16 (08)
  • [7] Deep Error-Correcting Output Codes
    Wang, Li-Na
    Wei, Hongxu
    Zheng, Yuchen
    Dong, Junyu
    Zhong, Guoqiang
    ALGORITHMS, 2023, 16 (12)
  • [8] Recoding Error-Correcting Output Codes
    Escalera, Sergio
    Pujol, Oriol
    Radeva, Petia
    MULTIPLE CLASSIFIER SYSTEMS, PROCEEDINGS, 2009, 5519: 11 - +
  • [9] Efficient Error-correcting Output Codes for Adversarial Learning Robustness
    Wan, Li
    Alpcan, Tansu
    Viterbo, Emanuele
    Kuijper, Margreta
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022: 2345 - 2350