On the Defense Against Adversarial Examples Beyond the Visible Spectrum

Cited by: 0
Authors
Ortiz, Anthony [1 ]
Fuentes, Olac [1 ]
Rosario, Dalton [2 ]
Kiekintveld, Christopher [1 ]
Affiliations
[1] Univ Texas El Paso, Dept Comp Sci, El Paso, TX 79968 USA
[2] US Army, Res Lab, Image Proc Branch, Adelphi, MD USA
Source
2018 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2018) | 2018
Keywords
Adversarial Examples; Adversarial Machine Learning; Multispectral Imagery; Defenses;
DOI
Not available
CLC Classification Number
TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0809
Abstract
Machine learning (ML) models based on RGB images are vulnerable to adversarial attacks, representing a potential cyber threat to the user. Adversarial examples are inputs maliciously constructed to induce errors in ML systems at test time. Recently, researchers showed that such attacks can also be successfully applied at test time to ML models based on multispectral imagery, suggesting this threat is likely to extend to the hyperspectral data space as well. Military communities across the world continue to grow their investment portfolios in multispectral and hyperspectral remote sensing while expressing their interest in machine-learning-based systems. This paper aims to increase the military community's awareness of the adversarial threat and to propose ML training strategies and resilient solutions for state-of-the-art artificial neural networks. Specifically, the paper introduces an adversarial detection network that exploits domain-specific knowledge of material response in the shortwave infrared spectrum, and a framework that jointly integrates an automatic band selection method for multispectral imagery with adversarial training and adversarial spectral rule-based detection. Experimental results show the effectiveness of the approach on an automatic semantic segmentation task using DigitalGlobe's 16-band WorldView-3 satellite imagery.
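To make the threat model concrete, the following is a minimal, self-contained sketch of how an adversarial example can be crafted, using a fast-gradient-sign (FGSM-style) perturbation against a toy logistic classifier on a 4-band "pixel". This is an illustrative assumption for exposition only, not the paper's detection network or band-selection framework; all weights, inputs, and the `eps` budget are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Perturb x in the direction that increases the cross-entropy
    loss of a logistic classifier p = sigmoid(w.x + b)."""
    p = sigmoid(w @ x + b)       # predicted probability of class 1
    grad_x = (p - y_true) * w    # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy 4-band input, correctly classified as class 1 by the clean model.
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.0
x = np.array([0.8, -0.3, 0.2, 0.6])

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.5)

print(sigmoid(w @ x + b) > 0.5)      # clean input: predicted class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input: prediction flips
```

The same sign-of-gradient principle scales to deep segmentation networks on multispectral imagery; defenses such as the adversarial training and spectral rule-based detection discussed in the abstract aim to make predictions robust to exactly this kind of bounded perturbation.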
Pages: 553-558 (6 pages)