On the Defense Against Adversarial Examples Beyond the Visible Spectrum

Cited by: 0
Authors
Ortiz, Anthony [1 ]
Fuentes, Olac [1 ]
Rosario, Dalton [2 ]
Kiekintveld, Christopher [1 ]
Affiliations
[1] Univ Texas El Paso, Dept Comp Sci, El Paso, TX 79968 USA
[2] US Army, Res Lab, Image Proc Branch, Adelphi, MD USA
Source
2018 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2018) | 2018
Keywords
Adversarial Examples; Adversarial Machine Learning; Multispectral Imagery; Defenses;
DOI
Not available
Chinese Library Classification (CLC)
TN [Electronic technology, communication technology];
Discipline classification code
0809;
Abstract
Machine learning (ML) models based on RGB images are vulnerable to adversarial attacks, representing a potential cyber threat to the user. Adversarial examples are inputs maliciously constructed to induce errors in ML systems at test time. Recently, researchers showed that such attacks can also be successfully applied at test time to ML models based on multispectral imagery, suggesting this threat is likely to extend to the hyperspectral data space as well. Military communities across the world continue to grow their investment portfolios in multispectral and hyperspectral remote sensing while expressing their interest in machine-learning-based systems. This paper aims to increase the military community's awareness of the adversarial threat and to propose ML training strategies and resilient solutions for state-of-the-art artificial neural networks. Specifically, the paper introduces an adversarial detection network that exploits domain-specific knowledge of material response in the shortwave infrared spectrum, and a framework that integrates an automatic band-selection method for multispectral imagery with adversarial training and adversarial spectral rule-based detection. Experimental results show the effectiveness of the approach on an automatic semantic segmentation task using DigitalGlobe's WorldView-3 16-band satellite imagery.
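The paper's detector and band-selection pipeline are not reproduced here, but a minimal sketch of the generic building blocks the abstract refers to, an FGSM-style adversarial perturbation and one adversarial-training step on 16-band tiles, may help clarify the threat model. The network architecture, epsilon, number of classes, and tile size below are illustrative assumptions, not the authors' configuration.

# Minimal sketch (not from the paper): FGSM-style adversarial example on a
# 16-band multispectral tile, plus a single adversarial-training step.
# Model, band count, epsilon, and class count are illustrative assumptions.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.03):
    # Craft an adversarial version of x with one signed-gradient step
    # that increases the classification loss on each spectral band.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    # One update on a mixed clean/adversarial batch (standard adversarial training).
    x_adv = fgsm_attack(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * (nn.functional.cross_entropy(model(x), y)
                  + nn.functional.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a per-pixel classifier over 16 spectral bands, a stand-in for a
# semantic-segmentation head on WorldView-3-like imagery.
model = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(32, 5, 1))        # 5 hypothetical land-cover classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(4, 16, 64, 64)                     # batch of 16-band tiles
y = torch.randint(0, 5, (4, 64, 64))              # per-pixel labels
print(adversarial_training_step(model, optimizer, x, y))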
Pages: 553-558
Number of pages: 6
Related papers
50 in total
  • [31] Developing a Robust Defensive System against Adversarial Examples Using Generative Adversarial Networks
    Taheri, Shayan
    Khormali, Aminollah
    Salem, Milad
    Yuan, Jiann-Shiun
    BIG DATA AND COGNITIVE COMPUTING, 2020, 4 (02) : 1 - 15
  • [32] Minority Reports Defense: Defending Against Adversarial Patches
    McCoyd, Michael
    Park, Won
    Chen, Steven
    Shah, Neil
    Roggenkemper, Ryan
    Hwang, Minjune
    Liu, Jason Xinyu
    Wagner, David
    APPLIED CRYPTOGRAPHY AND NETWORK SECURITY WORKSHOPS, ACNS 2020, 2020, 12418 : 564 - 582
  • [33] Feature decoupling and interaction network for defending against adversarial examples
    Wang, Weidong
    Li, Zhi
    Liu, Shuaiwei
    Zhang, Li
    Yang, Jin
    Wang, Yi
    IMAGE AND VISION COMPUTING, 2024, 144
  • [34] Vulnerability Evaluation of Android Malware Detectors against Adversarial Examples
    Ijas, A. H.
    Vinod, P.
    Zemmari, Akka
    Harikrishnan
    Poulose, Godvin
    Jose, Don
    Mercaldo, Francesco
    Martinelli, Fabio
    Santone, Antonella
    KNOWLEDGE-BASED AND INTELLIGENT INFORMATION & ENGINEERING SYSTEMS (KSE 2021), 2021, 192 : 3320 - 3331
  • [35] Defending Network IDS against Adversarial Examples with Continual Learning
    Kozal, Jedrzej
    Zwolinska, Justyna
    Klonowski, Marek
    Wozniak, Michal
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 60 - 69
  • [36] Defending against adversarial examples using perceptual image hashing
    Wu, Ke
    Wang, Zichi
    Zhang, Xinpeng
    Tang, Zhenjun
    JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (02)
  • [37] On Brightness Agnostic Adversarial Examples Against Face Recognition Systems
    Singh, Inderjeet
    Momiyama, Satoru
    Kakizaki, Kazuya
    Araki, Toshinori
    PROCEEDINGS OF THE 20TH INTERNATIONAL CONFERENCE OF THE BIOMETRICS SPECIAL INTEREST GROUP (BIOSIG 2021), 2021, 315
  • [38] Adversarial Examples Against Deep Neural Network based Steganalysis
    Zhang, Yiwei
    Zhang, Weiming
    Chen, Kejiang
    Liu, Jiayang
    Liu, Yujia
    Yu, Nenghai
    PROCEEDINGS OF THE 6TH ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY (IH&MMSEC'18), 2018, : 67 - 72
  • [39] DeT: Defending Against Adversarial Examples via Decreasing Transferability
    Li, Changjiang
    Weng, Haiqin
    Ji, Shouling
    Dong, Jianfeng
    He, Qinming
    CYBERSPACE SAFETY AND SECURITY, PT I, 2020, 11982 : 307 - 322
  • [40] Feature Distillation in Deep Attention Network Against Adversarial Examples
    Chen, Xin
    Weng, Jian
    Deng, Xiaoling
    Luo, Weiqi
    Lan, Yubin
    Tian, Qi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (07) : 3691 - 3705