Safe Machine Learning and Defeating Adversarial Attacks

Cited by: 23
Authors
Rouhani, Bita Darvish [1 ]
Samragh, Mohammad [2 ]
Javidi, Tara [2 ]
Koushanfar, Farinaz [2 ]
Affiliations
[1] Univ Calif San Diego, La Jolla, CA 92093 USA
[2] Univ Calif San Diego, Dept Elect & Comp Engn, La Jolla, CA 92093 USA
Funding
US National Science Foundation;
DOI
10.1109/MSEC.2018.2888779
Chinese Library Classification
TP [Automation and computer technology];
Discipline code
0812;
Abstract
Adversarial attacks have exposed the unreliability of machine-learning (ML) models for decision making in autonomous agents. This article discusses recent research for ML model assurance in the face of adversarial attacks.
Pages: 31-38
Page count: 8