Adversarial Examples to Fool Iris Recognition Systems

Cited by: 1
|
Authors
Soleymani, Sobhan [1 ]
Dabouei, Ali [1 ]
Dawson, Jeremy [1 ]
Nasrabadi, Nasser M. [1 ]
Affiliations
[1] West Virginia Univ, Morgantown, WV 26506 USA
Source
2019 INTERNATIONAL CONFERENCE ON BIOMETRICS (ICB) | 2019
Funding
U.S. National Science Foundation;
DOI
10.1109/icb45273.2019.8987389
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202 ;
Abstract
Adversarial examples have recently proven able to fool deep learning methods by adding carefully crafted small perturbations to the input-space image. In this paper, we study the possibility of generating adversarial examples for code-based iris recognition systems. Since generating adversarial examples requires back-propagation of the adversarial loss, conventional filter-bank-based iris-code generation frameworks cannot be employed in such a setup. Therefore, to compensate for this shortcoming, we propose to train a deep auto-encoder surrogate network to mimic the conventional iris code generation procedure. This trained surrogate network is then deployed to generate the adversarial examples using the iterative gradient sign method algorithm [15]. We consider non-targeted and targeted attacks through three attack scenarios. Considering these attacks, we study the possibility of fooling an iris recognition system in white-box and black-box frameworks.
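The attack the abstract names, the iterative gradient sign method, can be sketched as follows. This is a minimal, hypothetical illustration: a linear logistic model stands in for the paper's trained auto-encoder surrogate network, and the gradient is computed analytically rather than by back-propagation through a deep network; step size, budget, and all weights are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Hypothetical stand-in for the trained surrogate network: a linear
# logistic classifier with random weights (NOT the paper's auto-encoder).
rng = np.random.default_rng(0)
w = rng.normal(size=64)
b = 0.0

def predict(x):
    """Probability that input x is classified as the genuine class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def grad_loss(x, y):
    """Analytic gradient of the cross-entropy loss w.r.t. the input x."""
    return (predict(x) - y) * w

def bim_attack(x, y, eps=0.1, alpha=0.01, steps=20):
    """Iterative gradient sign method: repeated small signed-gradient
    steps, clipped to an eps-ball around x and to the valid [0, 1] range."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv, y))  # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)              # stay within budget
        x_adv = np.clip(x_adv, 0.0, 1.0)                      # valid pixel range
    return x_adv

x = np.clip(rng.normal(0.5, 0.1, size=64), 0.0, 1.0)  # toy normalized input
x_adv = bim_attack(x, y=1.0)                          # non-targeted attack
print(predict(x), predict(x_adv))                     # score for the true class drops
```

This sketch covers the non-targeted case (increasing the loss for the true label); a targeted attack would instead descend the loss toward a chosen target identity.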
Pages: 8
Related Papers
50 records
  • [1] Transferable adversarial examples can efficiently fool topic models
    Wang, Zhen
    Zheng, Yitao
    Zhu, Hai
    Yang, Chang
    Chen, Tianyi
    COMPUTERS & SECURITY, 2022, 118
  • [2] On Brightness Agnostic Adversarial Examples Against Face Recognition Systems
    Singh, Inderjeet
    Momiyama, Satoru
    Kakizaki, Kazuya
    Araki, Toshinori
    PROCEEDINGS OF THE 20TH INTERNATIONAL CONFERENCE OF THE BIOMETRICS SPECIAL INTEREST GROUP (BIOSIG 2021), 2021, 315
  • [3] Digital Watermark Perturbation for Adversarial Examples to Fool Deep Neural Networks
    Feng, Shiyu
    Feng, Feng
    Xu, Xiao
    Wang, Zheng
    Hu, Yining
    Xie, Lizhe
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [4] Powerful Physical Adversarial Examples Against Practical Face Recognition Systems
    Singh, Inderjeet
    Araki, Toshinori
    Kakizaki, Kazuya
    2022 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW 2022), 2022, : 301 - 310
  • [5] ADVERSARIAL-PLAYGROUND: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning
    Norton, Andrew P.
    Qi, Yanjun
    2017 IEEE SYMPOSIUM ON VISUALIZATION FOR CYBER SECURITY (VIZSEC), 2017,
  • [6] Adversarial Examples Improve Image Recognition
    Xie, Cihang
    Tan, Mingxing
    Gong, Boqing
    Wang, Jiang
    Yuille, Alan L.
    Le, Quoc V.
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 816 - 825
  • [7] Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently
    Waseda, Futa
    Nishikawa, Sosuke
    Trung-Nghia Le
    Nguyen, Huy H.
    Echizen, Isao
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 1360 - 1368
  • [8] Defending Against Adversarial Iris Examples Using Wavelet Decomposition
    Soleymani, Sobhan
    Dabouei, Ali
    Dawson, Jeremy
    Nasrabadi, Nasser M.
    2019 IEEE 10TH INTERNATIONAL CONFERENCE ON BIOMETRICS THEORY, APPLICATIONS AND SYSTEMS (BTAS), 2019,
  • [9] Adversarial Examples that Fool both Computer Vision and Time-Limited Humans
    Elsayed, Gamaleldin F.
    Shankar, Shreya
    Cheung, Brian
    Papernot, Nicolas
    Kurakin, Alexey
    Goodfellow, Ian
    Sohl-Dickstein, Jascha
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [10] Examples of Artificial Perceptions in Optical Character Recognition and Iris Recognition
    Noaica, Cristina Madalina
    Badea, Robert
    Motoc, Julia Maria
    Ghica, Claudiu Gheorghe
    Rosoiu, Alin Cristian
    Popescu-Bodorin, Nicolaie
    SOFT COMPUTING APPLICATIONS, 2013, 195 : 57 - 69