Mixed-Privacy Forgetting in Deep Networks

Cited by: 56
Authors
Golatkar, Aditya [1 ,2 ]
Achille, Alessandro [1 ]
Ravichandran, Avinash [1 ]
Polito, Marzia [1 ]
Soatto, Stefano [1 ]
Affiliations
[1] Amazon Web Serv, Seattle, WA 98109 USA
[2] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021
DOI
10.1109/CVPR46437.2021.00085
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We show that the influence of a subset of the training samples can be removed - or "forgotten" - from the weights of a network trained on large-scale image classification tasks, and we provide strong computable bounds on the amount of remaining information after forgetting. Inspired by real-world applications of forgetting techniques, we introduce a novel notion of forgetting in a mixed-privacy setting, where we know that a "core" subset of the training samples does not need to be forgotten. While this variation of the problem is conceptually simple, we show that working in this setting significantly improves the accuracy and guarantees of forgetting methods applied to vision classification tasks. Moreover, our method allows efficient removal of all information contained in non-core data by simply setting a subset of the weights to zero, with minimal loss in performance. We achieve these results by replacing a standard deep network with a suitable linear approximation. With opportune changes to the network architecture and training procedure, we show that such a linear approximation achieves performance comparable to the original network and that the forgetting problem becomes quadratic and can be solved efficiently even for large models. Unlike previous forgetting methods for deep networks, ours can achieve close to state-of-the-art accuracy on large-scale vision tasks. In particular, we show that our method allows forgetting without having to trade off model accuracy.
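The abstract's key observation is that once the network is replaced by a linear approximation, the training loss becomes quadratic in the weights, so the minimizer has a closed form and forgetting reduces to re-solving a quadratic problem on the retained data. The following sketch illustrates this with a ridge-regression stand-in for the linearized model (the data, sizes, and regularizer are made up, and this is not the paper's implementation): a Woodbury downdate removes the forgotten samples' contribution to the cached solution, and the result coincides exactly with retraining from scratch on the retained set.

```python
import numpy as np

# Illustrative sketch only: exact forgetting in a quadratic (ridge) problem.
rng = np.random.default_rng(0)
n, d, lam = 100, 5, 1e-2            # samples, features, ridge strength (made-up)

X = rng.normal(size=(n, d))         # stand-in for fixed linearized features
y = rng.normal(size=n)              # stand-in targets

def ridge(X, y, lam):
    """Closed-form minimizer of ||Xw - y||^2 + lam ||w||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Quantities cached at training time on the full dataset.
A = X.T @ X + lam * np.eye(d)
A_inv = np.linalg.inv(A)
b = X.T @ y

# Forget the first 10 samples by downdating A^{-1} via the Woodbury identity:
# (A - Xf^T Xf)^{-1} = A^{-1} + A^{-1} Xf^T (I - Xf A^{-1} Xf^T)^{-1} Xf A^{-1}
forget = np.arange(10)
keep = np.setdiff1d(np.arange(n), forget)
Xf, yf = X[forget], y[forget]
M = np.eye(len(forget)) - Xf @ A_inv @ Xf.T
A_inv_new = A_inv + A_inv @ Xf.T @ np.linalg.solve(M, Xf @ A_inv)
w_forgotten = A_inv_new @ (b - Xf.T @ yf)

# The downdated weights match retraining from scratch on the retained data,
# so no information about the forgotten samples remains in them.
w_retrain = ridge(X[keep], y[keep], lam)
assert np.allclose(w_forgotten, w_retrain)
```

A deep network's loss is not quadratic, which is why the paper first linearizes the model; the sketch above only captures the resulting quadratic forgetting step, not the linearization itself.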
Pages: 792 - 801
Page count: 10